This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] How to Allocate Disk Bandwidth among VMs?

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] How to Allocate Disk Bandwidth among VMs?
From: Tom Creck <tom-xen@xxxxxxxxxxx>
Date: Tue, 16 Jun 2009 01:36:54 -0700 (PDT)
Delivery-date: Tue, 16 Jun 2009 01:37:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090616.104538.189700018.ryov@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <23993255.post@xxxxxxxxxxxxxxx> <20090616.104538.189700018.ryov@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thank you very much for your reply. I will try dm-ioband for Xen disk I/O QoS.

Actually, I have also been surveying approaches to Xen disk I/O QoS. As I
see it, there are two kinds of solution.

The first kind works at the frontend/backend driver level, which avoids
relying on any specific guest OS. There are two patches for this purpose:
the first is the Xen I/O manager, and the second is a token-based resource
limitation in the backend driver.

However, my latest Xen 3.3.0 includes neither. Does Xen 3.3.0 incorporate
any disk I/O QoS mechanism?

The second kind of solution, like the one you suggest, relies on the
Domain0 kernel's scheduling of the backend driver's kernel threads, e.g.
dm-ioband, or ionice for per-process scheduling.

I tried ionice, but it had no effect; I don't know why.
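One detail worth checking, for what it's worth: ionice priorities only take
effect when the disk uses the CFQ I/O scheduler, which could explain seeing
no effect. A minimal sketch of the check (the device name "sda" and the
"tapdisk" process name are assumptions; adjust for the actual disk and
backend process):

```shell
#!/bin/sh
# Sketch: ionice only takes effect under the CFQ I/O scheduler, so first
# check which scheduler the backing disk uses. ("sda" is an assumption;
# use the disk that holds the VBD files.)
sched="$(cat /sys/block/sda/queue/scheduler 2>/dev/null || echo 'unknown')"
case "$sched" in
  *"[cfq]"*) echo "cfq active: ionice priorities will apply" ;;
  *)         echo "cfq not active ($sched): ionice has no effect" ;;
esac
# If cfq is active, lower the I/O priority of the backend process serving
# a domU, e.g. (process name varies by Xen version and VBD type):
#   ionice -c3 -p "$(pgrep -f tapdisk | head -n1)"
```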

Is there any other idea or implementation for disk I/O QoS in Xen?
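For my own notes, a rough sketch of what the dm-ioband setup for this case
might look like (the device path and weight are assumptions on my part; the
dm-ioband examples page linked in the quoted reply below is the
authoritative reference for the table syntax):

```shell
#!/bin/sh
# Sketch only (assumed device path and weight; see the dm-ioband examples
# page for the authoritative syntax).
DEV=/dev/sdb1            # disk holding the domU VBD files (assumption)
WEIGHT=100               # weight for this ioband device (assumption)
if command -v blockdev >/dev/null 2>&1 && [ -b "$DEV" ]; then
  SIZE="$(blockdev --getsize "$DEV" 2>/dev/null)"
fi
[ -n "${SIZE:-}" ] || SIZE=1000   # placeholder so the table can be shown
# device-mapper table, per the dm-ioband examples:
#   start length ioband <dev> <id> <throttle> <limit> none weight 0 :<weight>
TABLE="0 $SIZE ioband $DEV 1 0 0 none weight 0 :$WEIGHT"
echo "$TABLE"
# With the dm-ioband patch installed, the device would be created with:
#   echo "$TABLE" | dmsetup create ioband1
# then each domU's VBD files go on an ioband device, and each domain gets
# a bandwidth share proportional to its weight.
```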

Ryo Tsuruta wrote:
> Hi Tom,
> # I'm sorry, I sent an empty e-mail a while ago.
>>       I want to do disk I/O rate control over VMs. Therefore, I want to
>> allocate different disk I/O bandwidth for different Xen VMs on my host
>> machine. All domainUs use file-backed VBDs stored in domain0's file
>> system.
>>       Do you have any idea to do that? Hopefully it can be done by
>> modifying
>> Xend in domain0.
> You can use dm-ioband for this purpose. dm-ioband is an I/O bandwidth
> controller implemented as a device-mapper driver and can control
> bandwidth on per partition, per user, per process basis.
> In this case, install dm-ioband on the host OS, create a
> dm-ioband device on the disk that stores domainU's VBD files,
> and then assign bandwidth (determined in proportion to the weight
> of each disk) to each virtual machine.
> There is an example configuration available at:
> "Example #5: Bandwidth control for Xen blktap devices"
> http://sourceforge.net/apps/trac/ioband/wiki/dm-ioband/man/examples
> Please see the following URL for more information; kernel patch files
> and binary packages for RHEL5 and CentOS5 are available.
> http://sourceforge.net/apps/trac/ioband/wiki/dm-ioband
> Please feel free to ask me if you have any questions.
> Thanks,
> Ryo Tsuruta
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

Sent from the Xen - Dev mailing list archive at Nabble.com.
