xen-users

Re: [Xen-users] Re: [Xen-devel] VM disk I/O limit patch

To: Andrew Xu <xu.an@xxxxxxxxxx>
Subject: Re: [Xen-users] Re: [Xen-devel] VM disk I/O limit patch
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Wed, 22 Jun 2011 10:39:01 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 22 Jun 2011 07:39:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110622221248.8CF3.3A8D29D5@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20110622200623.8CE4.3A8D29D5@xxxxxxxxxx> <20110622131121.GB8216@xxxxxxxxxxxx> <20110622221248.8CF3.3A8D29D5@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)
> > I am not convinced this will be easier to maintain than
> > using existing code (dm-ioband) that Linux kernel provides already.
> > 
> > Are there other technical reasons 'dm-ioband' is not sufficient?
> > Could 'dm-ioband' be fixed to not have those bugs? Florian
> > mentioned flush requests not passing through the DM layers,
> > but I am pretty sure those have been fixed.
> > 
> I haven't found dm-ioband's bugs myself, so I can't answer that question.
> 
> But shouldn't Xen VM disk I/O limiting be done by a Xen module?

Not all of a guest's I/O goes through xen-blkback. For example,
the tap and qcow backends are implemented inside QEMU, which has no
notion of xen-blkback (this is upstream QEMU BTW), so those disks
would not benefit from this patch at all.
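To make that concrete, consider a guest whose config mixes backends
(paths made up for illustration):

    disk = [ 'phy:/dev/vg0/guest-root,xvda,w',
             'tap:qcow:/var/lib/xen/images/guest-data.qcow,xvdb,w' ]

The 'phy:' disk is served by xen-blkback in the kernel, but the
'tap:qcow:' disk is serviced by QEMU, so a limit enforced only in
xen-blkback would throttle xvda while leaving xvdb unconstrained.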

Whereas if all of the "disks" assigned to a guest go through
dm-ioband, the system admin has a single interface that covers
_all_ of the I/O sinks. It is also standard enough that an admin
switching over from KVM to Xen does not have to alter much of
their infrastructure.
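Something like this (device names made up, and table syntax as I
recall it from the dm-ioband documentation, so double-check it there)
stacks an ioband device on top of the guest's backing device:

    # echo "0 $(blockdev --getsize /dev/vg0/guest-root) ioband" \
           "/dev/vg0/guest-root 1 0 0 none weight 0 :40" \
           | dmsetup create ioband-guest

Then you hand the guest 'phy:/dev/mapper/ioband-guest,xvda,w' instead
of the raw device, and every request to that disk, whichever backend
issues it, passes through the same dm layer.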

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel