This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: poor domU VBD performance.

To: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>, "Peter Bier" <peter_bier@xxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: poor domU VBD performance.
From: Andrew Theurer <habanero@xxxxxxxxxx>
Date: Tue, 29 Mar 2005 12:39:42 -0600
Delivery-date: Tue, 29 Mar 2005 18:39:49 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E38DB@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E38DB@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.5
On Tuesday 29 March 2005 02:13, Ian Pratt wrote:
> > It looks like there might be a problem were we are not
> > getting a timely
> > response back from dom0 VBD driver that the io request is
> > complete, which
> > limits the number of outstanding requests to a level which
> > cannot keep the
> > disk utilized well.  If you drive enough IO outstanding
> > requests (which can
> > be done with either o-direct with large request or a much
> > larger readahead
> > setting with buffered IO), it's not an issue.
> Andrew, please could you try this with a 2.4 dom0, 2.6 domU.

2.4 might be a little while for me, as I am running Fedora Core 3 with udev.
If anyone has an easy way to get around the hotplug/udev stuff, then I can
do this.

I did run a sequential read on a single disk again (using noop IO schedulers
in both domains) with various request sizes with o_direct while capturing
iostat output.  The results are interesting.  I have included the data in a
file because it would just line wrap and be unreadable in this email text.
Notice the service commit times for the domU tests.  It's like the IO request
queue is being plugged for a minimum of 10ms in dom0.  Merges happening for
>4K requests in dom0 (while hosting domU's IO) seem to support this.
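For anyone wanting to reproduce this, a sketch of the test procedure follows. The device name (/dev/xvdb), block sizes, and count are assumptions for illustration, not taken from the attached data; the function only echoes the dd commands so it is safe to run anywhere (drop the leading `echo` to perform real reads, and run `iostat -x 1 > iostat.log &` alongside to capture the stats):

```shell
#!/bin/sh
# Hypothetical sketch of a sequential-read benchmark with O_DIRECT at
# varying request sizes, as described above.  Device and sizes are
# placeholders, not from the original report.

run_seq_read_test() {
    dev=$1
    # noop scheduler would be set in both dom0 and domU, e.g.:
    #   echo noop > /sys/block/$(basename $dev)/queue/scheduler
    for bs in 4k 16k 64k 256k; do
        # echo instead of executing so the sketch is harmless; remove
        # the echo to actually issue direct sequential reads
        echo dd if=$dev of=/dev/null bs=$bs count=1024 iflag=direct
    done
}

run_seq_read_test /dev/xvdb
```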


Attachment: rawio-comp
Description: Text document
