Re: [Xen-devel] Re: poor domU VBD performance.



On Tuesday 29 March 2005 02:13, Ian Pratt wrote:
> > It looks like there might be a problem where we are not getting a
> > timely response back from the dom0 VBD driver that the I/O request
> > is complete, which limits the number of outstanding requests to a
> > level which cannot keep the disk well utilized.  If you drive enough
> > outstanding I/O requests (which can be done either with O_DIRECT and
> > large requests, or with a much larger readahead setting for buffered
> > I/O), it's not an issue.
>
> Andrew, please could you try this with a 2.4 dom0, 2.6 domU.

2.4 might take a little while for me, as I am running Fedora Core 3 with udev.
If anyone has an easy way to get around the hotplug/udev stuff, then I can
do this.
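
For concreteness, the large-request O_DIRECT case described above looks
roughly like the sketch below.  This is just an illustration, not my actual
test harness; the device path and request size are placeholders.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/hda1";   /* placeholder device */
        size_t reqsz = 1024 * 1024;      /* e.g. 1MB per request */
        void *buf;
        ssize_t n;
        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }
        /* O_DIRECT requires an aligned buffer */
        if (posix_memalign(&buf, 4096, reqsz) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        /* sequential read; each large request keeps more I/O
           outstanding per syscall than small buffered reads would */
        while ((n = read(fd, buf, reqsz)) > 0)
            ;
        if (n < 0) perror("read");
        free(buf);
        close(fd);
        return 0;
    }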

I did run a sequential read on a single disk again (using the noop I/O
scheduler in both domains) at various request sizes with O_DIRECT while
capturing iostat output.  The results are interesting.  I have included the
data in an attached file because it would just line-wrap and be unreadable
in this email text.  Notice the service/completion times for the domU tests.
It looks as if the I/O request queue is being plugged for a minimum of 10ms
in dom0.  The merges happening for >4K requests in dom0 (while it hosts the
domU's I/O) seem to support this.
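
One way to check the plugging theory directly would be to time individual
small O_DIRECT reads from the domU; a hard floor of roughly 10ms per
request would match.  Again a rough sketch only, with placeholder device
path and sizes:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/hda1";   /* placeholder device */
        size_t reqsz = 4096;             /* small requests expose a latency floor */
        void *buf;
        off_t off = 0;
        int i, fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }
        if (posix_memalign(&buf, 4096, reqsz) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        for (i = 0; i < 64; i++, off += reqsz) {
            struct timeval t0, t1;
            long us;
            gettimeofday(&t0, NULL);
            if (pread(fd, buf, reqsz, off) != (ssize_t)reqsz) {
                perror("pread");
                break;
            }
            gettimeofday(&t1, NULL);
            us = (t1.tv_sec - t0.tv_sec) * 1000000L
                 + (t1.tv_usec - t0.tv_usec);
            /* a consistent ~10000us minimum here would point at
               the request queue being plugged in dom0 */
            printf("req %2d: %ld us\n", i, us);
        }
        free(buf);
        close(fd);
        return 0;
    }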

-Andrew

Attachment: rawio-comp
Description: Text document
