

Re: [Xen-devel] Re: poor domU VBD performance.

> My dd command was always the same: "dd if=/dev/hdb6 bs=64k count=1000". It
> took 1.6 seconds on hdb6 and 2.2 seconds on hda1 when running in Dom0, and
> 4.6 seconds on hdb6 and 5.8 seconds on hda1 when running in DomU. I did one
> experiment with count=10000 and it took ten times as long in each of the
> four cases.
> I have done the following tests:
> DomU : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 301 sec
> DomU : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 370 sec
> Dom0 : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 115 sec
> Dom0 : dd if=/dev/hda1 of=/dev/null bs=1024k count=4000 ; duration 140 sec
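(For reference, 4000 x 1 MiB over those times works out to roughly 36 and 30 
MB/sec for the two dom0 runs, versus roughly 14 and 11 MB/sec for the two 
domU runs.)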

OK, I have reproduced this with both dd and O_DIRECT now.  For the O_DIRECT 
case I used the same effective request size that dd was issuing (128k) and got 
similar results.  My numbers are much worse because I am driving 14 disks:

dom0:   153.5 MB/sec
domU:    12.7 MB/sec
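
With a dd new enough to support iflag=direct, something along these lines 
should reproduce the O_DIRECT case (just a sketch; adjust the device and 
count to your setup):

   # 128k requests that bypass the page cache, so readahead does not help
   dd if=/dev/hdb6 of=/dev/null bs=128k count=8192 iflag=direct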

It looks like there might be a problem where we are not getting a timely 
response back from the dom0 VBD driver that an IO request is complete, which 
limits the number of outstanding requests to a level that cannot keep the 
disks well utilized.  If you drive enough outstanding IO requests (which can 
be done either with O_DIRECT and large requests, or with a much larger 
readahead setting for buffered IO), it is not an issue.
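
For example, something like this (again assuming iflag=direct support; the 
1MB request size is only an illustration) moves much more data per round 
trip to dom0 and should get closer to the dom0 numbers:

   dd if=/dev/hdb6 of=/dev/null bs=1024k count=1000 iflag=direct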

In the domU, can you try setting the readahead size to a much larger value 
using hdparm, something like hdparm -a 2028, and then run dd again?
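
I.e. something along these lines inside the domU (device names taken from 
the runs above; -a is in 512-byte sectors, so 2028 is roughly 1MB of 
readahead):

   hdparm -a 2028 /dev/hdb
   dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000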

