[Xen-devel] Re: poor domU VBD performance.
Andrew Theurer <habanero@us.ibm.com> writes:
>
> On Monday 28 March 2005 14:14, Ian Pratt wrote:
> > > > > > I found out that file-system IO and raw IO in dom0 (using
> > > > > > dd as a tool to test throughput from the disk) are almost
> > > > > > exactly the same as with a standard Linux kernel without
> > > > > > Xen. But raw IO from DomU to an unused disk (a second disk
> > > > > > in the system) is limited to forty percent of the speed I
> > > > > > get within Dom0.
> > >
> > > Is the second disk exactly the same as the first one? I'll try
> > > an IO test here on the same disk array with dom0 and domU and
> > > see what I get.
> >
> > I've reproduced the problem and it's a real issue. It only
> > affects reads, and is almost certainly down to how the blkback
> > driver passes requests to the actual device.
> >
> > Does anyone on the list actually understand the changes made to
> > Linux block IO between 2.4 and 2.6?
> >
> > In the 2.6 blkfront there is no run_task_queue() to flush requests
> > to the lower layer, and we use submit_bio() instead of 2.4's
> > generic_make_request(). It looks like this is happening
> > synchronously rather than queueing multiple requests. What should
> > we be doing to cause things to be batched?
>
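
To make Ian's question concrete: in 2.4 a driver queued requests with
generic_make_request() and then flushed them in one go with
run_task_queue(&tq_disk); in 2.6, submit_bio() likewise only queues,
and dispatch is left to request-queue plugging and the I/O scheduler.
A rough sketch, assuming 2.6.11-era block-layer signatures (the helper
below is illustrative, not actual blkfront code):

  /* Illustrative helper, not actual blkfront code. */
  #include <linux/bio.h>
  #include <linux/blkdev.h>

  static void submit_read_batch(struct bio *bios[], int n)
  {
          int i;

          /* submit_bio() replaces 2.4's generic_make_request(READ, bh);
           * it only queues each bio on the device's request queue. */
          for (i = 0; i < n; i++)
                  submit_bio(READ, bios[i]);

          /* 2.4 would now call run_task_queue(&tq_disk) to kick the
           * disk queues. In 2.6 there is no tq_disk: the queue stays
           * plugged until the plug timer fires (3 ms by default) or
           * until something blocks waiting on one of the pages, so
           * requests can be dispatched later than the driver expects. */
  }
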
> To my knowledge you cannot queue multiple bio requests at once. The
> IO schedulers should batch them up before submitting to the actual
> devices. I tried xen-2.0.5 and xen-unstable with a sequential read
> test using a 256k request size and 8 reader threads with O_DIRECT on
> an LVM RAID-0 SCSI array (no HW cache) and got:
>
> xen-2-dom0-2.6.10: 177 MB/sec
> xen-2-domU-2.6.10: 185 MB/sec
> xen-3-dom0-2.6.11: 177 MB/sec
> xen-3-domU-2.6.11: 185 MB/sec
>
> Better results with VBD :) I am wondering if going through two
> layers of IO schedulers streams the IO better. I was using the AS
> scheduler. I am going to try the noop scheduler and see what I get.
>
> What block size were you using with dd?
>
> -Andrew
>
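
For comparison, here is a minimal single-threaded sketch of the kind
of O_DIRECT sequential read test Andrew describes (his run used a
256k request size and 8 reader threads; the device path and file name
below are just examples):

  /* odread.c - sequential O_DIRECT read throughput sketch. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          const char *dev = argc > 1 ? argv[1] : "/dev/sdb";
          const size_t bs = 256 * 1024;   /* 256k per request */
          const int count = 4096;         /* 1 GiB total */
          struct timeval t0, t1;
          void *buf;
          int fd, i;

          if (posix_memalign(&buf, 4096, bs)) {
                  perror("posix_memalign");
                  return 1;
          }
          fd = open(dev, O_RDONLY | O_DIRECT);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }
          gettimeofday(&t0, NULL);
          for (i = 0; i < count; i++) {
                  /* O_DIRECT bypasses the page cache, so this measures
                   * the device (and, in domU, the VBD path) directly. */
                  if (read(fd, buf, bs) != (ssize_t)bs)
                          break;  /* error or end of device */
          }
          gettimeofday(&t1, NULL);
          double secs = (t1.tv_sec - t0.tv_sec)
                      + (t1.tv_usec - t0.tv_usec) / 1e6;
          printf("%.1f MB/sec\n", (double)i * bs / secs / 1e6);
          close(fd);
          return 0;
  }

Build with "cc -O2 odread.c -o odread" and run it (as root) against an
otherwise idle device; running eight copies concurrently approximates
the 8-thread test.
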
My dd command was always the same: "dd if=/dev/hdb6 bs=64k
count=1000". It took 1.6 seconds on hdb6 and 2.2 seconds on hda1 when
running in Dom0, and 4.6 seconds on hdb6 and 5.8 seconds on hda1 when
running in DomU. I did one experiment with count=10000 and it took
ten times as long in each of the four cases.
I have done the following tests:
DomU : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 301 sec
DomU : dd if=/dev/hda1 of=/dev/null bs=1024k count=4000 ; duration 370 sec
Dom0 : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 115 sec
Dom0 : dd if=/dev/hda1 of=/dev/null bs=1024k count=4000 ; duration 140 sec
That is 4000 MiB read per run, i.e. roughly 36 MB/sec in Dom0 versus
14 MB/sec in DomU on hdb6; DomU manages about 38 percent of the Dom0
rate, which matches the forty percent figure above.
Peter