
Re: [Xen-devel] poor domU VBD performance.



On Monday 28 March 2005 14:14, Ian Pratt wrote:
> > > > > I found out that dom0 file-system IO and raw IO (using dd as a
> > > > > tool to test throughput from the disk) are almost exactly the
> > > > > same as with a standard Linux kernel without Xen. But raw IO
> > > > > from domU to an unused disk (a second disk in the system) is
> > > > > limited to forty percent of the speed I get within dom0.
> >
> > Is the second disk exactly the same as the first one?  I'll
> > try an IO test
> > here on the same disk array with dom0 and domU and see what I get.
>
> I've reproduced the problem and it's a real issue.
>
> It only affects reads, and is almost certainly down to how the blkback
> driver passes requests down to the actual device.
>
> Does anyone on the list actually understand the changes made to linux
> block IO between 2.4 and 2.6?
>
> In the 2.6 blkfront there is no run_task_queue() to flush requests to
> the lower layer, and we use submit_bio() instead of 2.4's
> generic_make_request(). It looks like this is happening synchronously
> rather than queueing multiple requests. What should we be doing to cause
> things to be batched?
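
The batching behaviour being asked about can be illustrated with a toy
userspace model of 2.6-style queue "plugging" (purely a sketch; ToyQueue
and its methods are invented for illustration, not kernel API):

```python
# Toy model of request batching via queue "plugging" -- not kernel code.
# While the queue is plugged, requests accumulate; unplugging dispatches
# the whole batch at once, giving the IO scheduler adjacent requests to
# merge and sort. Unplugged, each submit goes straight down on its own.
class ToyQueue:
    def __init__(self):
        self.plugged = False
        self.pending = []   # requests held while the queue is plugged
        self.batches = []   # what actually got dispatched, batch by batch

    def plug(self):
        self.plugged = True

    def unplug(self):
        # Dispatch everything accumulated so far as one batch, sorted by
        # sector (a crude stand-in for what the elevator does).
        self.plugged = False
        if self.pending:
            self.batches.append(sorted(self.pending))
            self.pending = []

    def submit(self, sector):
        self.pending.append(sector)
        if not self.plugged:
            self.unplug()   # no plugging: dispatch one request at a time

# Unplugged: each submit is dispatched on its own (the "synchronous" case).
q = ToyQueue()
for s in (8, 0, 16):
    q.submit(s)
print(q.batches)            # [[8], [0], [16]]

# Plugged: requests accumulate and go down as one sorted batch.
q2 = ToyQueue()
q2.plug()
for s in (8, 0, 16):
    q2.submit(s)
q2.unplug()
print(q2.batches)           # [[0, 8, 16]]
```

The single merged batch in the second case is what gives the lower layers
a chance to reorder and coalesce reads; dispatching one request at a time
forfeits that.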

There are multiple IO schedulers in 2.6.  Do you know which one is being used?  
It should say somewhere in the boot log.  Some read-ahead code also changed 
in the 2.6.10-11 range.
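
Besides the boot log, recent 2.6 kernels also expose the scheduler via
sysfs at /sys/block/<device>/queue/scheduler, where the active one is
shown in brackets. A small sketch of parsing that string (the sample
string below is made up for illustration):

```python
# Extract the active IO scheduler from a sysfs scheduler string.
# On a real system the string would be read from
# /sys/block/<device>/queue/scheduler; the active entry is bracketed.
def active_scheduler(s):
    return s[s.index("[") + 1:s.index("]")]

# Sample string, invented for this example:
print(active_scheduler("noop [anticipatory] deadline cfq"))  # anticipatory
```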

So far I have not been able to reproduce this in xen-unstable with 2.6.  I am 
building xen-2.0.5 for a look.

-Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
