
Re: [Xen-devel] poor domU VBD performance.



On Wed, Mar 30 2005, Ian Pratt wrote:
> > I'll check the xen block driver to see if there's anything 
> > else that sticks out.
> >
> > Jens Axboe
> 
> Jens, I'd really appreciate this.
> 
> The blkfront/blkback drivers have rather evolved over time, and I don't
> think any of the core team fully understand the block-layer differences
> between 2.4 and 2.6. 
> 
> There's also some junk left in there from when the backend was in Xen
> itself back in the days of 1.2, though Vincent has prepared a patch to
> clean this up, make 'refreshing' of vbds work (for size changes), and
> allow the blkfront driver to import whole disks rather than
> partitions. We had this functionality on 2.4, but lost it in the move
> to 2.6.
> 
> My bet is that the 2.6 backend is where the true performance bug
> lies. Using a 2.6 domU blkfront talking to a 2.4 dom0 blkback seems
> to give good performance under a wide variety of circumstances. Using
> a 2.6 dom0 is far more pernickety. I agree with Andrew: I suspect
> it's the work queue changes that are biting us when we don't have
> many outstanding requests.

For 2.6 kernels you never schedule the queues you submit the io
against; you only have a tq_disk run for 2.4 kernels. That basically
puts you at the mercy of the unplug timeout, which is really
suboptimal unless you can keep the io queue of the target busy at all
times.
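
To make the difference concrete, here's a rough sketch (illustrative
only, not code from the xen tree; bdev stands for whatever struct
block_device the backend is writing to) of what flushing plugged io
looks like on each kernel:

	/* 2.4: one global task queue flushes every plugged device */
	run_task_queue(&tq_disk);

	/* 2.6: plugging is per-queue and there is no global flush.
	 * If nobody kicks the target queue, queued io just sits
	 * there until the unplug timer (3ms by default) fires. */
	blk_run_queue(bdev_get_queue(bdev));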

You need to either mark the last bio going to that device as BIO_SYNC,
or do a blk_run_queue() on the target queue after having submitted all
io in this batch for it.
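
In blkback's dispatch path that might look something like the sketch
below. dispatch_batch() and its arguments are made-up names purely for
illustration; submit_bio(), bdev_get_queue() and blk_run_queue() are
the real 2.6 interfaces, and the sync flag is spelled BIO_RW_SYNC in
linux/bio.h:

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	static void dispatch_batch(struct block_device *bdev,
				   struct bio *bios[], int nbios, int rw)
	{
		int i;

		for (i = 0; i < nbios; i++) {
			/* Option 1: flag the final bio of the batch as
			 * sync, so the block layer unplugs the queue as
			 * soon as it is queued instead of waiting for
			 * the unplug timer. */
			if (i == nbios - 1)
				bios[i]->bi_rw |= (1 << BIO_RW_SYNC);
			submit_bio(rw, bios[i]);
		}

		/* Option 2: explicitly run the target queue once the
		 * whole batch has been submitted. Either option on its
		 * own is enough; doing both is harmless. */
		blk_run_queue(bdev_get_queue(bdev));
	}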

-- 
Jens Axboe

