
Re: [Xen-devel] poor domU VBD performance.



Hi Ian,

On Tue, Mar 29, 2005 at 07:09:50PM +0100, Ian Pratt wrote:
> We'd really appreciate your help on this, or from someone else at SuSE
> who actually understands the Linux block layer?

I'm Cc'ing Jens ...
 
> In the 2.6 blkfront driver, what scheduler should we be registering
> with? What should we be setting as max_sectors? Are there other
> parameters we should be setting that we aren't? (block size?)

I think noop is a good choice for secondary domains, since you don't
want to be too clever there; otherwise you stack one clever scheduler
on top of another. noop basically only does front- and back-merging
to make the request sizes larger.
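
Something along these lines in the blkfront queue setup should do.
This is an untested sketch, not necessarily the exact code in the
driver; xlvbd_init_blk_queue is just how I'd structure it:

    #include <linux/blkdev.h>
    #include <linux/elevator.h>
    #include <linux/genhd.h>

    /* The driver's request function and queue lock. */
    extern void do_blkif_request(request_queue_t *rq);
    extern spinlock_t blkif_io_lock;

    /* Sketch: create the blkfront request queue and pin it to the noop
     * elevator, so the only cleverness left in domU is front/back
     * merging. */
    static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
    {
        request_queue_t *rq;

        rq = blk_init_queue(do_blkif_request, &blkif_io_lock);
        if (rq == NULL)
            return -1;

        elevator_init(rq, "noop");  /* don't stack two smart schedulers */
        blk_queue_hardsect_size(rq, sector_size);

        gd->queue = rq;
        return 0;
    }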

But you probably should initialize the readahead sectors.

Please test the attached patch.

It fixed the problem for me, but my testing was very limited: I only
had a small loopback-mounted root fs to test with quickly.

Note that initializing it to 256 sectors (128k) would be OK as well (and
might be the better default); it looks as if it were already 256 (128k)
by default, but it is in fact never initialized. If you explicitly set
it to 256, performance still increases tremendously.
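
For reference, the core of what the patch does is essentially one line.
This is only a sketch (the attached diff is what I actually tested),
with rq being the blkfront request queue:

    #include <linux/blkdev.h>
    #include <linux/pagemap.h>

    /* Sketch: give the queue a real readahead window.  256 sectors of
     * 512 bytes = 128k, i.e. the value it appears to have by default
     * but never actually gets. */
    static void blkif_set_readahead(request_queue_t *rq)
    {
        rq->backing_dev_info.ra_pages = (256 * 512) / PAGE_CACHE_SIZE;
    }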

> In the blkback driver that actually issues the IO's in dom0, is there
> something we should be doing to cause IOs to get batched? In 2.4 we used
> a task_queue to push the IO through to the disk having queued it with
> generic_make_request(). In 2.6 we're currently using submit_bio() and
> just hoping that batching happens.

I don't think the blkback driver does anything wrong here.
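
For what it's worth, the batching you used to get from the 2.4
task_queue happens implicitly in 2.6: the queue is plugged when the
first bio goes in and the elevator merges whatever follows, so a plain
submit_bio() loop is fine. Very rough sketch of the shape of it
(made-up helper, one page per segment, not the actual blkback code):

    #include <linux/bio.h>
    #include <linux/blkdev.h>

    /* Sketch: issue one bio per segment and rely on per-queue plugging
     * plus the elevator to merge them into larger requests. */
    static void dispatch_segments(struct block_device *bdev, int rw,
                                  sector_t sector, struct page **pages,
                                  int nr_segs, bio_end_io_t *end_io,
                                  void *priv)
    {
        int i;

        for (i = 0; i < nr_segs; i++) {
            struct bio *bio = bio_alloc(GFP_KERNEL, 1);

            bio->bi_bdev    = bdev;
            bio->bi_sector  = sector;
            bio->bi_end_io  = end_io;
            bio->bi_private = priv;
            bio_add_page(bio, pages[i], PAGE_SIZE, 0);

            submit_bio(rw, bio);    /* merged while the queue is plugged */
            sector += PAGE_SIZE >> 9;
        }
    }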

Regards,
-- 
Kurt Garloff, Director SUSE Labs, Novell Inc.

Attachment: xen-blkfront-ra.diff
Description: Text document

Attachment: pgpX4PzDwiK16.pgp
Description: PGP signature

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

