
Re: [Xen-devel] Re: [Qemu-devel] [PATCH] qemu and qemu-xen: support empty write barriers in xen_disk



On Wed, 24 Nov 2010, Christoph Hellwig wrote:
> On Wed, Nov 24, 2010 at 10:18:40AM -0800, Jeremy Fitzhardinge wrote:
> > Linux wants is a useful thing to do and implement (especially since it
> > amounts to standardising the *BSD extension).  I'm not sure of their
> > precise semantics (esp WRT ordering), but I think it's already OK.
> 
> The nice bit is that a pure flush does not imply any ordering at all.
> Which is how the current qemu driver implements the barrier requests
> anyway, so that needs some fixing.
> 
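(As a standalone illustration of that distinction, using plain POSIX calls
rather than anything from qemu or xen_disk, and with a made-up file name:
a cache flush only makes writes that have already completed durable, while
barrier semantics additionally require draining the request queue around
the flush.)

/* Standalone sketch, not qemu code: pure flush vs. barrier ordering. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* "disk.img" is just a made-up file name for the example. */
    int fd = open("disk.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[512];
    memset(buf, 0xab, sizeof(buf));

    /* Pure flush: pwrite() has already returned, so fdatasync()
       guarantees this block is on stable storage.  It says nothing
       about the ordering of requests that are still in flight. */
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
    }
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
    }

    /* Barrier semantics would additionally require: drain all
       outstanding requests, flush, and only then submit later writes;
       that is strictly stronger than the flush alone. */
    close(fd);
    return 0;
}
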
> > (BTW, in case it wasn't clear, we're seriously considering - but not yet
> > committed to - using qemu as the primary PV block backend for Xen
> > instead of submitting the existing blkback code for upstream.  We still
> > need to do some proper testing and measuring to make sure it stacks up
> > OK, and work out how it would fit together with the rest of the
> > management stack.  But so far it looks promising.)
> 
> Good to know.  Besides the issue with barriers mentioned above there's
> a few things that need addressing in xen_disk, if you (or Stefano or
> Daniel) are interested:
> 
>  - remove the syncwrite tunable, as this is handled by the underlying
>    posix I/O code if needed by using O_DSYNC, which is a lot more
>    efficient.
>  - check whatever the issue with the use_aio codepath is and make it
>    the default.  It should help the performance a lot.
>  - Make sure to use bdrv_aio_flush for cache flushes in the aio
>    codepath, currently it still uses plain synchronous flushes.
 
All very good suggestions; I am adding them to my todo list, but Daniel
is very welcome to contribute as well :)
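
On the O_DSYNC point, here is a minimal standalone sketch, again plain
POSIX rather than the actual xen_disk code and with a made-up file name,
of what the syncwrite tunable boils down to once the open flag does the
work:

/* Opening the image with O_DSYNC makes every completed write durable
   without a separate fdatasync() per request, which is what a
   "sync every write" tunable otherwise has to do by hand. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_DSYNC: when pwrite() returns, the data (and the metadata
       needed to read it back) is already on stable storage. */
    int fd = open("disk.img", O_RDWR | O_CREAT | O_DSYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[512];
    memset(buf, 0, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
    }
    /* No explicit fdatasync() here: the open flag already provides
       the per-write durability the tunable was emulating. */

    close(fd);
    return 0;
}

In the aio codepath the analogous change is the last point above: issue
cache flushes through bdrv_aio_flush() so they complete asynchronously
like the writes, instead of using a plain synchronous flush.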
