
Re: [Xen-devel] [PATCH][RFC] open HVM backing storage with O_SYNC


  • To: "Rik van Riel" <riel@xxxxxxxxxx>
  • From: "Christian Limpach" <christian.limpach@xxxxxxxxx>
  • Date: Sat, 29 Jul 2006 01:44:59 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 28 Jul 2006 17:45:24 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 7/28/06, Rik van Riel <riel@xxxxxxxxxx> wrote:
> Rik van Riel wrote:
>
>> Any comments on this patch?
>
> I got some comments from Alan, who would like to see this behaviour
> tunable with hdparm from inside the guest.  This requires larger
> qemu changes, though: specifically, an ->fsync callback into each
> of the backing store drivers, so that is something for the qemu
> mailing list.
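
What that would amount to, roughly, is a flush hook in each driver's
ops table.  A minimal sketch; none of the names below come from qemu,
they are invented to illustrate the idea:

    /* Hypothetical per-driver flush hook (all names invented;
     * this is not qemu's actual block driver interface). */
    #include <stdint.h>
    #include <unistd.h>

    struct blk_ops {
        int (*read)(void *opaque, int64_t sector, uint8_t *buf, int n);
        int (*write)(void *opaque, int64_t sector,
                     const uint8_t *buf, int n);
        int (*fsync)(void *opaque);   /* flush writes to stable storage */
    };

    struct raw_state { int fd; };     /* state for a flat-file backend */

    /* For the raw backend the hook is a single syscall; the guest's
     * cache setting (hdparm -W) would decide when it gets called. */
    static int raw_fsync(void *opaque)
    {
        struct raw_state *s = opaque;
        return fsync(s->fd);
    }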

Considering the AIO-based development going on in the qemu community,
I think we should stick with the O_SYNC band-aid.  The idea Alan
described would just be a fancier band-aid.
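
The band-aid itself is a single flag where the image gets opened.  A
minimal sketch, with the function name and error handling invented
for illustration:

    /* Open the backing image with O_SYNC so that every write(2)
     * reaches stable storage before it returns.  Illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>

    static int open_backing_file(const char *path)
    {
        int fd = open(path, O_RDWR | O_SYNC);
        if (fd < 0)
            perror("open backing file");
        return fd;
    }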

Another possibility would be to integrate blktap/tapdisk into qemu,
which would provide asynchronous completion events and hide the
immediate AIO interaction from qemu (roughly as sketched below).
This should also make using qemu inside a stub domain easier, since
the code to talk to tapdisk will be very similar to the blkfront
code.  Also, this is somewhat required to use tap devices for HVM
domains; the alternative, using blkfront within dom0 to export the
device for qemu to use, doesn't sound too appealing.
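
Very roughly, the interface I have in mind would look like this; none
of these names are tapdisk's actual API, they are invented to show
the shape of it:

    /* qemu queues a request with a completion callback and returns
     * immediately; tapdisk signals completion later, much like a
     * blkfront response.  All names invented for illustration. */
    #include <stdint.h>

    typedef void tap_callback(void *opaque, int result);

    struct tap_request {
        int64_t       sector;
        uint8_t      *buf;
        int           nr_sectors;
        int           is_write;
        tap_callback *cb;      /* invoked when tapdisk completes the I/O */
        void         *opaque;  /* passed back to cb */
    };

    /* Submit and return; the emulated controller raises its
     * completion interrupt from the callback, not from here. */
    int tap_submit(struct tap_request *req);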

Do you fancy looking into this?

> The current bottleneck seems to be that MAX_MULT_COUNT is only 16.

Upon closer inspection of the code, this does not seem to be the
case for LBA48 transfers.
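
(To put numbers on it, assuming 512-byte sectors: a multiple count
capped at 16 means READ/WRITE MULTIPLE moves at most 16 * 512 = 8 KiB
per drive interrupt, while a single LBA48 command can cover up to
65536 sectors, i.e. 32 MiB.)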

Any other ideas as to what the bottleneck could be, then?

   christian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
