This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH][RFC] open HVM backing storage with O_SYNC

To: "Rik van Riel" <riel@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][RFC] open HVM backing storage with O_SYNC
From: "Christian Limpach" <christian.limpach@xxxxxxxxx>
Date: Sat, 29 Jul 2006 01:44:59 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 28 Jul 2006 17:45:24 -0700
Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=Qsho9PY5IWqIRN5nNa4k7rWsofCOLJgDhWZczzYyLqeLgCILdMyZN5aAxe6W7bPyQ/EW0hu70hp8vPSKLevDzwKluBbpsRxHLSqaDU/0XyEw78oJhhSa35jd8ryxl/WbgEo5/Q/24mjYsZZzO966SMqmCyUy+mxBXEHgarNZ1Ik=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <44CA71C9.1040408@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <44C9B75B.7060809@xxxxxxxxxx> <44CA4330.7010007@xxxxxxxxxx> <44CA71C9.1040408@xxxxxxxxxx>
Reply-to: Christian.Limpach@xxxxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On 7/28/06, Rik van Riel <riel@xxxxxxxxxx> wrote:
Rik van Riel wrote:
> Rik van Riel wrote:
>> Any comments on this patch?
> I got some comments from Alan, who would like to see this behaviour
> tunable with hdparm from inside the guest.  This requires larger
> qemu changes though; specifically, an ->fsync callback into each
> of the backing-store drivers, so that is something for the qemu
> mailing list.

Considering the AIO-based development going on in the qemu community,
I think we should stick with the O_SYNC band-aid.  The idea Alan
described would just be a fancier band-aid.

Another possibility would be to integrate blktap/tapdisk into qemu
which will provide asynchronous completion events and hides the
immediate AIO interaction from qemu.  This should also make using qemu
inside a stub domain easier since the code to talk to tapdisk will be
very similar to the blkfront code.  Also, this is somewhat required to
use tap devices for HVM domains; the alternative of using blkfront
within dom0 to export the device for qemu to use doesn't sound too
appealing.

Do you fancy looking into this?

> The current bottleneck seems to be that MAX_MULT_COUNT is only 16.

Upon closer inspection of the code, this does not seem to be the case
for LBA48 transfers.

Any other ideas as to what the bottleneck could be, then?


Xen-devel mailing list