This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: VT is comically slow

To: "alex@xxxxxxxxxxxxxxx" <alex@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: VT is comically slow
From: "Andrew Warfield" <andrew.warfield@xxxxxxxxxxxx>
Date: Thu, 6 Jul 2006 19:01:14 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 06 Jul 2006 19:01:38 -0700
In-reply-to: <20060707014350.19F2C2F91A@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20060707014350.19F2C2F91A@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
The QEMU code that we use doesn't go through the dom0 buffer cache; we modified
the code to use O_DIRECT.  You can't use the buffer cache and the accelerated
drivers (which go directly to the disk) together, as that can cause disk
corruption.  The performance numbers we get
from this version of QEMU are still 4 to 6 times slower than native disk I/O.

I doubt O_DIRECT buys you much in the way of performance -- as you say
it's just a correctness thing.  Still, the qemu block code is all
completely synchronous -- the fact that you simply can't have more
than a single outstanding block request at a time is going to
seriously harm performance.  As Anthony explained, some form of
asynchronous IO in the qemu drivers would clearly improve performance.

You may be right; however, even with pipelining and async I/O, I don't think
it will get close to native I/O numbers.  I guess we'll just have to wait
and see.

I'd expect that disk can be made to perform reasonably well with qemu,
using dma emulation and async IO.  The old vmware workstation paper on
device virtualization [1] does a pretty good job of demonstrating that
trap and emulate device access sucks, and would seem to imply that
it's pretty unlikely to be practical for high-rate networking.


[1] http://www.usenix.org/event/usenix01/sugerman/sugerman.pdf

Xen-devel mailing list
