This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: VT is comically slow

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: VT is comically slow
From: alex@xxxxxxxxxxxxxxx
Date: Thu, 06 Jul 2006 17:43:50 -0800
Delivery-date: Thu, 06 Jul 2006 18:44:14 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Anthony Liguori wrote:
> ...
> > We (Virtual Iron) are in a process of developing accelerated drivers for
> > the HVM guests.  Our goal for this effort is to get as close to native
> > performance as possible and to make paravirtualization of guests
> > unnecessary.  
> ...
> I don't think paravirtual drivers are necessary for good performance. 
> There are a number of things about QEMU's device emulation that are less
> than ideal.
Before deciding to implement accelerated drivers for many different guest OSes, 
no trivial undertaking, we did quite a lot of analysis of QEMU and its 
performance.  Our conclusion was that QEMU in the near future was not going to be 
able to reach the performance goals that we had set for our product.  Instead of 
hacking on QEMU in the hope of getting better numbers out of it, we decided to 
design and implement accelerated drivers, and the performance numbers we are 
getting prove that was the right decision to make.  As I mentioned in my earlier 
post, these drivers will be available under the GPL and everyone is welcome to 
use them.
> Also, and I suspect this has more to do with your performance numbers,
> QEMU currently does disk IO via read()/write() syscalls on an fd that's
> open()'d without O_DIRECT.  This means everything's going through the page
> cache.
The QEMU code that we use doesn't go through the dom0 buffer cache; we modified 
the code to use O_DIRECT.  You can't use the buffer cache and accelerated drivers 
(which go directly to the disk) together, as that can cause disk corruption.  The 
performance numbers we get from this version of QEMU are still 4 to 6 times 
slower than native disk I/O.
> I suspect that SCSI + linux-aio would result in close to native
> performance.  Since SCSI is already in QEMU CVS, it's not that far off.
You might be right; however, even with pipelining and async I/O, I don't think 
it is going to get close to native I/O numbers.  I guess we'll just have to wait 
and see.

-Alex V.

Xen-devel mailing list
