[Xen-devel] Re: VT is comically slow

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: VT is comically slow
From: Anthony Liguori <anthony@xxxxxxxxxxxxx>
Date: Thu, 06 Jul 2006 15:59:38 -0500
Delivery-date: Thu, 06 Jul 2006 15:25:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20060706191618.3CCF02F9B5@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Pan/0.14.2.91 (As She Crawled Across the Table (Debian GNU/Linux))
On Thu, 06 Jul 2006 11:16:18 -0800, alex wrote:

> We (Virtual Iron) are in the process of developing accelerated drivers for
> the HVM guests.  Our goal for this effort is to get as close to native
> performance as possible and to make paravirtualization of guests
> unnecessary.  The drivers currently support most flavors of RHEL, SLES and
> Windows.  The early performance numbers are encouraging.  Some numbers are
> many times faster than QEMU emulation and are close to native performance
> numbers (and we are just beginning to tune the performance).

I don't think paravirtual drivers are necessary for good performance. 
There are a number of things about QEMU's device emulation that are less
than ideal.

Namely, QEMU's disk emulation is IDE w/DMA.  Apparently, DMA is busted
right now, but even if it worked, IDE only allows one outstanding request
at a time (which doesn't let the host I/O scheduler do its thing properly).
Emulating either SCSI (which is in QEMU CVS) or SATA would allow multiple
outstanding requests, which would be a big benefit.

Also, and I suspect this has more to do with your performance numbers,
QEMU currently does disk IO via read()/write() syscalls on an fd that's
open()'d without O_DIRECT.  This means everything's going through the page
cache.
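
For illustration, here's a rough sketch (not actual QEMU code) of what
opening the image with O_DIRECT and an aligned buffer might look like:

    /* Sketch only: open a disk image with O_DIRECT so I/O bypasses the
     * host page cache.  O_DIRECT needs the buffer, offset and length
     * aligned (typically to 512 bytes or the fs block size). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int open_image_direct(const char *path, void **buf, size_t len)
    {
        int fd = open(path, O_RDWR | O_DIRECT);
        if (fd < 0)
            return -1;

        /* Direct I/O requires an aligned buffer. */
        if (posix_memalign(buf, 512, len) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }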

I suspect that SCSI + linux-aio would result in close to native
performance.  Since SCSI is already in QEMU CVS, it's not that far off.
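
To make that concrete, a rough sketch of keeping several reads
outstanding with linux-aio (assumes libaio; again, illustrative only,
not QEMU code):

    /* Sketch, assuming libaio (link with -laio): keep several requests
     * in flight at once so the host I/O scheduler has something to work
     * with, instead of IDE's one-request-at-a-time model. */
    #include <libaio.h>
    #include <sys/types.h>

    #define NR_REQS 4

    int submit_reads(int fd, void *bufs[NR_REQS], size_t len, off_t off)
    {
        io_context_t ctx = 0;
        struct iocb iocbs[NR_REQS], *iocbps[NR_REQS];
        struct io_event events[NR_REQS];
        int i;

        if (io_setup(NR_REQS, &ctx) < 0)
            return -1;

        for (i = 0; i < NR_REQS; i++) {
            io_prep_pread(&iocbs[i], fd, bufs[i], len, off + (off_t)i * len);
            iocbps[i] = &iocbs[i];
        }

        /* All NR_REQS reads are now outstanding at the same time. */
        if (io_submit(ctx, NR_REQS, iocbps) != NR_REQS ||
            io_getevents(ctx, NR_REQS, NR_REQS, events, NULL) < 0) {
            io_destroy(ctx);
            return -1;
        }

        io_destroy(ctx);
        return 0;
    }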

I think that the same applies to network IO but I'm not qualified to
comment on what things are important there.

Regards,

Anthony Liguori

> Just to give people a flavor of the performance that we are getting,
> here are some preliminary results on Intel Woodcrest (51xx series), with
> a Gigabit network, with SAN storage and all of the VMs were 1 CPU. These
> numbers are very early, disk numbers are very good, and we are still
> tuning the network numbers.
> 
> Bonnie-SAN - bigger is better     RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> Write, KB/sec                     52,106              49,500
> Read, KB/sec                      59,392              57,186
> 
> netperf - bigger is better        RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> tcp req/resp (t/sec)              6,831               5,648
> 
> SPECjbb2000 - bigger is better    RHEL-4.0 (32-bit)   VI-accel RHEL-4.0 (32-bit)
> JRockit JVM thruput               43,061              40,364
> 
> This code is modeled on Xen backend/frontend architecture concepts and
> will be GPLed.
>  
> -Alex V.
> 
> Alex Vasilevsky
> Chief Technology Officer, Founder
> Virtual Iron Software, Inc



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel