Rik van Riel wrote:
> VT by itself seems fine, but once a VT domain is running a workload that
> is network intensive combined with a disk/cpu intensive workload, things
> get incredibly slow.
> Operations that take less than a second with either workload running
> alone can now take many seconds, sometimes the better part of a minute!
> Is this some limitation of the qemu device model?
We (Virtual Iron) are in the process of developing accelerated drivers for
HVM guests. Our goal for this effort is to get as close to native performance
as possible and to make paravirtualization of guests unnecessary. The drivers
currently support most flavors of RHEL, SLES, and Windows. The early
performance numbers are encouraging: some are many times faster than QEMU
emulation and close to native performance (and we are just beginning to tune
for performance).
Just to give people a flavor of the performance we are getting, here are
some preliminary results on Intel Woodcrest (51xx series), with a Gigabit
network and SAN storage; all of the VMs were 1 CPU. These numbers are very
early; the disk numbers are very good, and we are still tuning the network.
Bonnie-SAN (bigger is better)       RHEL-4.0 (32-bit)   VI-accel
  Write, KB/sec                          52,106          49,500
  Read, KB/sec                           59,392          57,186

netperf (bigger is better)          RHEL-4.0 (32-bit)   VI-accel
  TCP req/resp (t/sec)                    6,831           5,648

SPECjbb2000 (bigger is better)      RHEL-4.0 (32-bit)   VI-accel
  JRockit JVM throughput                 43,061          40,364
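To put the gap in perspective, the VI-accel figures above work out to roughly
83% to 96% of the RHEL-4.0 column. A quick calculation (assuming the left
column is the native baseline, as the text suggests):

```python
# Relative performance of the VI-accelerated guest versus the
# RHEL-4.0 (32-bit) baseline, from the numbers quoted above.
results = {
    "Bonnie-SAN write":  (52106, 49500),
    "Bonnie-SAN read":   (59392, 57186),
    "netperf req/resp":  (6831, 5648),
    "SPECjbb2000":       (43061, 40364),
}
for name, (baseline, accel) in results.items():
    print(f"{name}: {accel / baseline:.1%} of baseline")
    # e.g. "Bonnie-SAN write: 95.0% of baseline"
```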
This code is modeled on Xen backend/frontend architecture concepts and will be
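For those unfamiliar with the split-driver model referred to above: Xen pairs
a frontend driver in the guest with a backend in the driver domain, and the
two exchange requests and responses over a shared ring using producer and
consumer indices. A rough conceptual sketch in Python (the real
implementation is C against Xen's shared-memory rings and event channels;
all names here are illustrative, not Xen's actual API):

```python
# Conceptual sketch of a Xen-style split-driver shared ring.
# Frontend produces requests; backend consumes them and produces
# responses; each side only advances its own index.

RING_SIZE = 8  # power of two, so masking wraps the indices

class SharedRing:
    """Request/response ring shared between frontend and backend."""
    def __init__(self):
        self.req = [None] * RING_SIZE
        self.rsp = [None] * RING_SIZE
        self.req_prod = self.req_cons = 0  # frontend produces requests
        self.rsp_prod = self.rsp_cons = 0  # backend produces responses

def frontend_submit(ring, req_id):
    """Guest-side driver queues a request; in Xen it would then
    notify the backend over an event channel."""
    ring.req[ring.req_prod & (RING_SIZE - 1)] = req_id
    ring.req_prod += 1

def backend_service(ring):
    """Host-side driver drains the request ring (here it would touch
    real hardware) and posts one response per request."""
    while ring.req_cons != ring.req_prod:
        req_id = ring.req[ring.req_cons & (RING_SIZE - 1)]
        ring.req_cons += 1
        ring.rsp[ring.rsp_prod & (RING_SIZE - 1)] = (req_id, "OK")
        ring.rsp_prod += 1

def frontend_reap(ring):
    """Guest-side driver collects completed responses."""
    done = []
    while ring.rsp_cons != ring.rsp_prod:
        done.append(ring.rsp[ring.rsp_cons & (RING_SIZE - 1)])
        ring.rsp_cons += 1
    return done

ring = SharedRing()
for i in range(3):
    frontend_submit(ring, i)
backend_service(ring)
print(frontend_reap(ring))  # [(0, 'OK'), (1, 'OK'), (2, 'OK')]
```

The point of the design is that only bulk data and index updates cross the
guest/host boundary, avoiding the per-I/O device emulation that makes plain
QEMU emulation slow.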
Chief Technology Officer, Founder
Virtual Iron Software, Inc
Xen-devel mailing list