xen-devel

[Xen-devel] Xen network performance analysis and benchmarks / windows pv

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Xen network performance analysis and benchmarks / windows pv drivers
From: Pasi Kärkkäinen <pasik@xxxxxx>
Date: Tue, 18 Mar 2008 23:29:36 +0200
Delivery-date: Tue, 18 Mar 2008 14:30:00 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)

Hello list.

I'm forwarding a mail I sent to the xen-users list, because it contains some
good links for studying Xen network performance and the issues around it.


----- Forwarded message from Pasi Kärkkäinen <pasik@xxxxxx> -----

From: Pasi Kärkkäinen <pasik@xxxxxx>
To: Tom Brown <xensource.com@xxxxxxxxxxxxxxxxxxx>
Cc: Scott McKenzie <scott.xensource@xxxxxxxxxxxxx>,
        James Harper <james.harper@xxxxxxxxxxxxxxxx>,
        xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>,
        Emre ERENOGLU <erenoglu@xxxxxxxxx>
Date: Tue, 18 Mar 2008 23:17:45 +0200
Subject: Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers
        for Windows

On Tue, Mar 18, 2008 at 11:02:56PM +0200, Pasi Kärkkäinen wrote:
> On Tue, Mar 18, 2008 at 12:02:52PM -0700, Tom Brown wrote:
> > >
> > >Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > >power? I mean, somewhere in the design there's something wrong that forces
> > >us to make possibly too many context switches between DomU, the hypervisor,
> > >and Dom0. ???
> > >
> > >Emre
> > 
> > what, something like the 1500 byte maximum transmission unit (MTU) from 
> > back in the days when 10 MILLION bits per second was so insanely fast we 
> > connected everything to the same cable!? (remember 1200 baud modems?) Yes, 
> > there might be some "design" decisions that don't work all that well 
> > today.
> > 
> > AFAIK, Xen can't do oversize (jumbo) frames, which would be a big help for 
> > a lot of things (iSCSI, ATAoE, local network)... but even so, AFAIK it 
> > would only be a relatively small improvement (jumbo frames only going up 
> > to about 8k AFAIK).
> > 
> 
> Afaik Xen itself supports jumbo frames, as long as everything in both dom0
> and domU is configured correctly. Do you have more information saying
> otherwise?
> 
> "Standard" jumbo frames are 9000 bytes.. 
> 
> Something that might be interesting: 
> http://www.vmware.com/pdf/hypervisor_performance.pdf
> 
> Especially the "Netperf" section..
> 
> "VMware ESX Server delivers near native performance for both one- and
> two-client tests. The Xen hypervisor, on the other hand, is extremely slow,
> performing at only 3.6 percent of the native performance."
> 
> "VMware ESX Server does very well, too: the throughput for two-client tests
> goes up 1.9-.2 times compared to the one-client tests. Xen is almost CPU
> saturated for the one-client case, hence it does not get much scaling and
> even slows down for the send case."
> 
> "The Netperf results prove that by using its direct I/O architecture
> together with the paravirtualized vmxnet network driver approach, VMware ESX
> Server can successfully virtualize network I/O intensive datacenter
> applications such as Web servers, file servers, and mail servers. The very
> poor network performance makes the Xen hypervisor less suitable for any such
> applications."
> 
> It seems VMware used Xen 3.0.3 _without_ paravirtualized drivers (using QEMU
> emulated NIC), so that explains the poor result for Xen.. 
> 
> 
> Another test, this time with Xen Enterprise 3.2: 
> http://www.vmware.com/pdf/Multi-NIC_Performance.pdf
> 
> "With one NIC configured, the two hypervisors were each within a fraction of
> one percent of native throughput for both cases. Virtualization overhead had 
> no effect for this
> lightly-loaded configuration."
> 
> "With two NICs, ESX301 had essentially the same throughput as native, but
> XE320 was slower by 10% (send) and 12% (receive), showing the effect of CPU 
> overhead."
> 
> "With three NICs, ESX301 is close to its limit for a uniprocessor virtual
> machine, with a degradation compared to native of 4% for send and 3% for 
> receive. XE320 is able to
> achieve some additional throughput using three NICs instead of two, but the 
> performance degradation
> compared to native is substantial: 30% for send, 34% for receive."
> 
> 
> So using paravirtualized network drivers with Xen should make a huge
> difference, but there still seems to be something left to optimize to catch
> up with VMware ESX. 
> 
> 

Replying to myself..

http://xen.org/files/xensummit_4/NetworkIO_Santos.pdf
http://xen.org/files/xensummit_fall07/16_JoseRenatoSantos.pdf

Papers from last fall about Xen network performance (with analysis and
benchmarks) and optimization suggestions.. 

Worth reading. 

So I guess the summary would be that with PV network drivers you should be
able to get near-native performance, at least for single-CPU/single-NIC
guests.. this is already the case with the XenSource Windows PV network
drivers. 
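
(A quick, untested sketch of how one could check that with netperf: run
netserver inside the domU, then drive a TCP_STREAM test against it from
another machine. The host name "my-domu" and the 60-second test length below
are just placeholders.)

#!/usr/bin/env python
# Untested sketch: drive a netperf TCP_STREAM bulk-throughput test against a
# guest that is running netserver. "my-domu" is a placeholder host name.
import subprocess

DOMU_HOST = "my-domu"     # placeholder: the domU's address or host name
TEST_SECONDS = "60"       # -l: test length in seconds

# TCP_STREAM measures bulk send throughput from this machine to DOMU_HOST.
subprocess.call(["netperf", "-H", DOMU_HOST, "-t", "TCP_STREAM",
                 "-l", TEST_SECONDS])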

In the future, with netchannel2, performance should scale much higher (to 10
gigabit speeds).

So now it's only a matter of figuring out how to make the GPL PV Windows
drivers perform as well as the XenSource drivers :)

-- Pasi

> And some more benchmark results by XenSource: 
> http://www.citrixxenserver.com/Documents/hypervisor_performance_comparison_1_0_5_with_esx-data.pdf
> 
> Something I noticed about the benchmark configuration:
> 
> "XenEnterprise 3.2 - Windows: Virtual Network adapters: XenSource Xen Tools
> Ethernet Adapter RTL8139 Family PCI Fast Ethernet NIC, Receive Buffer 
> Size=64KB"
> 
> Receive buffer size=64KB.. is that something that needs to be tweaked in the
> drivers for better performance? Or is it just some benchmarking-tool-related
> setting? 
> 
> -- Pasi
> 

----- End forwarded message -----

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
