[Xen-devel] HVM PV unmodified driver performance

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] HVM PV unmodified driver performance
From: Daire Byrne <Daire.Byrne@xxxxxxxxxxxxxxxxxx>
Date: Thu, 8 Mar 2007 12:08:43 +0000 (GMT)
In-reply-to: <1140805754.71821173200611370.JavaMail.root@xxxxxxxxxxxxxxxxxxx>
Hi, 

(posted on xen-users but maybe this list is more appropriate?)

I have been testing the unmodified_drivers from xen-unstable on my FC6 machine 
and I have a couple of questions about the results. It seems that I only get 
accelerated network performance in one direction, namely sends from the HVM 
guest. I used iperf to benchmark performance between the HVM guest and the FC6 
Dom0:

HVM - No PV drivers
Sends:
  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  54.9 MBytes  46.0 Mbits/sec
Receives:
  [ ID] Interval       Transfer     Bandwidth
  [  4]  0.0- 8.2 sec  17.8 MBytes  18.3 Mbits/sec

HVM - with PV net driver
Sends:
  [ ID] Interval       Transfer     Bandwidth
  [  4]  0.0-10.0 sec   788 MBytes   660 Mbits/sec
Receives:
  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  8.52 MBytes  7.13 Mbits/sec
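(For reference, those numbers come from plain iperf TCP tests between the guest 
and Dom0, roughly as follows -- the address below is just a placeholder for the 
listener's IP, not my actual setup:

  # on the receiving end (Dom0 for the "Sends" test, the guest for "Receives")
  iperf -s

  # on the sending end, pointed at the listener (placeholder address)
  iperf -c 192.168.1.10 -t 10
)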

As you can see, the PV driver improves network performance when sending from the 
HVM guest (FC6 - 2.6.18), but if anything the receive/read performance is worse 
than when using the emulated ioemu rtl8139 driver. Is this expected behaviour? 
Does it matter that I'm running xen-3.0.3 but using the unmodified_drivers 
source from xen-unstable? (xen-unstable has support for building against the 
2.6.18 kernel whereas 3.0.3 does not.) Is this message on start-up normal: 
"netfront: device eth1 has copying receive path"? From what I've read, the PV 
drivers for Linux should accelerate performance in both directions.
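
Incidentally, the "copying receive path" message seems to refer to the rx-copy 
receive mode negotiated through xenstore. A quick way to see what was negotiated, 
from Dom0 (the domain ID and vif index below are placeholders for whatever 
"xm list" reports for the guest):

  # dump the netback entries for the guest's first vif
  # (replace 1/0 with the guest's domain ID and vif index)
  xenstore-ls /local/domain/0/backend/vif/1/0
  # look for the rx-copy related feature/request keys in the output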

Here's my vif config line:
  vif = [ 'bridge=xenbr0' , 'type=ioemu, bridge=xenbr0' ]

I boot a "diskless" FC6 image from the network using pxe (etherboot for the 
rtl1839) and then load the unmodifed_drivers modules and bring up the network 
on eth1 (eth0 being the ioemu rtl1839). Am I doing anything wrong or is this 
performance expected behaviour? 

Also, I tried building the unmodified_drivers against both 32-bit and 64-bit 
guest FC6 kernels/images. They work fine with a 64-bit Dom0 and a 64-bit HVM 
guest, but with a 64-bit Dom0 and a 32-bit HVM guest the "xenbus.ko" module 
hangs on insmod. Is that another known issue/limitation?
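
In case it matters, this is roughly how I build the modules (the kernel build 
directory below is just an example path; substitute the 32-bit or 64-bit FC6 
kernel tree as appropriate):

  # in the xen-unstable source tree
  cd unmodified_drivers/linux-2.6
  ./mkbuildtree
  # build against the target guest kernel (example path, adjust to your kernel)
  make -C /usr/src/kernels/2.6.18-fc6-x86_64 M=$PWD modules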

Any help or hints greatly appreciated!

Regards,

Daire

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
