
[Xen-devel] Poor network performance - caused by inadequate vif configuration?

  • To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Schmidt, Werner (Werner)" <wernerschmidt@xxxxxxxxx>
  • Date: Tue, 29 May 2007 09:45:27 +0200
  • Delivery-date: Tue, 29 May 2007 00:44:28 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcehxVLTLIzdE4JISQOD49CswaLTNQ==
  • Thread-topic: Poor network performance - caused by inadequate vif configuration?



Please find below a copy of an email I sent to ‘xen-users’; I think it might be interesting for this list, too.



Similar to some mail threads found in this forum and other Xen-related threads, I had problems with the network performance of my test system:


software base of dom0/domU: RHEL5 (Xen 3.0.3, Red Hat 2.6.18-8.el5xen SMP kernel)

IBM x306 servers with 3 GHz P4 (with MT support); coupled via a Gigabit Ethernet switch

Broadcom network interfaces used (the IBM servers also have Intel-based network interfaces)

standard xen bridging network configuration

test tool: iperf

Xen domUs working in PV mode (the P4 does not support VT)



The data transfer rates measured with ‘iperf’ were as follows:


- dom0/machine 1  => dom0/machine 2 ~800MBit/s

- domU/machine 1  => dom0/machine 2 ~700MBit/s

- dom0/machine 1  => domU/machine 2 ~ 40MBit/s
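For reference, measurements like the ones above can be taken with a plain iperf server/client pair; the host name and test duration below are only placeholders:

```shell
# On the receiving side (e.g. dom0 on machine 2): start an iperf TCP server
iperf -s

# On the sending side (e.g. dom0 or domU on machine 1): run a 60-second TCP
# test against the receiver (replace machine2 with the actual host name or IP)
iperf -c machine2 -t 60

# Bidirectional test: simultaneous TCP streams in both directions
iperf -c machine2 -t 60 -d
```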



The poor result in the last test case, and the difference between test cases 2 and 3, remained more or less constant across various configurations of the test systems:


- credit or sedf scheduler

- various configs of the schedulers

- copy mode and flipping mode of netfront driver


A detailed analysis with tcpdump/wireshark showed that data must be getting lost within the TCP stream, resulting in TCP retransmissions and therefore stalls in the data transfer (in one test case I saw a transmission gap of 200 ms caused by TCP retransmissions every 230 ms, which explains the breakdown of the data rate).
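For the record, a capture suitable for this kind of retransmission analysis can be taken on the dom0 side roughly like this (the interface name, peer address, and file name are only examples; iperf uses TCP port 5001 by default):

```shell
# Capture full packets of the iperf stream and write them to a file
# for later inspection in wireshark
tcpdump -i eth0 -s 0 -w iperf-run.pcap host 192.168.1.2 and tcp port 5001
```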


Looking for the cause of the data losses (this was also why I checked the copy mode of the netfront driver), I noticed that the txqueuelen parameter of the vif devices connecting the bridge to the domUs was set to ‘32’ (I have no idea where and for what reason this value is configured initially; note that txqueuelen for physical Ethernet devices defaults to 1000).
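The current queue length of a vif can be checked from dom0, for example like this (the name vif1.0 is just an example; it depends on the domain ID):

```shell
# ifconfig prints the txqueuelen of the interface
ifconfig vif1.0 | grep -i txqueuelen

# with iproute2, the value appears as "qlen" in the link output
ip link show vif1.0
```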


After changing this parameter to higher values (128-512), I got much higher performance in test case 3: TCP throughput now reaches 700 MBit/s and higher, and using iperf's -d option (TCP data streams in both directions) gave combined values of more than 900 MBit/s.
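For completeness, the change itself is a one-liner in dom0 (again, vif1.0 and the value 512 are only examples; also note the setting is lost when the domU is restarted, so to make it permanent it would have to go into the vif hotplug script or a similar place):

```shell
# Raise the transmit queue length of the vif backend device
ifconfig vif1.0 txqueuelen 512

# Equivalent with iproute2
ip link set dev vif1.0 txqueuelen 512
```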


I will also evaluate the parameter settings for the other test cases to find the best values, but I think a suitable setting of the txqueuelen parameter on the vif interfaces is the most important factor in getting good network performance (comparable to other virtualization solutions) for a configuration like the one described above.


Some additional test results:


- domU/machine 1 => domU/machine 1: unidirectional > 1 GBit/s; bidirectional: 0.9-1 GBit/s in sum





Xen-devel mailing list


