WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-users

Re: [Xen-users] Very slow domU network performance

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-users] Very slow domU network performance
From: Winston Chang <winston@xxxxxxxxxx>
Date: Tue, 4 Apr 2006 14:03:59 -0400
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 04 Apr 2006 11:04:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060404085117.GC16667@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <0EDDFD7D-2C5D-4D47-880D-E7DC268EA149@xxxxxxxxxx> <20060404085117.GC16667@xxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Apr 4, 2006, at 4:51 AM, Pasi Kärkkäinen wrote:

> I also see (UDP) packet loss from an external box to domU, and from domU to
> the external box. TCP performance is poor because of this packet loss (TCP
> automatically retransmits the lost packets, which causes slow TCP speeds).
>
> I haven't tried the latest unstable version of Xen, so I don't know if it's
> already fixed.
>
> - Pasi


My problem seems to be different from yours: UDP performance was fine from the external box to domU, but TCP was bad. Also, UDP was broken from dom0 to domU (but fine in the reverse direction), while TCP was OK. I just tested with xen-unstable from a week ago, and it showed the same performance.

I think the reason for the performance problem is domU CPU starvation. I found elsewhere that running 'xm sched-sedf 0 0 0 0 1 1' will prevent domU from being starved of CPU when dom0 is active, so I ran it and got the following (with the week-old xen-unstable kernel). The 'Old' column is what I had before; the 'New' column is what I got after making the scheduling change:
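In case it helps anyone reproduce this, here is my understanding of the sched-sedf arguments (taken from the xm usage text; treat the parameter names as my interpretation, not gospel):

```
# xm sched-sedf <domain> <period> <slice> <latency> <extratime> <weight>
#
# For dom0 (domain id 0), a period/slice/latency of 0 with extratime=1
# and weight=1 puts dom0 on best-effort "extra time" scheduling rather
# than a guaranteed slice, so it no longer starves the domU of CPU:
xm sched-sedf 0 0 0 0 1 1
```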

====TCP====
./iperf -s
./iperf -c [server]
===========
Transfer rate, in Mb/s
              Old   New
Source Dest   Rate  Rate
dom0   domU    92    167
dom0   iBook   89     91
domU   dom0    85    373
domU   iBook   87     85
iBook  dom0    87     92
iBook  domU     1.9   92


====UDP, 90Mb/s====
./iperf -s -u -i 5
./iperf -c [server] -u -b 90M -t 5
===========
Packet loss, in percent
               Old    New
Source Dest    Loss   Loss
dom0   domU   ~100     0.13
dom0   iBook     4.7   4
domU   dom0      0.1   0
domU   iBook    11     8
iBook  dom0      0.3   0
iBook  domU      1.6   0

All tests were done with iperf 1.7.0 (the new version, 2.0.2, wouldn't compile on my iBook).

These numbers are much more reasonable, and the big asymmetries are gone. dom0<->domU TCP performance (170-370 Mb/s) is still significantly lower than domU<->domU performance (1.7 Gb/s). This is fast enough for me, but does it indicate a problem with Xen? The machine is a 1.8 GHz P4, so I wouldn't expect the Xen networking and bridging overhead to reduce performance by that much. Would it?

At any rate, I'd be curious to hear whether anyone else sees the same network slowness, and whether the scheduling change fixes it. iperf is very easy to compile and run...
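For anyone who wants to try it, the build is a standard configure/make of the 1.7.0 source tarball (this is from memory, so treat the exact file names as approximate):

```
# Standard autoconf build of the iperf 1.7.0 source tarball:
tar xzf iperf-1.7.0.tar.gz
cd iperf-1.7.0
./configure
make

# On the receiving machine:
./iperf -s
# On the sending machine (replace <server-ip> with the receiver's address):
./iperf -c <server-ip>
```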

--Winston




Using iperf, I get these approximate numbers  (the left column is the
iperf client and the right column is the iperf server):
domU --> domU  1.77 Gbits/sec (using 127.0.0.1)
domU --> domU  1.85 Gbits/sec (using domU eth0 IP address)
dom0 --> domU  91.5 Mbits/sec
domU --> dom0  85.2 Mbits/sec

So far, so good.  The relatively slow dom0<->domU communication may
indicate a problem, but it's fine for my purposes.  The real problem
is when I use my iBook (running Mac OS X) to run some iperf tests.
The computers are connected via a crossover cable.  They were
originally connected with a hub, but I changed to a crossover cable
connection to reduce variables (it turns out this had no effect).

dom0 --> iBook  89.0 Mbits/sec
iBook --> dom0  86.9 Mbits/sec
domU --> iBook  87.1 Mbits/sec
iBook --> domU   1.87 Mbits/sec

The last entry has me baffled.  Why would it be so incredibly slow in
one direction but not the other?


I decided to run some UDP tests as well.
server: iperf -s -u -i 1
client: iperf -c server_ip -u -b 90M -t 5
The packet loss is as follows:
domU --> domU  0% (using 127.0.0.1)
domU --> domU  0% (using domU eth0 IP address)
dom0 --> domU  ~100% (only 7 of 38464 made it!)
domU --> dom0  0.09%

dom0 --> iBook  4.7%
iBook --> dom0  0.33%
domU --> iBook  11%
iBook --> domU  1.6%
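For what it's worth, the "~100%" in the dom0 --> domU row works out as follows (just arithmetic on the numbers above):

```python
# Sanity-check the "~100%" figure: only 7 of 38464 datagrams arrived.
sent = 38464      # datagrams the dom0 iperf client sent
received = 7      # datagrams the domU iperf server saw
loss = (sent - received) / sent * 100
print(f"{loss:.2f}% loss")  # → 99.98% loss
```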

There are some odd things here.  First, dom0->domU with UDP loses
almost everything, but the reverse direction is fine.  Somehow,
dom0->domU TCP was OK (if you consider ~90 Mb/s OK).
The second weird thing is that, in contrast with TCP, UDP works fine
in both directions between the iBook and domU.  There's 11% packet
loss in one case, but that's not a lot; it's probably just a little
more than the poor little iBook can handle.

My dom0 is Fedora Core 5, with the included xen0 kernel.  The domU is
a very basic install of Centos 4.3, based on a jailtime.org image,
running the xensource 2.6.12.6-xenU kernel.  The domU has bridged
networking and 64MB of RAM (I ran the iBook->domU TCP test with 196MB
of RAM, but it was still ~2 Mb/s).  Firewalling is off in domU and
dom0; the only iptables rules are the ones created by the Xen
bridging script.


Has anyone else seen anything like this, or have any idea what's
going on?  This seems bizarre to me.
Thanks for any help.
--Winston


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users