This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Very slow domU network performance - Moved to xen-devel

Winston Chang wrote:
I ran the test with the latest xen-unstable build. The results are the same. When I ran 'xm sched-sedf 0 0 0 0 1 1' to prevent domU CPU starvation, network performance was good. The numbers in this case are the same as in my other message where I detail the results using the week-old xen build -- it could handle 90Mb/s with no datagram loss. So it looks like the checksum patches had no effect on this phenomenon; the only thing that mattered was the scheduling.

What was the previous weight of domain 0? What is the weight assigned to the domUs, and do the domUs have bursting enabled?

I'm not really sure of the answer to either of these questions. The weight is whatever the default is with Fedora Core 5 and xen-unstable. I don't know anything about bursting. How do you find out?

I'd like to be corrected if I am wrong, but the last number (weight) is set to 0 for all domains by default. By giving it a value of 1 you are giving dom0 more CPU. The second-to-last number is a boolean that decides whether a domain is hard-locked to its weight or can burst using idle CPU cycles. The three before that are generally set to 0, and the first number is the domain name. I do not know of a way to read the weights back personally. It is documented in the Xen distribution tgz.
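Putting that description together, the positional arguments to `xm sched-sedf` would be as sketched below. This is my reading of the SEDF parameters and may need correction:

```
# xm sched-sedf <domain> <period> <slice> <latency> <extratime> <weight>
#
# domain               - domain ID or name (0 = dom0)
# period/slice/latency - real-time parameters, typically left at 0 when
#                        weight-based scheduling is used
# extratime            - boolean: 1 lets the domain burst into idle CPU cycles
# weight               - relative CPU share in weighted mode
#
# The command from the original post: give dom0 extratime and weight 1.
xm sched-sedf 0 0 0 0 1 1
```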

I ran my own tests. I have dom0 with a weight of 512 (double its memory allocation), and each VM also has a weight equal to its memory allocation. My dom0 can transfer at 10MB/s+ over the LAN, but domUs on a host with 100% CPU used could only transfer over the LAN at a peak of 800KB/s. When I gave dom0 a weight of 1, domU transfers decreased to a peak of 100KB/s over the "LAN" (quoted because, due to proxy ARP, the host acts as a router).

The problem occurs regardless of whether you use bridged or routed mode.

I would have to believe the problem is in the hypervisor itself, and that scheduling and CPU usage greatly affect it. Network bandwidth should not be affected unless that is wanted (e.g. by using the rate vif parameter).
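For anyone who does want to cap bandwidth deliberately, the rate limit goes on the vif line of the domU configuration file. A sketch -- the bridge name and rate value here are illustrative, not from any configuration discussed in this thread:

```
# Fragment of a domU config file: cap this interface at 10 Mb/s.
vif = [ 'rate=10Mb/s, bridge=xenbr0' ]
```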

Stephen Soltesz has experienced the same problem and has some graphs to back it up. Stephen, will you share at least that one CPU + iperf graph with the community, and perhaps elaborate on your weight configuration (if any)?

Thank you,
Matt Ayres

Xen-devel mailing list
