
[Xen-devel] dom0 packet drops caused by domU cpu load



-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi all,

last week I played around with Xen 2.0.5 and a Spirent performance
test system.
The setup is as follows:

Spirent -> eth0 <- dom0 -> eth1 <- Spirent

The test flow was a simple unidirectional UDP stream from left to right
with different packet sizes.
Throughput is measured by the Spirent as the highest Mbit/s rate
without packet drops...

Dom0 as gateway, with no virtual instances and 256-byte packets,
performs at 550 Mbit/s.
After starting three virtual instances with light load, performance
drops to 320 Mbit/s!


Even worse was the following setup, with a virtual instance as
gateway:
Spirent -> eth0 <- dom0 -> vif1.0 <- domU -> vif1.1 <- dom0 ->eth1 <-
Spirent

eth0 and vif1.0 are bridged over xen-br0
eth1 and vif1.1 are bridged over xen-br1
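In case it matters, the bridging above was set up roughly like this with
bridge-utils (a sketch of my configuration, needs root on dom0; the
interface and vif names are the ones from the diagram above):

```shell
# Sketch of the described bridge layout (requires root and bridge-utils;
# interface/vif names match the test setup above).
brctl addbr xen-br0          # bridge for the inbound side
brctl addif xen-br0 eth0
brctl addif xen-br0 vif1.0   # domU's first backend interface
brctl addbr xen-br1          # bridge for the outbound side
brctl addif xen-br1 eth1
brctl addif xen-br1 vif1.1   # domU's second backend interface
ifconfig xen-br0 up
ifconfig xen-br1 up
```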

Performance with 256-byte packets: 53 Mbit/s

If I now add another virtual instance and kill all processes in that
instance, performance stays around 50 Mbit/s. However, if I start just
a simple bash loop with
"while [ 1 ]; do sleep 3; echo hello world; done"
performance drops to zero!!


I get many rx_fifo_errors / rx_missed_errors (both with the same
value), plus rx_errors and rx_dropped on the NIC.
To me it seems dom0 is unable to receive all packets properly from the
NIC (e1000 in my case) when there is load in the virtual instances...
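For anyone who wants to reproduce this: I watch the counters via the
standard per-interface statistics files in sysfs. A minimal sketch (the
interface name is whatever you pass in; counter file names are the
standard Linux ones):

```shell
# Print the RX error/drop counters for one interface from sysfs.
print_rx_stats() {
    iface="$1"
    stats="/sys/class/net/$iface/statistics"
    for c in rx_errors rx_dropped rx_fifo_errors rx_missed_errors; do
        printf '%s=%s ' "$c" "$(cat "$stats/$c")"
    done
    printf '\n'
}

print_rx_stats lo
```

To see whether the counters climb while a domU is loaded, poll it in a
loop, e.g. `while true; do print_rx_stats eth0; sleep 5; done`.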


I also tried the Xen 3 unstable CVS version, which has overall better
performance; however, the massive packet loss is still there...

Any ideas ?

Cheers
 Ulrich
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFD2jBv22t2oTuElzoRAsMbAJ9BDa6gtgBBGuE38rCepWjY0QvdRACcDJoj
etu0lFEsZ7gblZGwcVsR6pk=
=8w0i
-----END PGP SIGNATURE-----


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

