Hi all,
I am seeing very low TCP throughput with iperf and netperf while using TC-TBF
(Linux native Traffic Control token bucket filter shaping) on the physical gigabit NIC.
I attached Linux TC TBF (the native Traffic Control token bucket filter)
to the physical NIC (eth1) on two RHEL5 Xen boxes (ios3 and devmsunf):
tc qdisc add dev eth1 root tbf rate 1000mbit burst 100mb latency 100ms
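(To confirm the qdisc is attached and to see whether the shaper itself is dropping
packets, I check the statistics with:

tc -s qdisc show dev eth1

A growing "dropped" or "overlimits" counter there would point at the TBF settings
rather than at TCP itself.)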
The three Linux boxes are connected to an HP gigabit switch. In two separate tests, netperf TCP
traffic was sent from each of the two Xen boxes (devmsunf and ios3) to a third RHEL5 client (ios2),
physical NIC to physical NIC.
ios3 gave a throughput of 940 Mbps, while devmsunf gave only a few hundred Kbps.
Netperf UDP tests gave 940 Mbps from both boxes, and without TC-TBF both boxes
gave 940 Mbps TCP throughput as well. It seems that Xen, TCP, and TC-TBF are a
bad combination, and I don't understand why the two Xen boxes give such
different TCP throughput.
I tried increasing the TCP window (TCP send buffer size) from the OS default of 16K to 64K,
but saw no improvement in TCP throughput on devmsunf.
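For reference, I set the buffers per test from the netperf command line (the
test-specific -s and -S options set the local and remote socket buffer sizes via
setsockopt()), with ios2 as the receiver and a 30-second run:

netperf -H ios2 -t TCP_STREAM -l 30 -- -s 65536 -S 65536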
I also see the same problem when Xen Dom0 runs netserver (the netperf receiver).
With the default TCP window size, netperf TCP traffic from a RHEL5 client to the Xen
receiver was unstable, varying from 800 Mbps down to a completely throttled state (a few Kbps).
After setting the netserver TCP window size to 64K (up from the default 16K) on the Xen box,
the TCP traffic ran steadily at 870 Mbps.
Increasing the TCP window via setsockopt() in the netperf/iperf application code
before opening the TCP connection seemed to fix the problem on the Xen kernel.
However, setting the global TCP receive and send buffers via sysctl in
/etc/sysctl.conf, as follows, did not work:
# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
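After editing /etc/sysctl.conf I reloaded the settings and read them back to make
sure they were actually in effect:

sysctl -p
sysctl net.core.wmem_max net.ipv4.tcp_wmem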
The iperf/netperf applications seemed to pick up the adjusted TCP window, but no improvement
in throughput was observed. I am not sure whether Linux autotuning of the send buffer is
overriding my settings. The problem also seems to be related to the TCP round-trip time.
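One rough way I know of to check whether the throttling correlates with retransmissions
or RTT inflation is to watch the TCP counters and the plain ping RTT on the sender
during a run:

netstat -s | egrep -i 'retrans|timeout'
ping -c 10 ios2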
FYI, I am using Xen version 3.0.3-rc5-8.el5 on Red Hat Enterprise Linux Server
(2.6.18-8.el5xen).
Has anybody experienced a similar problem? Any help is very welcome.
Thanks in advance!
Jialin