[Xen-devel] I/O descriptor ring size bottleneck?
Hi everyone,
I'm doing some networking experiments over high-BDP topologies. Right
now the configuration is quite simple -- two Xen boxes connected via a
dummynet router. The dummynet router is set to limit bandwidth to
500 Mbps and simulate an RTT of 80 ms.
I'm using the following sysctl values:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_bic = 0
(TCP Westwood and Vegas are also turned off for now)
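For reference, a quick back-of-the-envelope check of those buffer sizes
against the path -- just arithmetic on the numbers above, nothing measured:

# Back-of-the-envelope check of the buffer sizing above; all values are
# the ones quoted in this mail, nothing here is measured.
link_bps = 500e6      # dummynet bandwidth cap (500 Mbps)
rtt_s    = 0.080      # simulated RTT (80 ms)
flows    = 50
tcp_max  = 4194304    # tcp_rmem/tcp_wmem maximum from the sysctls above

bdp_bytes = link_bps / 8 * rtt_s    # bandwidth-delay product of the path
per_flow  = bdp_bytes / flows       # fair share of the pipe per flow

print("path BDP    : %.1f MB" % (bdp_bytes / 1e6))              # ~5.0 MB
print("fair share  : %.0f KB per flow" % (per_flow / 1e3))      # ~100 KB
print("tcp_[rw]mem : %.1f MB max per socket" % (tcp_max / 1e6)) # ~4.2 MB

So a single flow could in principle be limited by the 4 MB socket buffer
cap, but with 50 flows sharing the link each flow only needs roughly
100 KB of window to take its fair share.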
Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
inside a VM on one box talking to netserver in a VM on the
other box, I get a per-flow throughput of around 2.5 Mbps (which
sucks, but let's ignore the absolute value for the moment).
If I run the same test, but this time from inside dom0, I get a
per-flow throughput of around 6 Mbps.
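In aggregate terms (again just arithmetic on the figures above, assuming
all 50 flows really sustain those averages):

# Aggregate utilisation implied by the per-flow numbers above (assumes
# all 50 flows really sustain the quoted averages).
link_mbps = 500.0
flows     = 50

for label, per_flow_mbps in (("domU", 2.5), ("dom0", 6.0)):
    agg = flows * per_flow_mbps
    print("%s: %.0f Mbps aggregate (%.0f%% of the 500 Mbps cap)"
          % (label, agg, 100 * agg / link_mbps))

That is roughly 125 Mbps (25% of the cap) from the VM versus 300 Mbps
(60%) from dom0.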
I'm trying to understand the difference in performance. It seems to me
that the I/O descriptor ring sizes are hard-coded to 256 -- could that
be a bottleneck here? If not, have people experienced similar problems?
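To put a number on what 256 slots amounts to, here is a rough sketch
under the (unverified) worst-case assumption that each slot carries one
MTU-sized frame and slots are only recycled once per wire RTT -- not a
claim about how netfront/netback actually batches:

# What 256 slots amount to if each slot carries one ~1500-byte frame,
# and the throughput that would cap IF slots were only recycled once
# per wire RTT -- a worst-case assumption, not a claim about how
# netfront/netback actually behaves.
ring_slots = 256
mtu_bytes  = 1500
rtt_s      = 0.080

in_flight = ring_slots * mtu_bytes          # data the ring can hold
ceiling   = in_flight * 8 / rtt_s / 1e6     # Mbps under that assumption

print("ring capacity  : %.0f KB" % (in_flight / 1e3))   # ~384 KB
print("RTT-limited cap: %.0f Mbps" % ceiling)            # ~38 Mbps

Of course, if dom0 recycles slots much faster than the 80 ms path RTT
(as I'd expect), that ceiling doesn't apply directly, which is exactly
why I'm asking.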
TIA
--
Diwaker Gupta
http://resolute.ucsd.edu/diwaker