Hi,
Thanks for the reply. Here's what I have:
Were you testing with 65536 bytes exactly for some reason?
This is stop and go traffic and normally the kernel doesn't
use the entire buffer to store data - it's roughly half...
Could you test with different send sizes?
No special reason. What do you mean by the kernel not using the entire
buffer to store the data? I have tried different send sizes, and it
doesn't make any noticeable difference.
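(For reference, varying the send size is just netperf's test-specific -m
option, e.g., with the same host and 80-second runs as in the tests below:

  netperf -H dw15.ucsd.edu -l 80 -- -m 16384
  netperf -H dw15.ucsd.edu -l 80 -- -m 131072
)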
If you just want to improve your performance, increase your
buffer sizes!
For example:
tcp_rmem = 4096 1398080 8388608
tcp_wmem = 4096 1398080 8388608
The performance only improved a little.
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
(172.19.222.215) port 0 AF_INET
Recv     Send     Send
Socket   Socket   Message   Elapsed
Size     Size     Size      Time      Throughput
bytes    bytes    bytes     secs.     10^6bits/sec

1398080  1398080  1398080   80.39     26.55
It can't compare with the domain0-to-domain0 result.
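(For anyone reproducing this, those settings correspond to something like
the following sysctl commands on both sender and receiver:

  sysctl -w net.ipv4.tcp_rmem="4096 1398080 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 1398080 8388608"
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
)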
Were you seeing losses, queue overflows?
How do I check that?
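Is it something like the following? (Guessing at the standard tools here,
and assuming the interface inside the domU is eth0.)

  netstat -s | grep -i retrans      # TCP retransmissions
  ifconfig eth0                     # RX/TX "dropped" / "overruns" counters
  tc -s qdisc show dev eth0         # qdisc drops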
More importantly, how much memory do you have in the system and
how were you allocating it?
sudo xm list says 127MB for the VM.
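If 127MB is too little, I guess I could bump it up with something like
(assuming the domU is named vm1 and dom0 has memory to spare):

  xm mem-set vm1 256

or by setting "memory = 256" in the domain config file and restarting it.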
Is it really a problem with the buffer size and send size? domain0
achieves such good performance under the same settings. Could the
bottleneck be overhead in the VM?
Also, I have performed some more tests, with bandwidth 150Mbit/s and RTT 40ms:
domain0 to domain0
Recv     Send     Send
Socket   Socket   Message   Elapsed
Size     Size     Size      Time      Throughput
bytes    bytes    bytes     secs.     10^6bits/sec

87380    65536    65536     80.17     135.01
VM to VM
Recv     Send     Send
Socket   Socket   Message   Elapsed
Size     Size     Size      Time      Throughput
bytes    bytes    bytes     secs.     10^6bits/sec

87380    65536    65536     80.55     134.80
Under these settings, VM to VM performed as well as domain0 to domain0.
If I increase or decrease the BDP, the performance drops again.
Any idea what is causing the problem?
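(If my arithmetic is right, the bandwidth-delay products work out to roughly:

  300 Mbit/s x 0.080 s = 24 Mbit = ~3.0 MB in flight
  150 Mbit/s x 0.040 s =  6 Mbit = ~0.75 MB in flight

both well under the 8388608-byte (8 MB) tcp_rmem/tcp_wmem maximum.)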
Thanks.
Cherie
On 5/26/05, Nivedita Singhvi <niv@xxxxxxxxxx> wrote:
Cherie Cheung wrote:
Hi,
I have been simulating a network using dummynet and evaluating it
I haven't played with dummynet and don't know if there are
additional issues inherent in using dummynet itself...
using netperf. Xen3.0-unstable is used and the VMs are
vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
Using netperf, I sent data using TCP from domain-0 of machine 1 to
domain-0 of machine 2. Then I repeat the experiment, but this time
from VM-1 of machine 1 to VM-1 of machine 2.
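(Roughly speaking, the dummynet shaping for such a link is two ipfw pipes
on the box emulating the WAN; a sketch, assuming a FreeBSD/ipfw machine in
the path carrying both directions, with arbitrary rule numbers:

  ipfw add 100 pipe 1 ip from any to any out
  ipfw add 200 pipe 2 ip from any to any in
  ipfw pipe 1 config bw 300Mbit/s delay 40    # 40 ms each way -> ~80 ms RTT
  ipfw pipe 2 config bw 300Mbit/s delay 40
)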
However, the performance across the two VMs is substantially worse
than that across domain-0. Here's the result:
FROM VM to VM:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
(172.19.222.210) port 0 AF_INET
Recv     Send     Send
Socket   Socket   Message   Elapsed
Size     Size     Size      Time      Throughput
bytes    bytes    bytes     secs.     10^6bits/sec

87380    65536    65536     80.28     24.83
Your send message size is exactly your socket size. It is also
the size of the default write buffer. The kernel uses half the
buffer (very roughly) for data.
Were you testing with 65536 bytes exactly for some reason?
This is stop and go traffic and normally the kernel doesn't
use the entire buffer to store data - it's roughly half...
Could you test with different send sizes?
FROM domain-0 to domain-0:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
(137.110.222.236) port 0 AF_INET
Recv     Send     Send
Socket   Socket   Message   Elapsed
Size     Size     Size      Time      Throughput
bytes    bytes    bytes     secs.     10^6bits/sec

87380    65536    65536     80.11     280.62
Here's the setting of the network buffer:
net.core.wmem_max = 8388608
net.core.rmem_max = 8388608
net.ipv4.tcp_bic = 1
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608
Does anyone know why the performance across two VMs is so bad? Any fix
to it? Thank you.
If you just want to improve your performance, increase your
buffer sizes!
For example:
tcp_rmem = 4096 1398080 8388608
tcp_wmem = 4096 1398080 8388608
Were you seeing losses, queue overflows?
More importantly, how much memory do you have in the system and
how were you allocating it?
thanks,
Nivedita
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users