
Re: [Xen-devel] RE: [Xen-users] Network performance - sending from VM to VM using TCP


  • To: bin.ren@xxxxxxxxxxxx
  • From: Cherie Cheung <ccyxen@xxxxxxxxx>
  • Date: Sat, 28 May 2005 06:56:44 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 27 May 2005 22:56:04 +0000
  • Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=SJ1AdQkM97mczwrUV2y72jsnmc5r7o6Chtr3hoTGVxm8V27Xw+AT205aGd3xgs0ySgbP1v7DzIDMSz0Z6lxNpia8w/lm6REtmK6c1TXut+zHbt7IFWfqj5UZSWaft+MCVGqM+oPj/WEFrR5jMfW/3HlAsjIO0Kt/4W0sraQZyS4=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Bin,

Thank you so much. I'll try that out and see whether I can reproduce these results.

Cherie

On 5/28/05, Bin Ren <bin.ren@xxxxxxxxx> wrote:
> Cherie:
> 
> I've tried to repeat the testing and here are the results:
> 
> Basic setup: the Xen machine runs the latest xen-unstable and Debian sarge;
> the server runs the latest Gentoo Linux (native). Both have Intel e1000 MT
> NICs and are connected directly through a 1Gbps switch.
> 
> (1) AFAIK, dummynet is for FreeBSD only, so I use the Linux kernel
> network emulator module
> (http://developer.osdl.org/shemminger/netem/index.html) and set the
> delay of the server's eth0 to 10ms using the command 'tc qdisc add dev
> eth0 root netem delay 10ms'.
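> 
> A minimal sketch of that netem setup, plus a quick way to verify the
> delay is in place (assuming eth0 is the server's outgoing interface,
> as above):
> 
>     # add a fixed 10ms one-way delay on the server's outgoing interface
>     tc qdisc add dev eth0 root netem delay 10ms
>     # confirm the qdisc is installed
>     tc qdisc show dev eth0
>     # rough check of the added latency from the other machine
>     ping -c 5 server
>     # remove the emulated delay when done
>     tc qdisc del dev eth0 root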
> 
> (2) With the Linux kernel's default networking settings (i.e. no TCP
> tuning), netperf -H server -l 30:
> 
> without delay, without tuning
> dom0->server: 665Mbps
> dom1->server: 490Mbps
> 
> with 10ms delay, without tuning
> dom0->server: 82Mbps
> dom1->server: 73Mbps
> 
> Note that *both* dom0 and dom1 show significant throughput drops. This
> is different from what you've seen.
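> 
> For reference, a minimal sketch of the netperf invocation behind each
> of the figures above (TCP_STREAM is netperf's default test; 'server'
> is a placeholder hostname):
> 
>     # 30-second TCP bulk-transfer test from this host to 'server';
>     # the last column of the output is throughput in 10^6 bits/sec
>     netperf -H server -l 30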
> 
> (3) With the Linux TCP tuning applied
> (http://www-didc.lbl.gov/TCP-tuning/linux.html), netperf -H server -l
> 30:
> 
> without delay, with tuning
> dom0->server: 654Mbps
> dom1->server: 488Mbps
> 
> with 10ms delay, with tuning
> dom0->server: 610Mbps
> dom1->server: 480Mbps
> 
> Note: without delay, tuning provides no gain in throughput. However,
> with delay, both dom0 and dom1 see only a *slight* drop in throughput.
> This makes sense, as the Linux TCP/IP stack needs tuning for long fat
> pipes. In your case, 300Mbps with an 80ms RTT effectively emulates a
> transcontinental link. Still, YMMV.
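> 
> To make that concrete: the bandwidth-delay product of your emulated
> link is 300 Mbit/s x 0.08 s = 24 Mbit, i.e. roughly 3 MB of data in
> flight, so the TCP window (and the socket buffers backing it) must be
> able to grow far beyond the old 64 KB default before the pipe can be
> filled. A sketch of the kind of sysctl settings the LBL page suggests
> (the exact buffer sizes below are illustrative, not quoted from that
> page):
> 
>     # raise the per-socket buffer ceilings well above the ~3 MB BDP
>     sysctl -w net.core.rmem_max=8388608
>     sysctl -w net.core.wmem_max=8388608
>     # min / default / max sizes used by TCP buffer autotuning
>     sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
>     sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"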
> 
> - Bin
> 
> On 5/27/05, Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> wrote:
> >  > I have been simulating a network using dummynet and
> > > evaluating it using netperf. Xen3.0-unstable is used and the
> > > VMs are vmlinuz-2.6.11-xenU. The simulated link is 300Mbps
> > > with 80ms RTT.
> > > Using netperf, I sent data using TCP from domain-0 of machine
> > > 1 to domain-0 of machine 2. Then I repeated the experiment, but
> > > this time from VM-1 of machine 1 to VM-1 of machine 2.
> > >
> > > However, the performance across the two VMs is substantially
> > > worse than that across domain-0. Here's the result:
> >
> > Someone else was having problems with low performance via dummynet a
> > couple of months back. It's presumably dummynet's packet scheduling
> > causing some bad interaction with the batch processing of packets in
> > netfront/back.
> >
> > The first step to understanding this is probably to capture a tcpdump
> > and look at it with tcptrace to see what's happening with window sizes
> > and scheduling of packets.
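> >
> > A minimal sketch of that capture-and-analysis step (the interface
> > name, capture file and server address are placeholders):
> >
> >     # capture full packets to/from the netperf peer during a test run
> >     tcpdump -i eth0 -s 0 -w xfer.pcap host <server-ip>
> >     # per-connection summary: window advertisements, retransmits, RTT
> >     tcptrace -l xfer.pcap
> >     # or produce time-sequence graphs (viewable with xplot)
> >     tcptrace -G xfer.pcap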
> >
> > Ian
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> >
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

