WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
To: Xen User <xen@xxxxxxxxxx>
Subject: Re: [Xen-users] Re: [Xen-devel] Network performance - sending from VM to VM using TCP
From: Kip Macy <kip.macy@xxxxxxxxx>
Date: Thu, 26 May 2005 09:57:04 -0700
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 26 May 2005 16:56:26 +0000
Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=Ni92wFw9OoGCFKz/e8J/RgAwO8T5RR7FSPpJKJgfTCLGcCM69hWmpy6D69zXCnq0iGxjUIHFyGytKZNsS4BakQDVUboZAjw8I3oZ7q1+AvmGUNgwB+7WT+ODoqc0ExGg98p0kkakfloObZVCTnugpmc7DK5e5fvFn/Lfxx1C9ck=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4295F2DE.2080509@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4713f859050525152448a0f609@xxxxxxxxxxxxxx> <4295091D.10505@xxxxxxxxxx> <4713f85905052522286da17fd8@xxxxxxxxxxxxxx> <4295F2DE.2080509@xxxxxxxxxx>
Reply-to: Kip Macy <kip.macy@xxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Bandwidth Delay Product - Google can give you better examples than I can.
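
In short: BDP = bandwidth x round-trip time, i.e. the amount of data that has
to be "in flight" to keep the pipe full, and the socket buffers need to be at
least roughly that big. A back-of-the-envelope sketch for the two links
mentioned below (the bandwidth/RTT figures come from this thread, the rest is
just arithmetic):

def bdp_bytes(bandwidth_mbit, rtt_ms):
    # Bandwidth-delay product: bytes in flight on a fully utilised path.
    return bandwidth_mbit * 1e6 * (rtt_ms / 1000.0) / 8

print(bdp_bytes(300, 80))   # ~3,000,000 bytes for the 300 Mbit/s, 80 ms link
print(bdp_bytes(150, 40))   # ~750,000 bytes for the 150 Mbit/s, 40 ms link

That is presumably why Nivedita suggested much larger tcp_rmem/tcp_wmem
defaults than the stock 87380/65536.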

On 5/26/05, Xen User <xen@xxxxxxxxxx> wrote:
> Cherie Cheung wrote:
> > Hi,
> >
> > Thanks for answering me. Here's what I have:
> >
> >
> >>Were you testing with 65536 bytes exactly for some reason?
> >>This is stop and go traffic and normally the kernel doesn't
> >>use the entire buffer to store data - it's roughly half...
> >>
> >>Could you test with different send sizes?
> >
> >
> > No special reason for that. What do you mean by the kernel not using the
> > entire buffer to store the data? I have tried different send sizes; it
> > doesn't make any noticeable difference.
> >
> >
> >>If you just want to improve your performance, increase your
> >>buffer sizes!
> >>
> >>For example:
> >>tcp_rmem = 4096 1398080 8388608
> >>tcp_wmem = 4096 1398080 8388608
> >
> >
> > The performance only improved a little.
> >
> > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
> > (172.19.222.215) port 0 AF_INET
> > Recv   Send    Send
> > Socket Socket  Message  Elapsed
> > Size   Size    Size     Time     Throughput
> > bytes  bytes   bytes    secs.    10^6bits/sec
> >
> > 1398080 1398080 1398080    80.39      26.55
> >
> > It still can't compare with that of domain0 to domain0.
> >
> >
> >>Were you seeing losses, queue overflows?
> >
> > How do I check that?
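
(One rough way to check, nothing Xen-specific - per-interface drop counters
live in /proc/net/dev and netstat -s reports TCP retransmits. A minimal
sketch:)

import subprocess

# Per-interface RX/TX drop counters from /proc/net/dev
# (two header lines, then one line per interface).
with open('/proc/net/dev') as f:
    for line in f.readlines()[2:]:
        iface, data = line.split(':', 1)
        fields = data.split()
        print('%-8s rx_drop=%s tx_drop=%s' % (iface.strip(), fields[3], fields[11]))

# TCP-level losses: look for retransmit counters in `netstat -s`.
out = subprocess.run(['netstat', '-s'], capture_output=True, text=True).stdout
for line in out.splitlines():
    if 'retransmit' in line.lower():
        print(line.strip())
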
> >
> >
> >>More importantly, how much memory do you have in the system and
> >>how were you allocating it?
> >
> > It said 127MB in the output of sudo xm list.
> >
> > Is it really a problem with the buffer size and send size? domain0
> > can achieve such good performance under the same settings. Or is the
> > bottleneck caused by overhead in the VM?
> >
> > Also, I performed some more tests,
> > this time with bandwidth 150Mbit/s and RTT 40ms:
> >
> > domain0 to domain0
> > Recv   Send    Send
> > Socket Socket  Message  Elapsed
> > Size   Size    Size     Time     Throughput
> > bytes  bytes   bytes    secs.    10^6bits/sec
> >
> >  87380  65536  65536    80.17     135.01
> >
> > vm to vm
> > Recv   Send    Send
> > Socket Socket  Message  Elapsed
> > Size   Size    Size     Time     Throughput
> > bytes  bytes   bytes    secs.    10^6bits/sec
> >
> >  87380  65536  65536    80.55     134.80
> >
> > Under these settings, VM to VM performed as well as domain0 to domain0.
> > If I increased or decreased the BDP, the performance dropped again.
> 
> Hi Cherie,
> 
> Please pardon my ignorance.  What is BDP?
> 
> TIA
> 
> >
> > Any idea what is causing the problem?
> >
> > Thanks.
> >
> > Cherie
> >
> >
> >
> > On 5/26/05, Nivedita Singhvi <niv@xxxxxxxxxx> wrote:
> >
> >>Cherie Cheung wrote:
> >>
> >>>Hi,
> >>>
> >>>I have been simulating a network using dummynet and evaluating it
> >>
> >>I haven't played with dummynet and don't know if there are
> >>additional issues inherent in using dummynet itself...
> >>
> >>
> >>>using netperf. Xen3.0-unstable is used and the VMs are
> >>>vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
> >>>Using netperf, I sent data using TCP from domain-0 of machine 1 to
> >>>domain-0 of machine 2. Then I repeat the experiment, but this time
> >>>from VM-1 of machine 1 to VM-1 of machine 2.
> >>>
> >>>However, the performance across the two VMs is substantially worse
> >>>than that across domain-0. Here's the result:
> >>>
> >>>FROM VM to VM:
> >>>TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
> >>>(172.19.222.210) port 0 AF_INET
> >>>Recv   Send    Send
> >>>Socket Socket  Message  Elapsed
> >>>Size   Size    Size     Time     Throughput
> >>>bytes  bytes   bytes    secs.    10^6bits/sec
> >>>
> >>> 87380  65536  65536    80.28      24.83
> >>
> >>Your send message size is exactly your socket size. It is also
> >>the size of the default write buffer. The kernel uses half the
> >>buffer (very roughly) for data.
> >>
> >>Were you testing with 65536 bytes exactly for some reason?
> >>This is stop and go traffic and normally the kernel doesn't
> >>use the entire buffer to store data - it's roughly half...
> >>
> >>Could you test with different send sizes?
> >>
> >>
> >>>FROM domain-0 to domain-0:
> >>>TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
> >>>(137.110.222.236) port 0 AF_INET
> >>>Recv   Send    Send
> >>>Socket Socket  Message  Elapsed
> >>>Size   Size    Size     Time     Throughput
> >>>bytes  bytes   bytes    secs.    10^6bits/sec
> >>>
> >>> 87380  65536  65536    80.11     280.62
> >>>
> >>>Here are the network buffer settings:
> >>>
> >>>net.core.wmem_max = 8388608
> >>>net.core.rmem_max = 8388608
> >>>net.ipv4.tcp_bic = 1
> >>>net.ipv4.tcp_rmem = 4096        87380   8388608
> >>>net.ipv4.tcp_wmem = 4096        65536   8388608
> >>>
> >>>Does anyone know why the performance across two VMs is so bad? Is there
> >>>a fix for it? Thank you.
> >>
> >>If you just want to improve your performance, increase your
> >>buffer sizes!
> >>
> >>For example:
> >>tcp_rmem = 4096 1398080 8388608
> >>tcp_wmem = 4096 1398080 8388608
> >>
> >>Were you seeing losses, queue overflows?
> >>
> >>More importantly, how much memory do you have in the system and
> >>how were you allocating it?
> >>
> >>
> >>thanks,
> >>Nivedita
> >>
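
(For what it's worth, those tcp_rmem/tcp_wmem triples can be applied at
runtime by writing the corresponding /proc/sys entries - the values below are
the ones quoted above; needs root, and is equivalent to sysctl -w:)

# Apply the suggested min/default/max triples (values from this thread).
SETTINGS = {
    '/proc/sys/net/ipv4/tcp_rmem': '4096 1398080 8388608',
    '/proc/sys/net/ipv4/tcp_wmem': '4096 1398080 8388608',
}

for path, value in SETTINGS.items():
    with open(path, 'w') as f:
        f.write(value + '\n')
    print(path, '=', value)
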
> >
> >

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users