
Re: [Xen-devel] [PATCH] Network Checksum Removal


  • To: Rolf Neugebauer <rolf.neugebauer@xxxxxxxxx>
  • From: Bin Ren <bin.ren@xxxxxxxxx>
  • Date: Tue, 24 May 2005 01:38:39 +0100
  • Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Andrew Theurer <habanero@xxxxxxxxxx>, Jon Mason <jdmason@xxxxxxxxxx>
  • Delivery-date: Tue, 24 May 2005 00:38:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 5/24/05, Rolf Neugebauer <rolf.neugebauer@xxxxxxxxx> wrote:
> These results are pretty bad.
> 
> What do you get for dom0->external? That definitely should be close or equal
> to native.

With default BVT, dom0->external gets 643Mbps; native gets 744Mbps.

> Have you tweaked /proc/sys/net/core/rmem_max?

No. I once did TCP tuning on native Linux and increased the
throughput to around 810Mbps, but it wasn't very stable and
occasionally produced weird behaviour, so I turned tuning off on both
server and client.

> Is the socket buffer set to some large value?

Both sender and receiver buffers are 32K.
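(For gigabit traffic, 32K buffers may well be the limiting factor. A minimal sketch of how one might request larger per-socket buffers — the 64K value below is just illustrative; the kernel silently caps requests at /proc/sys/net/core/rmem_max and wmem_max, which is why Rolf's echo into rmem_max matters:)

```python
import socket

# Illustrative value only: double the 32K buffers discussed above.
REQUESTED = 64 * 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUESTED)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)

# getsockopt() reports what was actually granted; on Linux it is
# roughly double the requested size (bookkeeping overhead), and it
# never exceeds the rmem_max/wmem_max sysctl caps.
granted_snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
granted_rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted_snd, granted_rcv)
s.close()
```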

> Are you transmitting/receiving enough data?

Each test lasts 50 seconds, transmitting around 3GB of data.

> 
> I don't know netperf but for ttcp I would normally do:
> 
> echo 1048576 > /proc/sys/net/core/rmem_max
> ttcp -b 65536 (or similar) ...
> And then transmit a few gigabytes
> 
> What's the interrupt rate etc.

I haven't checked yet; I'll get you the numbers tomorrow.
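(One way to get the interrupt rate is to sample /proc/interrupts twice and diff the counters. A sketch with made-up sample snapshots — on a live box you'd read the real file at two points in time instead:)

```python
# Two illustrative /proc/interrupts snapshots, nominally 1s apart.
SNAP_T0 = """\
           CPU0
  0:    1000000    IO-APIC-edge  timer
 16:      50000    IO-APIC-level  eth0
"""
SNAP_T1 = """\
           CPU0
  0:    1001000    IO-APIC-edge  timer
 16:      58000    IO-APIC-level  eth0
"""

def parse(snapshot):
    """Map IRQ number -> cumulative count, skipping the CPU header row."""
    counts = {}
    for line in snapshot.splitlines()[1:]:
        fields = line.split()
        counts[fields[0].rstrip(':')] = int(fields[1])
    return counts

def rates(t0, t1, interval=1.0):
    """Per-second interrupt rate for each IRQ between the two snapshots."""
    a, b = parse(t0), parse(t1)
    return {irq: (b[irq] - a[irq]) / interval for irq in a}

print(rates(SNAP_T0, SNAP_T1))  # eth0 (IRQ 16): 8000 interrupts/sec here
```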

What I'm really obsessed with right now is that (1) dom1->external with
default BVT gives only ~400Mbps, while (2) dom1->external with my EEVDF
scheduler (everything else exactly the same) gives 610Mbps, very
close to dom0->external. The scheduler latency histograms suggest the
cause is *far too frequent* context switches in BVT. I'm still
digging.
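(The context-switch hypothesis is easy to cross-check from the 'ctxt' counter in /proc/stat, sampled twice. A sketch with illustrative sample text; the numbers are made up:)

```python
# Two illustrative /proc/stat snapshots, nominally 1s apart.
STAT_T0 = "cpu  100 0 200 9000\nctxt 500000\nbtime 1116890000\n"
STAT_T1 = "cpu  110 0 210 9980\nctxt 540000\nbtime 1116890000\n"

def ctxt(stat_text):
    """Return the cumulative context-switch count from /proc/stat text."""
    for line in stat_text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

def switch_rate(t0, t1, interval=1.0):
    """Context switches per second between two snapshots."""
    return (ctxt(t1) - ctxt(t0)) / interval

print(switch_rate(STAT_T0, STAT_T1))  # 40000 switches/sec in this sample
```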

Thanks a lot,
Bin

> 
> Rolf
> 
> 
> On 23/5/05 10:48 pm, "Bin Ren" <bin.ren@xxxxxxxxx> wrote:
> 
> > On 5/23/05, Nivedita Singhvi <niv@xxxxxxxxxx> wrote:
> >> Bin Ren wrote:
> >>> I've added the support for ethtools. By turning on and off netfront
> >>> checksum offloading, I'm getting the following throughput numbers,
> >>> using iperf. Each test was run three times. CPU usages are quite
> >>> similar in two cases ('top' output). Looks like checksum computation
> >>> is not a major overhead in domU networking.
> >>>
> >>> dom0/1/2 all have 128M memory. dom0 has e1000 tx checksum offloading 
> >>> turned
> >>> on.
> >>
> >> Yeah, if you want to do anything network intensive, 128MB is just
> >> not enough - you really need more memory in your system.
> >
> > I've given all the domains 256M memory and switched to netperf
> > TCP_STREAM (netperf -H server). almost no change. Details:
> >
> > dom1->external: 420Mbps
> > dom1->dom0: 437Mbps
> > dom0->dom1: 200Mbps (!!!)
> > dom1->dom2: 327Mbps
> >
> >>
> >>> With Tx checksum on:
> >>>
> >>> dom1->dom2: 300Mb/s (dom0 cpu maxed out by software interrupts)
> >>> dom1->dom0: 459Mb/s (dom0 cpu 80% in SI, dom1 cpu 20% in SI)
> >>> dom1->external: 439Mb/s (over 1Gb/s ethernet) (dom0 cpu 50% in SI,
> >>> dom1 60% in SI)
> >>>
> >>> With Tx checksum off:
> >>>
> >>> dom1->dom2: 301Mb/s
> >>> dom1->dom0: 454Mb/s
> >>> dom1->external: 437Mb/s (over 1Gb/s ethernet)
> >>
> >>
> >> iperf is a directional send test, correct?
> >> i.e. is dom1 -> dom0 perf the same as dom0 -> dom1 for you?
> >
> > Please see above.
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> 
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

