This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [FYI] Much difference between netperf results on every run

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [FYI] Much difference between netperf results on every run
From: INAKOSHI Hiroya <inakoshi.hiroya@xxxxxxxxxxxxxx>
Date: Fri, 27 Oct 2006 18:49:31 +0900
Cc: Andrew Theurer <habanero@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 27 Oct 2006 02:51:13 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C15D8940.2C9E%Keir.Fraser@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C15D8940.2C9E%Keir.Fraser@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (Windows/20060909)
Hi, Andrew and Keir,

sorry for the delayed response.

1/ The updated results show that there is still a large (about +/-75%) deviation in netperf throughput even with vcpu-pin.
2/ The throughput was somewhat worse with queue_length=100, though the difference was not statistically significant according to a t-test, and its deviation was still large. My expectation was that throughput would improve with less packet loss and that its deviation would shrink.

                     Dom-0 to Dom-U  Dom-U to Dom-0
default queue_length 975.06(+/-5.11) 386.04(+/-292.30)
queue_length=100     954.31(+/-3.03) 293.41(+/-180.94)
                                      (unit: Mbps)
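For reference, the significance check mentioned above can be reproduced without external packages. Below is a minimal sketch of Welch's t statistic; the per-run throughput samples are hypothetical stand-ins, not the actual measurements:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical Dom-U to Dom-0 throughputs (Mbps), not the actual runs
default_q = [612.0, 98.5, 455.2, 310.7, 453.8]
q100      = [390.1, 120.4, 280.0, 310.3, 366.2]

t = welch_t(default_q, q100)
print("t = %.3f" % t)
```

With deviations this large relative to the means, the t statistic stays small, which is consistent with the difference not reaching significance.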

The Xen version was unstable C/S 11834. Each domain has one vcpu. Vcpu-pin is configured so that a logical processor is dedicated to each vcpu.
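For anyone wanting to reproduce the pinning setup described above, something like the following should work with xm; the guest name "domU" and the CPU numbers are placeholders for the actual configuration:

```shell
# Pin dom0's single vcpu to logical processor 0
xm vcpu-pin Domain-0 0 0

# Pin the guest's single vcpu to logical processor 1
# ("domU" is a placeholder for the real domain name)
xm vcpu-pin domU 0 1

# Verify that the pinning took effect
xm vcpu-list
```

Pinning can also be made persistent via "cpus" in the domain config file, so it is applied at domain creation rather than after boot.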

For comparison, the results for ia64 with the same vcpu-pin configuration are below (Xen version: ia64-unstable C/S 11961.)

                     Dom-0 to Dom-U  Dom-U to Dom-0
default queue_length 279.27(+/-5.78) 1444.7(+/-10.06)
queue_length=100     278.37(+/-3.84) 1498.90(+/-12.02)
                                      (unit: Mbps)


Keir Fraser wrote:
> On 19/10/06 7:56 pm, "Andrew Theurer" <habanero@xxxxxxxxxx> wrote:
>
>>> the throughput measured by netperf differs from time to time. The
>>> changeset was xen-unstable.hg C/S 11760. This is observed when I
>>> executed a netperf on DomU connecting to a netserver on Dom0 in the
>>> same box. The observed throughput was between 185Mbps and 3854Mbps. I
>>> have never seen such a difference on ia64.
>>
>> I am also seeing this, but with not as much variability. Actually I am
>> seeing significantly less throughput (1/6th) for dom0->domU than
>> domU->dom0, and for dom0->domU, I am seeing +/- 15% variability. I am
>> looking into it, but so far I have not discovered anything.
>
> Do you know what cpus the domU and dom0 are using? You might try pinning
> the domains to cpus and see what happens.
>
> Current suspicion is packet loss due to insufficient receive buffers. Try
> specifying module option "netback.queue_length=100" in domain 0. We're
> working on a proper fix for 3.0.3-1.
>
>  -- Keir
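Keir's suggested module option can be applied in a couple of ways, depending on whether netback is built as a module in the dom0 kernel; the sysfs path below is an assumption that may vary by kernel version:

```shell
# If netback is built as a module, reload it with a deeper receive queue
modprobe -r netback
modprobe netback queue_length=100

# To make the setting persistent, add to /etc/modprobe.conf:
#   options netback queue_length=100

# Whether this parameter is exported read-only at runtime depends on the
# kernel build; if it is, it should appear under
#   /sys/module/netback/parameters/queue_length
```

If netback is compiled into the kernel rather than modular, the option would instead be passed on the dom0 kernel command line as netback.queue_length=100.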

Xen-devel mailing list
