xen-devel

RE: [Xen-devel] [PATCH] Network Checksum Removal

To: "Jon Mason" <jdmason@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Network Checksum Removal
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 24 May 2005 00:59:19 +0100
Cc: Andrew Theurer <habanero@xxxxxxxxxx>, bin.ren@xxxxxxxxxxxx
> I get the following domU->dom0 throughput on my system (using 
> netperf3 TCP_STREAM testcase):
> tx on         ~1580Mbps
> tx off        ~1230Mbps
> 
> with my previous patch (on Friday's build), I was seeing the 
> following:
> with patch    ~1610Mbps
> no patch      ~1100Mbps
> 
> The slight difference between the two might be caused by the
> changes that were incorporated into Xen between those dates.
> If you think it is worth the time, I can backport the latest
> patch to Friday's build to see whether that makes a difference.
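
For reference, here is a minimal sketch of how such a run might be
scripted, assuming ethtool and netperf are available in the guest; the
interface name, netserver address, and output parsing are assumptions,
not part of Jon's actual setup:

#!/usr/bin/env python
# Sketch of the measurement quoted above: toggle transmit checksum
# offload with ethtool, then measure TCP_STREAM throughput with
# netperf. IFACE and NETSERVER are hypothetical placeholders.
import subprocess

IFACE = "eth0"             # assumed guest interface name
NETSERVER = "192.168.0.1"  # assumed address of the netserver peer

def run_trial(tx_offload):
    # "ethtool -K <iface> tx on|off" flips TX checksum offload.
    subprocess.check_call(
        ["ethtool", "-K", IFACE, "tx", "on" if tx_offload else "off"])
    # netperf's TCP_STREAM summary ends with a line whose last field
    # is throughput in 10^6 bits/sec (assumes the default output).
    out = subprocess.check_output(
        ["netperf", "-H", NETSERVER, "-t", "TCP_STREAM", "-l", "30"])
    return float(out.decode().split()[-1])

if __name__ == "__main__":
    for setting in (True, False):
        print("tx %s: ~%.0f Mbps"
              % ("on" if setting else "off", run_trial(setting)))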

Are you sure these aren't within 'experimental error'? I can't think of
anything that's changed since Friday that could be affecting this, but
it would be good to dig a bit further, as the difference in the 'no
patch' results is quite significant.
It might be revealing to re-run the same tests on the unpatched
Fri/Sat/Sun trees.
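
One way to settle the 'experimental error' question is to repeat each
configuration several times and compare the spread; a sketch, reusing
the hypothetical run_trial() helper from the script above:

# Repeat each configuration several times so the ~130 Mbps gap can be
# judged against run-to-run noise (reuses run_trial() from above).
from statistics import mean, stdev

TRIALS = 10

for tx in (True, False):
    samples = [run_trial(tx) for _ in range(TRIALS)]
    print("tx %s: mean %.0f Mbps, stddev %.0f Mbps"
          % ("on" if tx else "off", mean(samples), stdev(samples)))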

BTW, dom0<->domU is not that interesting, as I'd generally discourage
people from running services in dom0. I'd be really interested to see
the following tests (a scripted pinning sketch follows the list):

domU <-> external [dom0 on cpu0; dom1 on cpu1]
domU <-> external [dom0 on cpu0; dom1 on cpu0]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** on a 4-way]
domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu0]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu1]
domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu1]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** cpu2 hyperthreaded w/ cpu0]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu3 ** cpu3 hyperthreaded w/ cpu1]
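
A sketch of how the pinning for this matrix might be driven follows;
the "xm pincpu <domain> <vcpu> <cpu>" form is an assumption about the
xm tool of this era (later xm versions call it "vcpu-pin"), and the
domain names are hypothetical:

# Pin each domain's VCPU 0 to a physical CPU before a test run.
# "xm pincpu <domain> <vcpu> <cpu>" is assumed here; check your xm
# version, as later releases rename this to "xm vcpu-pin".
import subprocess

def pin(domain, cpu, vcpu=0):
    subprocess.check_call(["xm", "pincpu", domain, str(vcpu), str(cpu)])

# Example: the first domU <-> domU case in the list above.
pin("Domain-0", 0)   # dom0 on cpu0
pin("dom1", 1)       # dom1 on cpu1
pin("dom2", 2)       # dom2 on cpu2 (4-way box)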

This might help us understand the performance of inter-domain
networking rather better than we do at present. If you could fill in a
few of these, that would be great.

Best,
Ian
