xen-devel
RE: [Xen-devel] [PATCH] Network Checksum Removal
> I get the following domU->dom0 throughput on my system (using
> netperf3 TCP_STREAM testcase):
> tx on ~1580Mbps
> tx off ~1230Mbps
>
> with my previous patch (on Friday's build), I was seeing the
> following:
> with patch ~1610Mbps
> no patch ~1100Mbps
>
> The slight difference between the two might be caused by the
> changes that were incorporated in xen between those dates.
> If you think it is worth the time, I can back port the latest
> patch to Friday's build to see if that makes a difference.
Are you sure these aren't within 'experimental error'? I can't think of
anything that's changed since Friday that could be affecting this, but
it would be good to dig a bit further as the difference in the 'no patch'
results is quite significant.
It might be revealing to try running some results on the unpatched
Fri/Sat/Sun tree.
BTW, dom0<->domU is not that interesting as I'd generally discourage
people from running services in dom0. I'd be really interested to see
the following tests:
domU <-> external [dom0 on cpu0; dom1 on cpu1]
domU <-> external [dom0 on cpu0; dom1 on cpu0]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** on a 4 way]
domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu0 ]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu1 ]
domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu1 ]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** cpu2
hyperthread w/ cpu 0]
domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu3 ** cpu3
hyperthread w/ cpu 1]
This might help us understand the performance of interdomain networking
rather better than we do at present. If you could fill in a few of these,
that would be great.
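For anyone reproducing these runs, the pinned configurations above can be set
up with Xen's `xm vcpu-pin` before each netperf pass. The sketch below covers
the first case (dom0 on cpu0, dom1 on cpu1, dom2 on cpu2); the domain names
dom1/dom2 and the $DOM2_IP address are placeholders, and the commands are
echoed rather than executed so the script is safe to run outside a Xen host:

```shell
#!/bin/sh
# Sketch: pin each domain's vcpu 0 to a distinct physical cpu, then run
# netperf from dom1 against a netserver in dom2. Syntax is
# "xm vcpu-pin <domain> <vcpu> <cpu>"; dom1/dom2 are hypothetical names.

PIN_DOM0="xm vcpu-pin 0 0 0"       # dom0 -> cpu0
PIN_DOM1="xm vcpu-pin dom1 0 1"    # dom1 -> cpu1
PIN_DOM2="xm vcpu-pin dom2 0 2"    # dom2 -> cpu2

# Print the commands instead of executing them, so this is runnable anywhere:
echo "$PIN_DOM0"
echo "$PIN_DOM1"
echo "$PIN_DOM2"

# From inside dom1, drive a 60-second TCP stream at dom2's netserver:
echo "netperf -H \$DOM2_IP -t TCP_STREAM -l 60"
```

The other cases in the matrix only change the cpu argument of the `xm
vcpu-pin` lines (e.g. pinning dom1 and dom2 both to cpu0, or to hyperthread
siblings).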
Best,
Ian
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel