
Re: [Xen-devel] Interesting observation with network event notification and batching



On 01/07/13 17:06, Wei Liu wrote:
On Mon, Jul 01, 2013 at 11:59:08PM +0800, annie li wrote:
[...]
1. SKB frag destructor series: to track life cycle of SKB frags. This is
not yet upstreamed.
Are you referring to this one:
http://old-list-archives.xen.org/archives/html/xen-devel/2011-06/msg01711.html ?


Yes. But I believe there have been several versions posted. The link you
have is not the latest version.

2. Mechanism to negotiate max slots frontend can use: mapping requires
backend's MAX_SKB_FRAGS >= frontend's MAX_SKB_FRAGS.

3. Lazy flushing mechanism or persistent grants: ???
I did some tests with persistent grants before, and they did not show
better performance than grant copy. But I was using the default
netperf parameters and had not tried large packet sizes. Your results
remind me that maybe persistent grants would show similar results
with larger packet sizes too.

"No better performance" -- that's because both mechanisms are copying?
However I presume persistent grant can scale better? From an earlier
email last week, I read that copying is done by the guest so that this
mechanism scales much better than hypervisor copying in blk's case.

The original persistent grant patch does a memcpy on both the netback
and netfront sides. I am thinking maybe the performance can become
better if the memcpy is removed from netfront.

I would say that removing the copy in netback would scale better.

Moreover, I also have a feeling that the persistent grant performance
numbers we got were based on tests with default netperf parameters,
just like Wei's hack, which does not get better performance without
large packets. So let me try some tests with large packets.


Sadly enough, I found out today that this sort of test seems to be quite
inconsistent. On an Intel 10G NIC the throughput is actually higher
without forcing iperf / netperf to generate large packets.

When I have made performance measurements using iperf, I have found that for a given point in the parameter space (e.g. a fixed number of guests and interfaces, fixed parameters to iperf, a fixed test run duration, etc.) the variation was typically _smaller than_ +/- 1 Gbit/s on a 10G NIC.

I notice that your results don't include any error bars or indication of standard deviation...

With this sort of data (or, really, any data) measuring at least 5 times will help to get an idea of the fluctuations present (i.e. a measure of statistical uncertainty) by quoting a mean +/- standard deviation. Having the standard deviation (or other estimator for the uncertainty in the results) allows us to better determine how significant this difference in results really is.
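For example, something along these lines (the throughput samples here are
invented placeholders standing in for real repeated runs):

# Hypothetical throughputs (Gbit/s) from five repeated iperf runs
# with identical parameters; replace with real measurements.
samples = [9.4, 8.7, 9.9, 9.1, 8.8]

n = len(samples)
mean = sum(samples) / n
# Sample standard deviation (Bessel's correction, divide by n - 1)
variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
stddev = variance ** 0.5

print("throughput = %.2f +/- %.2f Gbit/s" % (mean, stddev))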

For example, is the high throughput you quoted (~ 14 Gbit/s) an upward fluctuation, and the low value (~6) a downward fluctuation? Having a mean and standard deviation would allow us to determine just how (in)compatible these values are.

Assuming a Gaussian distribution (and when sampled sufficient times, "everything" tends to a Gaussian) you have an almost 5% chance that a result lies more than 2 standard deviations from the mean (and a 0.3% chance that it lies more than 3 s.d. from the mean!). Results that appear "high" or "low" may, therefore, not be entirely unexpected. Having a measure of the standard deviation provides some basis against which to determine how likely it is that a measured value is just statistical fluctuation, or whether it is a significant result.
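Just to make those numbers concrete, the Gaussian tail probabilities can be
checked with a few lines of Python (standard library only):

import math

# Probability that a Gaussian sample lies more than k standard
# deviations from the mean: 1 - erf(k / sqrt(2)).
for k in (1, 2, 3):
    p = 1.0 - math.erf(k / math.sqrt(2.0))
    print("beyond %d s.d.: %.2f%%" % (k, 100.0 * p))

# Prints roughly 31.73%, 4.55% and 0.27%, matching the "almost 5%"
# and "0.3%" figures above.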

Another thing I noticed is that you're running the iperf test for only 5 seconds. I have found in the past that iperf (or, more likely, TCP) takes a while to "ramp up" (even with all parameters fixed e.g. "-l <size> -w <size>") and that tests run for 2 minutes or more (e.g. "-t 120") give much more stable results.
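Tying the two suggestions together, a rough driver script might look like
the sketch below. It assumes iperf 2 with its "-y C" CSV output (where the
last comma-separated field of the summary line is bits/sec) and a
placeholder server address; adjust to your setup.

import subprocess

SERVER = "10.0.0.1"   # assumed iperf server address (placeholder)
RUNS = 5              # repeat to estimate the fluctuation
DURATION = "120"      # run long enough for TCP to ramp up

results = []
for _ in range(RUNS):
    # "-y C" asks iperf 2 for CSV output; the last comma-separated
    # field of the summary line is the throughput in bits/s.
    out = subprocess.check_output(
        ["iperf", "-c", SERVER, "-t", DURATION, "-y", "C"])
    bps = float(out.decode().strip().splitlines()[-1].split(",")[-1])
    results.append(bps / 1e9)   # convert to Gbit/s

print("runs (Gbit/s):", ["%.2f" % r for r in results])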

Andrew.



Wei.

Thanks
Annie


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel



