
RE: [Xen-devel] Directly mapping vifs to physical devices in netback - an alternative to bridge



> 
> Performance Results:
>   - Machine: 4-way P4 Xeon 2.8 GHz with 4GB of RAM (dom0 with 512 MB and
> domU with 256MB)
>   - Benchmark: single TCP connection at max rate on a gigabit interface
> (940 Mb/s)
> 
> Measurement: CPU utilization on domain0 (99% confidence interval over 8
> measurements)
> =======================================================================
> | Experiment | default bridge  | bridge with        |   netback       |
> |            |                 | netfilter disabled |   switching     |
> =======================================================================
> |  receive   |  85.00% ±0.38%  |   73.97% ±0.23%    |  72.17% ±0.56%  |
> |  transmit  |  77.13% ±0.49%  |   68.86% ±0.73%    |  66.34% ±0.52%  |
> =======================================================================

I'm somewhat surprised it doesn't work better than that. We see the bridge 
functions show up a lot in oprofile results, so I'd have expected more than the 
~1.5% benefit that netback switching shows over bridging with netfilter 
disabled. How are you measuring CPU utilization? Are dom0 and the domU running 
on different CPUs?
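
For what it's worth, here's a rough sketch (Python, purely illustrative — not
assuming this is your method) of the kind of measurement I have in mind:
sampling the aggregate "cpu" line of /proc/stat in dom0 and computing the busy
fraction from the deltas. Note the aggregate line averages across all four
CPUs, so per-domain attribution only makes sense if the domains are pinned:

#!/usr/bin/env python
# Illustrative sketch: sample dom0 CPU utilization from the aggregate
# "cpu" line of /proc/stat and derive busy time from successive deltas.
import time

def read_cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    times = [int(v) for v in fields]
    idle = times[3] + (times[4] if len(times) > 4 else 0)  # idle + iowait
    return sum(times), idle

def utilization(interval=1.0):
    total0, idle0 = read_cpu_times()
    time.sleep(interval)
    total1, idle1 = read_cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return 100.0 * busy / (total1 - total0)

if __name__ == "__main__":
    # Eight samples, matching the eight measurements behind the CIs above.
    for _ in range(8):
        print("dom0 CPU: %.2f%%" % utilization())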

Do you get the degraded bridging performance simply by having 
CONFIG_BRIDGE_NETFILTER=y in the compiled kernel, or do you need to have 
modules loaded or rules installed? Does ebtables have the same effect?
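
If it's just the compiled-in hooks, something like the following (again an
illustrative Python sketch, assuming your kernel exposes the bridge-nf-call-*
entries under /proc/sys/net/bridge; older kernels may not have them) would let
you toggle them at runtime for a quick A/B comparison without rebuilding:

#!/usr/bin/env python
# Illustrative sketch: disable the bridge-netfilter hooks at runtime.
# Assumes CONFIG_BRIDGE_NETFILTER=y and that the bridge-nf-call-* sysctls
# are present; needs root.
import os

SYSCTL_DIR = "/proc/sys/net/bridge"
HOOKS = ["bridge-nf-call-iptables",
         "bridge-nf-call-ip6tables",
         "bridge-nf-call-arptables"]

for name in HOOKS:
    path = os.path.join(SYSCTL_DIR, name)
    try:
        with open(path, "w") as f:
            f.write("0\n")
        print("disabled %s" % name)
    except IOError as e:
        print("could not write %s: %s" % (path, e))

Re-running the benchmark with those set back to 1 would separate the
compile-time cost from the per-packet hook cost.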

Thanks,
Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel