To: "Xen Devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Directly mapping vifs to physical devices in netback - an alternative to bridge
From: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>
Date: Wed, 30 Aug 2006 15:11:45 -0500
Cc: Yoshio Turner <yoshiotu@xxxxxxxxxx>, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 30 Aug 2006 13:12:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbMcIQRH4bGnSgURgScetMRwwq9eg==
Thread-topic: Directly mapping vifs to physical devices in netback - an alternative to bridge
We would like to propose an alternative to the linux bridge for network
virtualization in Xen.

We think that the standard Linux bridge makes the network configuration
more complex than necessary, increasing the chances of
network configuration errors. The bridge itself is an additional entity
that needs to be configured and associated with physical interfaces
(more things to configure, more opportunities for mistakes). Bridge
configuration is not a simple operation: the physical interface is
brought down and up, virtual interfaces are created and associated with
the bridge, virtual and physical interfaces are renamed and so on. This
complexity has created several problems in the past. Many reports of
user mistakes or network script bugs have been posted. Although most
issues have been solved, it seems that some still remain.
 
As an example, we have an unusual network setup and the bridge scripts do
not work perfectly for us.
Our server has 8 interfaces (eth0 to eth7) connected to an isolated test
network (for running network benchmarks) and another interface (eth8)
that connects the machine to the outside world and serves as the default
interface for IP routing. Until a few weeks ago (maybe 1-2 months), running
the command "network-bridge start vifnum=0" would not work as expected:
the bridge would be configured with "peth8" (the default route) instead of
peth0. In the current version of xen-unstable this seems to be fixed.
However, the command "network-bridge start netdev=eth0" now does not
work properly, as it tries to create veth8 (instead of veth0), which does
not exist when the maximum number of loopback devices is 8
(veth0-veth7). This error can be avoided by specifying "vifnum=0", but
it is still annoying and confusing to the user. Our claim is that with
a simpler network approach such as the one proposed here, both bugs in
the network configuration scripts and mistakes in user network
configuration can be significantly reduced.

Here is a brief summary of the proposed alternative scheme:
- Netback keeps a mapping of vifs to physical network devices. Netback
intercepts all packets sent or received on the physical interface and on
the I/O channel and forwards them directly to the appropriate domU, dom0
(local network stack) or physical interface (external host) based on
the packet's destination MAC address (handling broadcast correctly). A new parameter
"pdev" is used in a vif definition in the domain configuration file
to indicate the physical interface associated with the vif. This parameter
is then used by netback to create the appropriate virtual to physical mapping.
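
To make the switching logic concrete, here is a minimal, self-contained C
sketch of a MAC-based forwarding decision. It is purely illustrative: the
structure and function names (vif_entry, switch_frame, ...) are made up for
this example and do not correspond to the code in the attached patch.

/*
 * Illustrative sketch of MAC-based switching in a netback-like layer.
 * Not the patch code; all names here are hypothetical.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MAX_VIFS 8

struct vif_entry {
    uint8_t mac[6];   /* MAC address assigned to the guest's vif */
    int     domid;    /* domU that owns this vif                 */
};

static struct vif_entry vif_table[MAX_VIFS];
static int num_vifs;

/* Broadcast/multicast frames have the group bit set in the first octet. */
static int is_broadcast_or_multicast(const uint8_t *mac)
{
    return mac[0] & 0x01;
}

/*
 * Decide where a frame should be delivered based only on its destination
 * MAC address: return the matching domU id, or -1 meaning "not for a
 * single domU" (broadcasts are flooded to all vifs plus dom0, and unknown
 * unicasts go to the physical interface / dom0 stack).
 */
static int switch_frame(const uint8_t *dst_mac)
{
    int i;

    if (is_broadcast_or_multicast(dst_mac))
        return -1;

    for (i = 0; i < num_vifs; i++)
        if (memcmp(vif_table[i].mac, dst_mac, 6) == 0)
            return vif_table[i].domid;

    return -1;
}

int main(void)
{
    const uint8_t guest_mac[6] = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x01 };

    memcpy(vif_table[0].mac, guest_mac, 6);
    vif_table[0].domid = 1;
    num_vifs = 1;

    printf("frame for 00:16:3e:00:00:01 -> dom%d\n", switch_frame(guest_mac));
    return 0;
}

In the real netback this decision would of course be made per packet inside
the kernel; the essential point is the per-physical-device lookup table keyed
on the destination MAC that netback maintains.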

Some advantages of the alternative network approach:

 a) Direct association of vifs to physical devices
   A vif is directly associated with a physical device in the domain
configuration file, instead of being associated with a bridge which in
turn is associated with a device. This reduces the likelihood of user
misconfiguration (fewer things to configure, fewer opportunities for
mistakes).
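
As an illustration, a vif bound directly to a physical interface might be
declared in the domU configuration file along these lines (a sketch only:
the MAC address is made up, and the exact syntax accepted for "pdev" is
defined by the attached patch, so consult the patch for details):

  # vif attached directly to physical interface eth0, no bridge involved
  vif = [ 'mac=00:16:3e:00:00:01, pdev=eth0' ]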

 b) No "network-script" required
   Only a very simple "vif-script" based on "vif-common.sh" is needed to
bring a new virtual interface up at domain creation time; no script is
needed to set up the network configuration when xend starts. Since there
is no network configuration script and the "vif-script" only has to bring
up a virtual interface, the likelihood of script bugs is greatly reduced.

 c) No loopback interfaces used for dom0 communication
   Simpler network configuration with fewer interfaces visible to the user.
The current limitation on the number of physical devices, imposed by the
number of available loopback interfaces, is eliminated. Performance for
dom0 traffic may also improve due to fewer packet handling stages (this
needs to be measured).

 d) No need to bring a physical interface down and up when configuring
the network
   The current bridge setup brings the physical interface down and then
up when configuring the bridge. This is a problem for configurations
that cannot afford to lose network connectivity, such as a system with
an NFS root filesystem.

 e) Performance
   Previous OProfile results have shown that the default bridge
configuration has significant performance overhead. The proposed netback
switching approach has much lower performance overhead. A more careful
analysis indicated that most of the bridge overhead was caused by the
netfilter code in the bridge. When the netfilter option in the bridge
(CONFIG_BRIDGE_NETFILTER) is disabled, both approaches have similar
performance, with the proposed netback switching approach performing
slightly better. See results summary below.

========================================================================

Performance Results:
  - Machine: 4-way P4 Xeon 2.8 GHz with 4GB of RAM (dom0 with 512 MB and
domU with 256MB)
  - Benchmark: single TCP connection at max rate on a gigabit interface
(940 Mb/s)
 
Measurement: CPU utilization on domain0 (99% confidence interval for 8
measurements)
=======================================================================
| Experiment | default bridge  | bridge with        |   netback       |
|            |                 | netfilter disabled |   switching     |
=======================================================================
|  receive   |  85.00% ±0.38%  |   73.97% ±0.23%    |  72.17% ±0.56%  |
|  transmit  |  77.13% ±0.49%  |   68.86% ±0.73%    |  66.34% ±0.52%  |
=======================================================================


We have attached a patch implementing the netback direct switching approach.


Comments, suggestions and criticisms are welcome ...

Thanks for your time and for any feedback

Renato

Attachment: netback_vifdevmap_11267.patch
Description: netback_vifdevmap_11267.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel