[Xen-bugs] [Bug 1597] New: netfront netback domU packet routing problems
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1597
Summary: netfront netback domU packet routing problems (and
workaround)
Product: Xen
Version: unstable
Platform: Unspecified
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: 2.6.18 domU
AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
ReportedBy: fmatthew5876@xxxxxxxxx
When network backend and frontend devices are placed inside domains (rather
than dom0) and packets are routed between them, packets mysteriously disappear.
To explain the issue, here is our setup. We have three domains: A, B, and C.
[ domU A [eth0 (netfront)]] --> [ domU B [vifA.0 (netbk) - br0 (bridge) - eth0
(netfront)] ] --> [ domU C [vifB.0 ]]
Domain A has a netfront device with backend=B, and B has a netfront device with
backend=C. The frontend and backend in B are connected via a bridge.
The frontend in A, bridge in B, and backend in C are all given ip addresses.
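For concreteness, the plumbing inside domain B looks roughly like the
following. This is a sketch only: the interface names match the diagram
above, but the 10.0.0.x addresses and the exact commands are illustrative
assumptions, not taken from the actual setup.

```shell
# Inside domU B (acting as a driver domain): bridge the backend
# serving A together with B's own frontend whose backend is in C.
# Names and addresses are illustrative.
brctl addbr br0
brctl addif br0 vifA.0        # netback device serving domU A
brctl addif br0 eth0          # B's netfront, backend lives in C
ip addr add 10.0.0.2/24 dev br0
ip link set br0 up
```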
B can ping A and C. A can ping B, and C can ping B.
The problem arises when A tries to ping C or C tries to ping A.
Running tcpdump, I can see the following behavior.
When A tries to ping C, the ping requests arrive in B but then disappear and
never make it to C.
When C tries to ping A, the ping request gets to A. A then sends the ping
reply, which crosses into B and then again disappears, never making it back
to C.
Even stranger is the fact that ARP packets seem to be able to travel anywhere
across the network just fine. I also tried SSH and verified it's not just a
problem with ICMP.
Again, this only occurs when trying to go from A to C or from C to A. A packet
sent from or to B is delivered just fine.
These same problems occur if, instead of a bridge in B, you set up
ip_forwarding and use routes.
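The routed variant that shows the same failure can be sketched as follows.
Again an illustrative assumption: one /24 per link (10.0.1.0/24 between A and
B, 10.0.2.0/24 between B and C), not the addresses actually used.

```shell
# Inside domU B: route between the two links instead of bridging.
sysctl -w net.ipv4.ip_forward=1
ip addr add 10.0.1.1/24 dev vifA.0   # link toward A
ip addr add 10.0.2.1/24 dev eth0     # link toward C

# On A: route the far link via B
#   ip route add 10.0.2.0/24 via 10.0.1.1
# On C: route the far link via B
#   ip route add 10.0.1.0/24 via 10.0.2.1
```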
I added some printks to the network driver code, and it appears that the
backend interface on C is actually receiving the event notifications for the
packets from B but somehow not the packets themselves.
It seems that this issue may be related to the netloop driver. Two workarounds
were discovered.
WORKAROUND 1:
The first workaround is to use the lazy copying feature of the netback driver:
http://lists.xensource.com/archives/html/xen-devel/2007-03/msg00914.html
You have to use NETBK_ALWAYS_COPY_SKB. Enabling lazy copying via the
copy_skb=1 argument to netback enables NETBK_DELAYED_COPY_SKB mode, which
does not work.
One must edit drivers/xen/netback/netback.c and hardcode
netbk_copy_skb_mode = NETBK_ALWAYS_COPY_SKB;
WORKAROUND 2:
The second workaround is to use the netloop driver. You can tell the domU
kernel to create a netloop device by passing netloop.nloopbacks=1.
You will then need to add the backend device in domain B to a bridge with the
netloop veth0 frontend device, similar to how dom0 sets up its networking.
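Under some assumptions about device names (booting B's kernel with
netloop.nloopbacks=1 creates a loopback pair whose frontend end is veth0),
the rewiring in B is roughly the following sketch, not the exact commands
used:

```shell
# Inside domU B, kernel booted with netloop.nloopbacks=1.
# Bridge the backend serving A with the netloop frontend veth0
# (device names assumed).
brctl addbr br0
brctl addif br0 vifA.0   # netback device serving domU A
brctl addif br0 veth0    # frontend end of the netloop pair
ip link set veth0 up
ip link set br0 up
```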