Re: [Xen-users] High availability Xen with bonding
I had very good success after some hair-pulling.
I run LACP + VLAN trunking.
- All device setup (eth, bond, bridges) is done via the normal OS
network config, because that is more reliable (see the ifcfg sketch
after this list).
- All libvirt stuff is disabled; it just limits Xen's possibilities to
"home user level" by assuming you'd only have one bridge.
(chkconfig XXX off ...)
- No messing with ARP.
- Make sure your switches are current enough to do "real" LACP.
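To give an idea of what "normal OS config" means here, a minimal
sketch in RHEL/CentOS 5-style ifcfg files. The device names, VLAN tag
100 and bridge name are just examples, and it assumes an initscripts
version that honors BONDING_OPTS; you'd also point network-script at
/bin/true in /etc/xen/xend-config.sxp so Xen doesn't build its own
bridge:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (same for eth1)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    MASTER=bond0
    SLAVE=yes

    # ifcfg-bond0 -- the LACP bond carrying the VLAN trunk
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=802.3ad miimon=100"

    # ifcfg-bond0.100 -- tagged VLAN 100, enslaved to a bridge
    DEVICE=bond0.100
    ONBOOT=yes
    BOOTPROTO=none
    VLAN=yes
    BRIDGE=xenbr100

    # ifcfg-xenbr100 -- the bridge the domUs attach to
    DEVICE=xenbr100
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    DELAY=0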
There's a very good manual (and I think the only working one) in the
Oracle VM wiki.
I myself had followed a manual from Red Hat, "Xen_Networking.pdf" by
Mark Nielsen, which left off somewhere in the middle. It's a good
intro, but it only covers 50% of a good setup.
a) If you look not just at link aggregation but at a VLAN-heavy
environment, there may be a point (>128 VLANs) where the number of
virtual bridges becomes an issue. Then wait for Open vSwitch to
mature, or email xen-devel and ask for the status of "vnetd".
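To make the scaling concrete: every VLAN you trunk into the host
costs one tagged interface plus one bridge, roughly like this
(a hand-rolled sketch using the classic vconfig/brctl tools; the
VLAN IDs are examples):

    # one tagged interface + one bridge per VLAN -- this is what
    # multiplies once you go past a handful of VLANs
    for vid in 100 101 102; do
        vconfig add bond0 $vid            # creates bond0.$vid
        brctl addbr xenbr$vid             # one bridge per VLAN
        brctl addif xenbr$vid bond0.$vid
        ip link set bond0.$vid up
        ip link set xenbr$vid up
    done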
b) Using Ethernet bonding (n Ethernet links into bond0) and
InfiniBand bonding (2 InfiniBand HCA ports into bond1) on the same
host is trickier. It seems the Ethernet bonding driver tries to cover
InfiniBand too, and the setup is completely undocumented. It is
possible, but when I tried it, it just didn't pass any traffic any
more.
For the setup, check the corresponding thread in HP ITRC.
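Since that setup is undocumented, here is only a rough sketch of how
two independent bonds can be declared on RHEL/CentOS 5. The interface
names are invented, and note that the bonding driver only supports
active-backup over IPoIB, so the InfiniBand bond can't do LACP:

    # /etc/modprobe.conf -- one bonding instance per bond device
    alias bond0 bonding
    alias bond1 bonding

    # ifcfg-bond0 -- Ethernet side, LACP as above
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=802.3ad miimon=100"

    # ifcfg-bond1 -- IPoIB side; active-backup is the only mode
    # the bonding driver supports over InfiniBand
    DEVICE=bond1
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=active-backup miimon=100 primary=ib0"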
c) Added speed is only guaranteed for multiple connections. If you do
it via bonding, you need a LACP hash algorithm in your switch that
hashes on the IP destination ports, not just the MAC address or IP
address; current Cisco gear can do that. For plain iSCSI, your path
grouping would decide whether you see load balancing with multiple
iSCSI connections.
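To illustrate both ends (a sketch; the Cisco keyword varies by
platform, and the Linux option only shapes egress hashing):

    # Linux side: hash egress frames on IP + TCP/UDP port so several
    # iSCSI connections to one target can land on different slaves
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # Cisco side (IOS example; exact syntax is platform-dependent)
    switch(config)# port-channel load-balance src-dst-port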
Hope you get it to work!
2010/10/16 Bart Coninckx <bart.coninckx@xxxxxxxxxx>:
> On Friday 15 October 2010 13:44:42 Eric van Blokland wrote:
>> Hello everyone,
>> A few days back I decided to give Ethernet port bonding in Xen another try.
>> I've never been able to get it to work properly, and after a short search I
>> found the network-bridge-bonding script shipped with CentOS-5 probably
>> wasn't going to solve my problems. Instead of searching for a tailored
> curious to see your progress in this. Up till now I tackled network redundancy
> with multipathing, not with bonding. However, this does not provide added
> speed, though theoretically it should. So I recently decided to switch to
> bonding for the hypervisors in their connections to iSCSI, using rr and
> running over separate switches, just like you, but I'm not at the point of
> installing domUs, so I can't really comment on how and if it works. Will know
> next week though, so I will return to this post with my findings...
That should definitely get you increased speed, and multiple iSCSI
connections via separate subnets / NICs is the only way you can get
close to FC reliability on a lower budget.
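If you stay with multipathing, a minimal open-iscsi sketch of binding
two sessions to two NICs (the iface names, NIC names and portal IP
are invented; dm-multipath then merges the two paths):

    # bind one iscsi iface to each NIC on its own subnet
    iscsiadm -m iface -I iface0 --op new
    iscsiadm -m iface -I iface0 --op update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I iface1 --op new
    iscsiadm -m iface -I iface1 --op update -n iface.net_ifacename -v eth3

    # discover and log in over both interfaces
    iscsiadm -m discovery -t st -p 192.168.10.50 -I iface0 -I iface1
    iscsiadm -m node -L all

    # verify dm-multipath sees two paths to the LUN
    multipath -ll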
'You need not worry about your future.'