On Thu, Oct 14, 2010 at 3:42 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Rudi Ahlers
>> Sent: Thursday, October 14, 2010 7:25 AM
>> To: xen-users
>> Subject: [Xen-users] best practices in using shared storage for XEN
>> Virtual Machines and auto-failover?
>>
>> Hi all,
>>
>> Can anyone please tell me what would be best practice to use shared
>> storage with virtual machines, especially when it involves high
>> availability / automated failover between 2 XEN servers?
>
> With 2 servers, I hear good things about DRBD, if you don't want to go
> the SAN route. If you have a SAN make sure it is sufficiently
> redundant--i.e. two (or more) power supplies, redundant Ethernet, spare
> controllers, etc. And of course RAID 10 or similar RAID level to guard
> against single-drive failure.
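>
> As a rough sketch, a two-node DRBD resource definition looks something
> like this (node names, backing disks and addresses below are
> placeholders--substitute your own; the "on" names must match uname -n):
>
>   # /etc/drbd.d/r0.res -- example two-node resource (sketch)
>   resource r0 {
>     protocol C;                 # synchronous replication
>     on xen1 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;      # local backing device (placeholder)
>       address   10.0.0.1:7788;  # replication link (placeholder)
>       meta-disk internal;
>     }
>     on xen2 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       address   10.0.0.2:7788;
>       meta-disk internal;
>     }
>   }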
I am planning on setting up a SAN with a few Gluster / CLVM servers -
just need to decide which one first, but I'm going to attempt high
availability + load balancing + ease-of-upgrade-with-no-downtime. Each
server will run RAID10 (maybe RAID6?).
> Pay close attention to power and networking. With 4 NICs available per
> host, I'd go for a bonded pair for general network traffic, and a
> multipath pair for I/O. Use at least two switches. If you get it right
> you should be able to lose one switch or one power circuit and maintain
> connectivity to your critical hosts.
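>
> For example, with active-backup bonding (mode 1) each leg of the bond
> can go to a different switch, since that mode needs no special switch
> support. A Debian-style sketch (interface names and addresses are
> placeholders):
>
>   # /etc/network/interfaces -- general-traffic bond across two switches
>   auto bond0
>   iface bond0 inet static
>       address 192.168.1.10        # placeholder
>       netmask 255.255.255.0
>       bond-slaves eth0 eth1       # eth0 -> switch A, eth1 -> switch B
>       bond-mode active-backup
>       bond-miimon 100             # check link state every 100 ms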
So would you bond eth0 & eth1, and then eth2 & eth3 together? But then
connect the bonded eth0+1 on one switch, and eth2+3 on another switch
for failover? Or would you have eth0 & eth2 on one switch, and eth1 &
eth3 on the other? Is this actually possible? I presume the 2 switches
should also be connected together (preferably via fiber?) with Spanning
Tree set up? Or should I separate the 2 networks, and connect them
individually to the internet?
>
> In my experience with high availability, the #1 mistake I see is
> overthinking the esoteric failure modes and missing the simple stuff.
> The #2 mistake is inadequate monitoring to detect single device
> failures. I've seen a lot of mistakes that are simple to correct:
>
> - Plugging a bonded Ethernet pair into the same switch.
> - Connecting dual power supplies to the same PDU.
> - Oversubscribing a power circuit. When a power supply fails, power
> draw on the remaining supply will increase--make sure this increase
> doesn't overload and trip a breaker.
> - Ignoring a drive failure until the 2nd drive fails.
>
> You can use any of a variety of clustering tools, like heartbeat, to
> automate the domU failover. Make sure you can't get into split-brain
> mode, where a domU can start on two nodes at once--that would quickly
> corrupt a shared filesystem. With any shared storage configuration,
> node fencing is generally an essential requirement.
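>
> With heartbeat v1, for instance, the core of the cluster config is only
> a few lines (a sketch--node names and the dedicated heartbeat interface
> are placeholders, and you would add a stonith directive for whatever
> fencing hardware you actually have):
>
>   # /etc/ha.d/ha.cf -- minimal two-node config (sketch)
>   keepalive 2          # heartbeat interval, seconds
>   deadtime 30          # declare a node dead after 30s of silence
>   bcast eth3           # dedicated heartbeat link (placeholder)
>   auto_failback off    # don't fail back automatically after recovery
>   node xen1
>   node xen2
>   # plus stonith/stonith_host lines pointing at your fence device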
>
>> What is the best way to connect a NAS / SAN to these 2 servers for
>> this kind of setup to work flawlessly? The NAS can export iSCSI, NFS,
>> SMB, etc. I'm sure I could even use ATAoE if needed.
>
> For my money I'd go with iSCSI (or AoE), partition my block storage and
> export whole block devices as disk images for the domU guests. If your
> SAN can't easily partition your storage, consider a clustered logical
> volume manager like CLVM on RHCS.
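>
> Concretely, the workflow is roughly as follows (a sketch; the portal
> address, volume group and LV names are placeholders):
>
>   # discover and log in to the iSCSI target (open-iscsi)
>   iscsiadm -m discovery -t sendtargets -p 10.0.0.100
>   iscsiadm -m node --login
>
>   # carve one logical volume per guest (CLVM if several dom0s share the VG)
>   lvcreate -L 20G -n domu1-disk vg_san
>
>   # and in the domU config file, export it as a whole block device:
>   #   disk = ['phy:/dev/vg_san/domu1-disk,xvda,w']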
>
> -Jeff
>
I am considering CLVM or Gluster - just need to play with them and
decide which one I prefer :)
--
Kind Regards
Rudi Ahlers
SoftDux
Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users