This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-users] question to DRBD users/experts

To: "James Pifer" <jep@xxxxxxxxxxxxxxxx>, "Xen List" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] question to DRBD users/experts
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Wed, 25 Aug 2010 09:32:29 +1000
Delivery-date: Tue, 24 Aug 2010 16:33:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1282680459.3141.27.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1282680459.3141.27.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActDyGebtYqH9yoxSA25CfFKALnP/wAG3G3Q
Thread-topic: [Xen-users] question to DRBD users/experts
> We're looking at using xen on SLES11SP1 servers in production at
> sites. We've been testing ocfs2 over dual primary drbd as one of the
> storage choices. It runs great, and is certainly more cost effective
> than putting SANs at our sites.
> Our big concern is split brain and how to handle that when it happens.
> If you have a large, shared storage over drbd with VMs running on
> host, how do you handle a split brain situation from a recovery
> standpoint?
> One idea we had is to run multiple ocfs2/drbd's, one for each VM, so
> we can pick and choose which way to recover in a split brain. That
> seems like it makes things a lot more complex, and we're not sure how
> successful it would even be.
> Are others using drbd in production?
> What has been your experience?
> Any suggestions are appreciated. Our company standard is SLES, so we
> have to use tools in that distro.

I'm using DRBD. I was using LVM2 on a multiple-primary DRBD (i.e. one
big DRBD volume cut into slices with LVM) and when it worked it was
fine, but it would split brain occasionally (normally on startup after a
crash, not spontaneously) and the CLVM daemon would hang on occasion for
no good reason.

Now I'm using DRBD on LVM on RAID0, and only multiple-primary where
necessary. Each DRBD is formed from an LV on each node. It's extra work
to create a new DRBD volume (create the LV on both nodes, then set up
the DRBD resource), but it is much less likely to go wrong during normal
use - it hasn't gone wrong yet after months of use!
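For anyone wanting to try the same layout, the per-VM setup looks
roughly like the sketch below. This assumes a DRBD 8.3-era toolchain
(what SLES 11 shipped at the time); the volume group, LV, resource
names, and addresses are made up for illustration, not taken from my
actual config:

```shell
# Sketch: one DRBD resource per VM, backed by an LV on each node.
# All names (vg0, vm1, nodeA/nodeB, IPs) are illustrative.

# On BOTH nodes: carve out a backing LV of the same size
lvcreate -L 20G -n vm1 vg0

# /etc/drbd.d/vm1.res (identical file on both nodes), DRBD 8.3 syntax:
# resource vm1 {
#   on nodeA {
#     device    /dev/drbd1;
#     disk      /dev/vg0/vm1;
#     address   192.168.1.1:7789;
#     meta-disk internal;
#   }
#   on nodeB {
#     device    /dev/drbd1;
#     disk      /dev/vg0/vm1;
#     address   192.168.1.2:7789;
#     meta-disk internal;
#   }
# }

# On BOTH nodes: write metadata and bring the resource up
drbdadm create-md vm1
drbdadm up vm1

# On ONE node only: force the initial full sync (8.3 syntax)
drbdadm -- --overwrite-data-of-peer primary vm1
```

The point of the per-VM split is exactly what the original poster
suggested: if one resource split-brains, you resolve it for that VM
alone (discard one side with drbdadm) without touching the others.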

A better setup, though, would be a SAN consisting of iSCSI on DRBD in
single-primary mode (using HA to handle failover if the primary fails),
with all the hosts using iSCSI. Unfortunately I don't have enough
hardware to make that work.
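To make the idea concrete, that SAN layout might be sketched as below.
This is hypothetical, not something I'm running: it assumes
iSCSI Enterprise Target (ietd) on the storage pair, open-iscsi on the
Xen hosts, and an HA stack (e.g. Pacemaker from the SLES 11 HA
extension) promoting DRBD and moving a virtual IP on failover. Target
name and addresses are illustrative:

```shell
# On the storage pair: DRBD resource "san0" in single-primary mode.
# The HA stack promotes the surviving node and starts the iSCSI
# target plus a virtual IP (say 192.168.1.10) there on failover.

# /etc/ietd.conf on whichever node is currently primary:
# Target iqn.2010-08.example.com:san0
#   Lun 0 Path=/dev/drbd0,Type=blockio

# On each Xen host: attach over iSCSI instead of running DRBD locally
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2010-08.example.com:san0 \
    -p 192.168.1.10 --login
```

With only one primary at a time there is no dual-primary window in
which split brain can occur, which is why this avoids the problem the
original poster is worried about.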


Xen-users mailing list