This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] question to DRBD users/experts

To: James Pifer <jep@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] question to DRBD users/experts
From: Ervin Novak <enovak@xxxxxxxxxxx>
Date: Tue, 24 Aug 2010 22:32:50 +0200
Cc: Xen List <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 24 Aug 2010 13:34:54 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1282680459.3141.27.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1282680459.3141.27.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

> Our big concern is split brain and how to handle that when it happens.
> If you have a large, shared storage over drbd with VMs running on either
> host, how do you handle a split brain situation from a recovery
> standpoint?
Use the SLE HA extension. With Pacemaker and a hardware STONITH
device, you'll be able to fence the failed node before it can cause a
split brain.

To prevent data loss on the DRBD volume, you can choose among DRBD's
three replication protocols: A (asynchronous), B (memory-synchronous),
and C (fully synchronous). Protocol C is the safe choice here, since a
write is only reported complete once it has reached both nodes.
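A minimal sketch of a DRBD resource stanza selecting the fully synchronous protocol (the resource, host, device, and address values are hypothetical placeholders, not from the original mail):

```
resource vm1 {
    protocol C;                    # fully synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;    # hypothetical backing LV
        address   192.168.1.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;
        address   192.168.1.2:7788;
        meta-disk internal;
    }
}
```

With protocol C you trade some write latency for the guarantee that neither node acknowledges data the peer hasn't seen.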

> One idea we had is to run multiple ocfs2/drbd's, one for each VM, and we
> can pick and choose which way to recover in a split brain. That seems
> like it makes it a lot more complex and not sure how successful it would
> even be.
DRBD can run either below LVM2 or on top of it. I think I'd go with
one LV per DomU, with a DRBD device on top of each LV. That way each
VM can be recovered independently after a split brain, and you don't
even need cLVM.
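To illustrate the stack (LV per guest, DRBD on top, DomU on the DRBD device), a sketch of the guest's disk line, with hypothetical resource and file names:

```
# /etc/xen/vm1.cfg (hypothetical guest config)
# The "drbd:" block script shipped with drbd-utils promotes the
# resource to Primary when the DomU starts:
disk = [ 'drbd:vm1,xvda,w' ]

# Alternatively, point at the device directly if you manage
# Primary/Secondary roles yourself (e.g. via Pacemaker):
# disk = [ 'phy:/dev/drbd0,xvda,w' ]
```

Because each DomU has its own DRBD resource, a split brain on one resource can be resolved (discarding one side's changes) without touching the other guests.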


Xen-users mailing list