This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Debian Etch Xen Cluster (DRBD, GNBD, OCFS2, iSCSI, Heart

To: Goswin von Brederlow <brederlo@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Debian Etch Xen Cluster (DRBD, GNBD, OCFS2, iSCSI, Heartbeat?)
From: Nico Kadel-Garcia <nkadel@xxxxxxxxx>
Date: Sun, 23 Sep 2007 09:52:42 +0100
Cc: Dominik Klein <dk@xxxxxxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 23 Sep 2007 01:47:23 -0700
In-reply-to: <874phl7sv9.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <46E6A47B.3010309@xxxxxxxx> <46E92EB0.3040209@xxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01249662@trantor> <38196.> <46EA2ACF.6050807@xxxxxxxxxxxxxxxx> <874phl7sv9.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (Windows/20070809)
Goswin von Brederlow wrote:
Dominik Klein <dk@xxxxxxxxxxxxxxxx> writes:

Mehdi AMINI wrote:
Just remember, if something goes wrong in such a way that the domain is
active on both nodes at the same time with read/write access to the
filesystem, you *will* *destroy* the filesystem and will need to restore
from backup. No amount of fscking will help you.
This is precisely the goal of OCFS, each node can mount a block device
read/write at the same time :)
But still, you don't want to have two servers write to the root
filesystem simultaneously, do you?

2 Things to think about:

1) Do what you would do without xen. Throw the power switch.

That means you have to teach STONITH or Heartbeat about Xen, and the
power switch becomes "xm destroy" run on the other node instead of a
real power switch.
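To make the idea concrete, here is a minimal sketch of such a fencing helper; the function name and the stubless use of the Xen 3.x `xm` toolstack are assumptions, not an actual Heartbeat/STONITH plugin:

```shell
# Hypothetical fencing sketch: instead of cutting real power, hard-stop
# the Xen domain on the surviving node's peer. fence_domain is an
# illustrative name, not part of any STONITH plugin API.
fence_domain() {
    domain="$1"
    if [ -z "$domain" ]; then
        echo "usage: fence_domain <domain>" >&2
        return 1
    fi
    # "xm destroy" is the cluster equivalent of throwing the power
    # switch: the domU is killed immediately, without a clean shutdown.
    xm destroy "$domain"
}
```

A real Heartbeat setup would wrap something like this in a STONITH plugin so the cluster can call it automatically when a node is declared dead.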

2) Would it make sense to have a bit in the LVM headers showing that a
volume is active, and have cluster LVM respect that bit and refuse a
second activation unless run with force?

Maybe instead of a bit, a UUID of the activator would be best. If you
can get a UUID that differs between the physical machines running the

Hmm. You know, you can rename an LVM logical volume while Xen has it mounted. This might be a useful trick to prevent another Xen guest from mounting it.
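The rename trick could look like the sketch below; the volume group, LV, and node names are assumptions, and this is only an illustration of the idea, not a tested locking scheme:

```shell
# Hypothetical "claim" helper: rename a guest's logical volume so a
# second node looking for the original name cannot find and activate it.
# lvrename succeeds even while the device is open by a running domU,
# because device-mapper keeps the open device node alive.
claim_lv() {
    vg="$1"; lv="$2"; owner="$3"
    lvrename "$vg" "$lv" "${lv}-${owner}"
}

# Example (names are made up): claim_lv vg0 guest-root node1
# would rename vg0/guest-root to vg0/guest-root-node1.
```

Note this is advisory only: a node that guesses the new name, or activates by UUID, can still mount the volume, so it does not replace proper fencing.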

Xen-users mailing list