WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

[Xen-users] Re: [Linux-HA] Xen EVMS-HA mini-howto and locking mechanism

To: "General Linux-HA mailing list" <linux-ha@xxxxxxxxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx, evms-cluster@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Re: [Linux-HA] Xen EVMS-HA mini-howto and locking mechanism remark
From: "Tijl Van den Broeck" <subspawn@xxxxxxxxx>
Date: Thu, 4 Jan 2007 11:04:32 +0100
Cc: kcorry@xxxxxxxxxxxxxxxxxxxxx, vini.bill@xxxxxxxxx, kathy.robertson@xxxxxxxxx
Delivery-date: Thu, 04 Jan 2007 02:04:30 -0800
In-reply-to: <d2e5ff780701030605ma638276t5e5a0514e972e774@xxxxxxxxxxxxxx>
References: <5a4fbd6c0701030257i6f1d5c9cu3bf908499e0a51f4@xxxxxxxxxxxxxx> <5a4fbd6c0701030350q478a2f34q7742b4127ab33cd8@xxxxxxxxxxxxxx> <d2e5ff780701030605ma638276t5e5a0514e972e774@xxxxxxxxxxxxxx>
Vini,

You have a good point; this should be on a site somewhere. I took some
time to make it a little more readable and added a page to the
Xensource wiki:
http://wiki.xensource.com/xenwiki/EVMS-HAwSAN-SLES10

Perhaps it can be linked from the Linux-HA site and from the EVMS
site, to keep the documentation centralised in one location?

greetings

Tijl Van den Broeck


On 1/3/07, vini.bill@xxxxxxxxx <vini.bill@xxxxxxxxx> wrote:
Great! I was thinking about implementing something like that here at
the company; now I have some guidance.

Thank you very much.

p.s.: Isn't that worth putting on the website?

On 1/3/07, Tijl Van den Broeck <subspawn@xxxxxxxxx> wrote:
>
> The following is a mini-howto; skip to the end of this mail for some
> EVMS-specific questions/problems.
>
> So far I haven't seen anybody on the Xen list talking about a
> successful setup of EVMS with heartbeat 2. Setting up EVMS-HA is not
> terribly difficult.
>
> If you're not experienced with heartbeat 2, perhaps read some basic
> material on it at www.linux-ha.org, especially on the administration
> tools. There are two main setups you'd want to use EVMS-HA for:
> - 2-node, DRBD sync + EVMS-HA resource failover (possibly with the
>   HA-Xen scripts): I haven't tested this yet, but it should be
>   possible as far as I've read.
> - n-node, iSCSI/FC SAN + EVMS-HA: my current testing environment.
>
> Notice there's a big difference: AFAIK, when using DRBD you must
> actually fail over your resources to the other node. In a SAN-based
> environment this is not the case, as all nodes constantly have full
> I/O access, which is why EVMS-HA should be useful (at least that's
> what I thought; read my remarks on that at the end of this mail).
>
> I installed plain SLES 10 copies (NOT using EVMS at installation
> time) and booted into the Xen kernel, in which everything is
> configured. My intention is/was to use EVMS only for Xen domU volume
> management, not for local disk management, simply to keep those
> strictly separated for ease of administration.
>
> So to begin, make sure all nodes have full I/O access to the same
> resources (preferably with the same device names on all nodes for
> ease of administration, but this isn't necessary as EVMS can fix
> this).
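> A quick way to check this (assuming udev's /dev/disk/by-id is
> populated, as it is on SLES10) is to compare the SCSI IDs of the SAN
> LUNs on every node; the same IDs should show up on each dom0:
>
> # run on each node and compare the output
> ls -l /dev/disk/by-id/ | grep -v part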
>
> As the local disk configuration is not EVMS-aware, exclude the local
> disks from EVMS management (in this case cciss mirrored disks, but
> these could well be your hda/sda disks).
> /etc/evms.conf:
> sysfs_devices {
>     include = [ * ]
>     exclude = [ cciss* ]
>     ...
> }
> Make sure admin_mode is off on all nodes; admin_mode has little to do
> with day-to-day administration and more with recovery/maintenance
> when things have gone wrong. More on this in the EVMS user guide:
> http://evms.sourceforge.net/user_guide/
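> A quick sanity check on each node (the exact section this option
> lives in can differ per EVMS version, so verify against the user
> guide):
> grep -n admin_mode /etc/evms.conf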
>
> Set up heartbeat on all cluster nodes to enable EVMS cluster
> awareness. Besides the usual IP and node configuration (which can be
> done with YaST on SLES10), adding two lines to /etc/ha.d/ha.cf will
> do:
> respawn root /sbin/evmsd
> apiauth evms uid=hacluster,root
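> For reference, a complete minimal ha.cf could look roughly like this
> (node names, interface and IP are only examples; the ucast line
> points at the other node's address, so it differs per node):
>
> # /etc/ha.d/ha.cf on dom0_1 (example values)
> # peer node's heartbeat address (use dom0_1's address on dom0_2)
> ucast eth0 192.168.1.2
> keepalive 2
> deadtime 30
> node dom0_1 dom0_2
> # enable the heartbeat 2 CRM
> crm on
> respawn root /sbin/evmsd
> apiauth evms uid=hacluster,root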
>
> Start the cluster node by node:
> /etc/init.d/heartbeat start
>
> Make sure the nodes sync and come up (keep an eye on
> /var/log/messages). You can use the crmadmin tool to query the state
> of the master and the nodes. Also useful is cl_status for checking
> link and daemon status.
> Note: if you're using bonding, you can run into some trouble here.
> Use unicast for node sync, not multicast, as somehow the Xen software
> bridge doesn't fully cope with that yet (at least I didn't get it to
> work; perhaps someone did?).
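> Some example queries (node names are again only examples):
>
> # is heartbeat running locally, and which nodes does it know about?
> cl_status hbstatus
> cl_status listnodes
> cl_status nodestatus dom0_2
> # heartbeat links of a node
> cl_status listhblinks dom0_2
> # designated coordinator and per-node CRM status
> crmadmin -D
> crmadmin -S dom0_1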
>
> When all nodes are up and running, start evmsgui (or evmsn, whichever
> you prefer) on one of the nodes. If you click the settings menu and
> find the option "Node administered" enabled, congratulations: you've
> got a cluster-aware EVMS. Be sure to know some essentials of EVMS
> (it's a little different from plain LVM2).
>
> Create a container with the Cluster Segment Manager (CSM), select
> your attached SAN storage objects (could be named sdb, sdc, ...),
> choose whichever node name, set the type to shared storage and name
> the container "c_sanstorage", for example.
>
> You could patch the SAN disks straight through to EVMS volumes (see
> the disk list in available objects). Don't do that, as these volumes
> will be fixed in size (as they were originally presented from the
> SAN); instead use EVMS for storage management. For this, create
> another container, this time with the LVM2 Region Manager, in which
> you store all the objects from the CSM container c_sanstorage (the
> objects will have names like c_sanstorage/sdb, c_sanstorage/sdc,
> ...). Choose the extent size at will and name it (vgsan, for
> example).
>
> Go to the Region tab and create regions from the LVM2 freespace,
> named and sized at will, for example domu01stor, domu02stor, ...
>
> Save the configuration; all settings will now be applied and you will
> find your correctly sized and named volumes in
> /dev/evms/c_sanstorage/
>
> Now partition the EVMS volumes if wanted, like you're used to
> (fdisk), format them, place domUs on them and launch.
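> For example, formatting a volume directly (no partition table) and
> handing it to a domU as its root device; device names, paths and the
> disk line are only a sketch, adapt them to your own domU configs:
>
> mkfs.ext3 /dev/evms/c_sanstorage/domu01stor
>
> # in the domU configuration file:
> disk = [ 'phy:/dev/evms/c_sanstorage/domu01stor,hda1,w' ]
> root = "/dev/hda1 ro"
>
> # then launch it
> xm create domu01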
>
>
>
>
>
> As for the problems and remarks I've seen with this setup:
> For the EVMS configuration to be updated on all nodes, you have to
> select each node in "Node administered" and save the configuration
> for each node (only then will the correct devices be created on that
> node).
>
> This could be a structural question, but... being cluster aware,
> shouldn't the EVMS-HA combination (with the CSM) provide locking on
> volumes created beneath the cluster container? It is perfectly
> possible for me to corrupt data on an EVMS volume on node 2 while
> that volume is also mounted on node 1. I expected some kind of
> locking to step in:
> dom0_2# mount /dev/evms/c_sanstorage/domu01stor /mnt
> failure: volume domu01stor is already mounted on node dom0_1
>
> Or something along those lines. My initial thought was that it had to
> do with my container being "shared". But when creating the container
> as "private", the same issues were possible! And even more
> remarkably, if I create a private container on node dom0_1 and launch
> evmsgui on dom0_2, it recognises the container as private and owned
> by dom0_2?! This strikes me as very odd.
> Are these problems due to faults in my procedure (if so, please let
> me know), or are they of a more structural nature (or perhaps SLES10
> specific)?
> They are rather essential with Xen domains: you wouldn't want to boot
> the same domain twice (one copy on dom0_1 and another running on
> dom0_2), as data corruption is guaranteed.
>
> That is why this mail is cross-posted to all three lists:
> information for xen-users,
> technical questions for evms and linux-ha.
>
> greetings
>
> Tijl Van den Broeck
> _______________________________________________
> Linux-HA mailing list
> Linux-HA@xxxxxxxxxxxxxxxxxx
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>



--
... Vinicius Menezes ...
_______________________________________________
Linux-HA mailing list
Linux-HA@xxxxxxxxxxxxxxxxxx
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
