Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD
From: Jakob Praher <jakob@xxxxxxxxxxx>
Date: Mon, 15 Aug 2011 18:25:58 +0200

Hi George,
Hi List,

On 15.08.11 13:57, George Shuklin wrote:
> It's not an XCP component in any way; it's just storage configuration.
>
> We are using an lvmoiscsi SR with multipath enabled in XCP, set up two
> hosts as primary/primary DRBD peers (this is done on plain Debian,
> without anything XCP-specific), and publish /dev/drbd1 from both hosts
> with the same SCSI serial/ID.
>
> The scheme looks like:
>
> storage1
> (raid)----DRBD--IET~~~~~~~~~~~~[initiator+
>              |                 [XCP       \
> storage2     |                 [host       multipath----lvm-....
> (raid)----DRBD--IET~~~~~~~~~~~~[initiator/
So you have dedicated storage hosts that are connected via DRBD, and
both export the storage over iSCSI to the XCP hosts, right?
I have a different scenario in mind, where you use commodity servers
(with RAID1 and local storage) to build a fault-tolerant setup.
Do you have any experience with such a system based on DRBD?
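
For concreteness, here is a minimal sketch of the kind of configuration
George describes - a dual-primary DRBD resource exported over IET with an
identical SCSI ID/serial on both storage hosts. All hostnames, devices,
addresses and IQNs below are illustrative, not taken from this thread:

    # /etc/drbd.d/xcp-sr.res on both storage hosts (DRBD 8.3-style syntax)
    resource xcp-sr {
        protocol C;                         # synchronous replication
        net     { allow-two-primaries; }    # required for primary/primary
        startup { become-primary-on both; }
        on storage1 {
            device    /dev/drbd1;
            disk      /dev/md0;             # local RAID array
            address   192.168.10.1:7789;
            meta-disk internal;
        }
        on storage2 {
            device    /dev/drbd1;
            disk      /dev/md0;
            address   192.168.10.2:7789;
            meta-disk internal;
        }
    }

    # ietd.conf on both storage hosts (path varies by distro): export
    # /dev/drbd1 with the same ScsiId/ScsiSN so the initiator sees two
    # paths to one LUN
    Target iqn.2011-08.example:drbd1
        Lun 0 Path=/dev/drbd1,Type=blockio,ScsiId=drbd0001,ScsiSN=drbd0001

On the XCP side I assume the standard multipath + lvmoiscsi path (George
notes below that ISCSISR.py still needs patching for complete multipath
support, so treat this only as a sketch; UUIDs and the SCSIid are
placeholders):

    # per host, with the host in maintenance mode
    xe host-param-set uuid=<host-uuid> other-config:multipathing=true \
        other-config:multipathhandle=dmp
    # one shared SR for the pool, pointing at either portal
    xe sr-create type=lvmoiscsi shared=true name-label=drbd-iscsi \
        device-config:target=192.168.10.1 \
        device-config:targetIQN=iqn.2011-08.example:drbd1 \
        device-config:SCSIid=<id reported by the sr-create probe>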
>
> The main idea is just to make the storage FT. The hosts are not FT,
> they are only HA.
So FT means fault-tolerant and HA means highly available?
Do you imply by this that fault-tolerant is stronger than highly available?
(Sorry for my ignorance of the standard terms here.)
>
> FT storage keeps client data safe in case of any storage fault, and
> even allows bringing a storage host down for maintenance. And the XCP
> host pool allows migrating client machines away for host maintenance.
> The only unprotected part is a hang/crash of an XCP host, which
> requires a VM restart (with almost no data loss).
>
> Network overhead of this scheme:
>
> XCP-to-storage link - almost no overhead compared to a classic
> lvmoiscsi target.
> Storage-to-storage link - double network load for writes (every write
> is replicated once to the peer), no overhead for reads.
>
> The main problem is keeping those hosts in a consistent state
> (split-brain is a very, very bad thing).
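
Since split-brain is the main risk of a dual-primary setup, it is worth
spelling out. A hedged sketch of the usual DRBD precautions (option names
are standard DRBD 8.3; the handler script path may differ per distro, and
the resource name xcp-sr is the illustrative one from above):

    # in the resource definition
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;   # both primaries diverged: stop, let an admin decide
    }
    handlers {
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    }

    # manual recovery, run on the node whose changes are to be discarded
    drbdadm secondary xcp-sr
    drbdadm -- --discard-my-data connect xcp-sr

None of this replaces proper fencing and a reliable replication link; it
only limits the damage once a split-brain has already happened.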
Thanks for the write-up.
BTW: which version of XCP are you using, and how do you build packages
for XCP releases more recent than 0.5?
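
On that last point, a minimal sketch of one way to do it without the DDK,
assuming (as the yum repos suggest) that the XCP 1.1 dom0 is CentOS 5
based: rebuild the source RPMs on a plain CentOS 5 VM that matches the
dom0 userland, then install the resulting RPMs in dom0. Package and SRPM
names below are illustrative:

    # in the XCP dom0, note what has to be matched
    cat /etc/redhat-release
    uname -r

    # on a separate CentOS 5 build VM
    yum install rpm-build gcc make
    rpmbuild --rebuild drbd-8.3.x.src.rpm
    # copy the resulting RPMs into dom0 and install with rpm -ivh

The awkward part is the DRBD kernel module, which has to be built against
the exact dom0 kernel headers; this sketch does not cover that.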

Cheers,
Jakob


>
> On Mon, 15/08/2011 at 12:26 +0200, Jakob Praher wrote:
>> Hi George,
>>
>> thanks for the quick reply. As I already said, cross-pool migration is
>> not an option - but at least the wiki discusses a setup that mirrors
>> via DRBD. I am new to Xen-API, yet we have been using the Xen
>> hypervisor on Debian for half a decade. So the basic underlying stuff
>> is known to us, but all the concepts like SR, VDI, VHD, ... (which are
>> also needed to abstract from the physical representation) are new.
>>
>> What does this DRBD-backed iSCSI setup look like?
>> Do you export the DRBD block device via the iSCSI protocol? Does
>> multipath mean that you can do active/active?
>> What is the network overhead of this scenario compared to local LVM?
>> Is this the preferred scenario for HA when individual hosts have local
>> storage?
>> Can this be enabled in XCP 1.1?
>>
>> Regarding FT: yes, our scenario is FT since we use the SR locally. But
>> we would definitely like to set up an HA infrastructure, since then the
>> decision on which machine a VM should be placed does not have to be
>> made at vm-install time but can be balanced dynamically, and XenMotion
>> and all that stuff would work.
>>
>> One of our goals is that we do not want to reinvent anything.
>>
>> Cheers,
>> Jakob
>>
>>
>> On 15.08.11 11:57, George Shuklin wrote:
>>> Right now we are running the last tests before production deployment
>>> of a DRBD-backed iSCSI target with multipath. I have found no specific
>>> problems so far (except the need to patch ISCSISR.py for complete
>>> multipath support). But I don't understand why you need to do
>>> cross-pool migration for FT. In any case you cannot achieve FT with
>>> the current state of XCP, only HA.
>>>
>>> The difference between FT and HA: if a server breaks, an FT machine
>>> continues to run without any trace of the fault, whereas with HA the
>>> machine is just (almost instantly) restarted on another available host
>>> in the pool. FT is not a magic key, because if the VM itself does
>>> something bad (crashes), HA restarts it while FT will do nothing.
>>>
>>> On 15.08.2011 13:38, Jakob Praher wrote:
>>>> Dear List,
>>>>
>>>> I have a question regarding a fault-tolerant setup of XCP.
>>>> We have two hosts that are in one resource pool.
>>>> Furthermore, we are trying to set up two SRs (storage repositories)
>>>> as local LVM volume groups, where one volume group is active (owned)
>>>> by one server and the other volume group is active on the other server.
>>>>
>>>> In case of a failure, all the metadata concerning the VMs is still
>>>> available thanks to the common resource pool. But after degrading the
>>>> system to one host, the SR is still owned by the failed server. Is
>>>> there an easy way to migrate the SR? Is anybody using a similar
>>>> solution, or what are your best practices?
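
For what it is worth, a hedged sketch of re-attaching such an SR to the
surviving host with the xe CLI, assuming the underlying volume group is
actually reachable from that host (for example via a DRBD mirror); all
UUIDs and the device path are placeholders:

    xe pbd-list sr-uuid=<sr-uuid>          # find the PBD on the failed host
    xe pbd-unplug uuid=<old-pbd-uuid>      # may fail while that host is down
    xe pbd-destroy uuid=<old-pbd-uuid>
    xe pbd-create host-uuid=<surviving-host-uuid> sr-uuid=<sr-uuid> \
        device-config:device=/dev/drbd1
    xe pbd-plug uuid=<new-pbd-uuid>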
>>>>
>>>> I think CrossPool-Migration is not an option for us since we want to
>>>> keep only one resource pool for both servers.
>>>>
>>>> Another question: I am currently using XCP 1.1 - what is the best way
>>>> to compile system RPMs (like DRBD) for this version? Since the DDK is
>>>> not available, I am also having trouble getting the XCP distribution
>>>> installed into a guest VM so that I can add development packages to it
>>>> and compile the package there. Is there a base image that I can use so
>>>> that I have the right devel RPMs? From yum repos.d I see that it is
>>>> CentOS 5.
>>>>
>>>> Any help is appreciated.
>>>>
>>>> Cheers,
>>>> Jakob
>>>>
>>>>


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api