
To: George Shuklin <george.shuklin@xxxxxxxxx>
Subject: Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD
From: Jakob Praher <jakob@xxxxxxxxxxx>
Date: Mon, 15 Aug 2011 20:03:46 +0200
Cc: xen-api@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1313429260.1945.41.camel@mabase>
References: <j2ape2$r6v$1@xxxxxxxxxxxxxxx> <4E48EDA4.8090203@xxxxxxxxx> <4E48F452.1050508@xxxxxxxxxxx> <1313409429.1945.32.camel@mabase> <4E494896.3060007@xxxxxxxxxxx> <1313429260.1945.41.camel@mabase>
Dear George,

you will find my answers inline.

On 15.08.11 19:27, George Shuklin wrote:
> Yes.
>
> XCP has two modes: with local storage (small installations) and with
> external shared storage. You will not be able to use XCP to its full
> extent if you don't use external storage: migration, the HA features
> and so on can only be done with shared storage.
Yeah I think I am well aware of that fact.
> There was some discussion earlier about using DRBD in XCP, but it is
> still not implemented in any way.
I noticed the threads.
>
> Right now we are using XCP 0.5 and starting the migration process to XCP 1.0.
> Both of them have a DDK you can use to compile modules. 
OK. I am using the XCP 1.1 beta - but I would like to compile an RPM for
that as well.
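(A rough sketch of how that build typically looks inside the DDK VM; the DRBD version, package names and paths below are placeholders, not something from this thread:)

    yum install rpm-build gcc make kernel-devel   # or whatever the DDK's Xen kernel headers package is called
    rpmbuild --rebuild drbd-8.3.x-1.src.rpm
    # on a CentOS 5 base the resulting binary RPMs end up under /usr/src/redhat/RPMS/<arch>/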
> The FT/HA relationship is more complicated than 'better/worse'; they solve
> different problems and ask a different price for it (e.g. a
> completely FT machine will run slower than a normal one).
I have to look into the definitions of these. Maybe what I refer to as FT
is just redundancy - so in that sense FT is already the thing I want to
achieve right now.
> Please note that XCP does not contain the code for native XenServer HA - it
> depends on Windows (as far as I understand); the 'HA' in XCP is limited to
> restarting domains in case of a crash/host reboot.
Okay. See above - I just want the thing to be fault tolerant; I can live
with having to restart a VM in case of a host crash right now.
I only want to make sure I can start the system on the still-running
server (given that I only utilize 50 % of both systems).
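(For illustration, bringing a VM up on the surviving host usually boils down to something like the following xe commands; the UUIDs are placeholders and this assumes the surviving host can still reach the VM's storage:)

    # if the dead host was the pool master, promote the survivor first
    xe pool-emergency-transition-to-master
    # mark the VM as halted after its host died, then start it on the survivor
    xe vm-reset-powerstate uuid=<vm-uuid> force=true
    xe vm-start uuid=<vm-uuid> on=<surviving-host>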

Cheers,
Jakob
>
> On Mon, 15/08/2011 at 18:25 +0200, Jakob Praher wrote:
>> Hi George,
>> Hi List,
>>
>> On 15.08.11 13:57, George Shuklin wrote:
>>> It is not an XCP part in any way, it is just a storage configuration.
>>>
>>> We are using an lvmoiscsi target with multipath enabled in XCP: we set up two
>>> hosts to use primary/primary DRBD (this is done on ordinary Debian without
>>> specific attention to XCP) and publish /dev/drbd1 from both hosts with the
>>> same SN/ID.
>>>
>>> The scheme looks like:
>>>
>>> storage1
>>> (raid)----DRBD--IET~~~~~~~~~~~~[initiator+
>>>              |                 [XCP       \
>>> storage2     |                 [host       multipath----lvm-....
>>> (raid)----DRBD--IET~~~~~~~~~~~~[initiator/
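(As an illustration of the scheme above, a minimal primary/primary DRBD resource plus IET export might look roughly like this; the host names, addresses, IQN and serial are placeholders, not the actual configuration:)

    # /etc/drbd.d/r1.res on both storage hosts (DRBD 8.3-style syntax)
    resource r1 {
      protocol C;
      net {
        allow-two-primaries;                 # required for primary/primary
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on storage1 {
        device    /dev/drbd1;
        disk      /dev/md0;                  # the local RAID
        address   192.168.1.1:7789;
        meta-disk internal;
      }
      on storage2 {
        device    /dev/drbd1;
        disk      /dev/md0;
        address   192.168.1.2:7789;
        meta-disk internal;
      }
    }

    # /etc/iet/ietd.conf on both storage hosts - same IQN, ScsiId and ScsiSN,
    # so the XCP initiators see the two portals as paths to one LUN
    Target iqn.2011-08.example.com:drbd1
        Lun 0 Path=/dev/drbd1,Type=blockio,ScsiId=drbd0001,ScsiSN=drbd0001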
>> So you have dedicated storage hosts that are connected via DRBD, and
>> both export the storage to iSCSI initiators on the XCP host, right?
>> I have a different scenario in mind, where you use commodity servers
>> (using raid1 and local storage) to manage a fault tolerant setup.
>> Do you have any experience with such a system based on DRBD?
>>> The main idea is just to make the storage FT. The hosts are not FT, they are only HA.
>> So FT means fault-tolerant and HA means highly available?
>> By this do you imply that fault-tolerant is better than highly available?
>> (Sorry for my ignorance of the standard terms here.)
>>> FT storage allows keeping client data through any storage fault, and
>>> even allows bringing the storage down for maintenance. And the XCP host pool
>>> allows migrating client machines for host maintenance. The only unprotected
>>> part is a hang/crash of an XCP host, which requires a VM restart (with
>>> almost no data loss). 
>>>
>>> Network overhead of this scheme:
>>>
>>> XCP-to-storage link - almost no overhead compared to a classic lvmoiscsi
>>> target.
>>> storage-to-storage link - double the network load for writes, no overhead
>>> for reads.
>>>
>>> The main problem is keeping those hosts in a consistent state (split-brain
>>> is a very, very bad thing).
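(As an aside, if a split-brain does happen, the commonly documented manual recovery with drbdadm is roughly the following; the resource name r1 matches the sketch above and is a placeholder:)

    # on the node whose changes are to be thrown away
    drbdadm secondary r1
    drbdadm -- --discard-my-data connect r1
    # on the surviving node (only needed if it reports StandAlone)
    drbdadm connect r1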
>> Thanks for the write-up.
>> BTW: what version of XCP are you using, and how are you developing packages
>> for XCP versions more recent than 0.5?
>>
>> Cheers,
>> Jakob
>>
>>
>>> On Mon, 15/08/2011 at 12:26 +0200, Jakob Praher wrote:
>>>> Hi George,
>>>>
>>>> thanks for the quick reply. As I already said, CrossPool Migration is not
>>>> an option - but at least the wiki discusses a setup that mirrors via DRBD.
>>>> I am new to Xen-API, yet we have been using the Xen hypervisor on Debian
>>>> for half a decade. So the basic underlying stuff is known to us, but all the
>>>> concepts like SR, VDI, VHD, ... (which are also needed to abstract from
>>>> the physical representation) are new.
>>>>
>>>> What does this DRBD-backed iSCSI setup look like?
>>>> You export the DRBD block device via the iSCSI protocol? Does multipath mean
>>>> that you can do active/active?
>>>> What is the network overhead of this scenario compared to local LVM?
>>>> Is this the preferred scenario for HA on individual hosts with local
>>>> storage?
>>>> Can this be enabled in XCP 1.1?
>>>>
>>>> Regarding FT: yes, our scenario is FT since we use the SR locally. But we
>>>> would definitely like to set up an HA infrastructure, since then the
>>>> decision on which machine the VM should be placed does not have to be
>>>> made at vm-install time but can be dynamically balanced, and
>>>> XenMotion and all that stuff would work.
>>>>
>>>> One of our goals is that we do not want to reinvent anything.
>>>>
>>>> Cheers,
>>>> Jakob
>>>>
>>>>
>>>> On 15.08.11 11:57, George Shuklin wrote:
>>>>> Right now we are running the last tests before production deployment of a
>>>>> DRBD-backed iSCSI target with multipath. I have found no specific problems
>>>>> at this moment (except the need to patch ISCSISR.py for complete
>>>>> multipath support). But I don't understand why you need to do
>>>>> cross-pool migration for FT. In any case, you cannot achieve FT with
>>>>> the current state of XCP, only HA.
>>>>>
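(For illustration, attaching such a target to the pool as a shared lvmoiscsi SR with multipathing usually looks roughly like the following; the addresses, IQN and SCSIid are placeholders:)

    # on each XCP host: enable multipathing before plugging the SR
    xe host-param-set uuid=<host-uuid> other-config:multipathing=true
    xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp

    # create the shared SR against one portal; multipath then picks up the
    # second path because both portals report the same ScsiId/ScsiSN
    xe sr-create name-label="drbd-iscsi" type=lvmoiscsi shared=true \
        device-config:target=192.168.1.1 \
        device-config:targetIQN=iqn.2011-08.example.com:drbd1 \
        device-config:SCSIid=<scsi-id-from-sr-probe>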
>>>>> The difference between FT and HA: if a server breaks, an FT machine
>>>>> continues to run without any trace of the fault; in the HA case the machine
>>>>> just (almost instantly) restarts on another available host in the
>>>>> pool. FT is not a magic key, because if the VM does something bad (crashes),
>>>>> HA restarts it while FT does nothing.
>>>>>
>>>>> On 15.08.2011 13:38, Jakob Praher wrote:
>>>>>> Dear List,
>>>>>>
>>>>>> I have a question regarding a fault-tolerant setup of XCP.
>>>>>> We have two hosts that are in one resource pool.
>>>>>> Furthermore we are trying to set up two SRs (storage repositories) as local LVM
>>>>>> volume groups, where one volume group is active (owned) by one server
>>>>>> and the other volume group is active on the other server.
>>>>>>
>>>>>> In case of a failure, because of the common resource pool, all the meta
>>>>>> information concerning the VMs is still available. After degrading the
>>>>>> system to one host, the SR is still owned by the failed server. Is there
>>>>>> an easy way to migrate the SR? Is anybody using a similar solution, or
>>>>>> what are your best practices?
>>>>>>
>>>>>> I think CrossPool-Migration is not an option for us since we want to
>>>>>> keep only one resource pool for both servers.
>>>>>>
>>>>>> Another question: I am currently using XCP 1.1 - what is the best way
>>>>>> to compile system RPMs (like DRBD) for this version? Since the DDK is
>>>>>> not available, I also have trouble getting the XCP distribution
>>>>>> installed into a guest VM so that I can add development packages to it
>>>>>> and compile the package there. Is there a base image that I can use so
>>>>>> that I have the right devel RPMs? From yum repos.d I see that it is
>>>>>> based on CentOS 5.
>>>>>>
>>>>>> Any help is appreciated.
>>>>>>
>>>>>> Cheers,
>>>>>> Jakob
>>>>>>
>>>>>>


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api