xen-api

Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] One resource pool and local lvm SR mirrored using DRBD
From: George Shuklin <george.shuklin@xxxxxxxxx>
Date: Mon, 15 Aug 2011 15:57:09 +0400
This is not an XCP component in any way; it is just storage configuration.

We use an lvmoiscsi SR with multipath enabled in XCP, set the two storage
hosts up as a primary/primary DRBD pair (this is done on plain Debian,
with no XCP-specific steps), and publish /dev/drbd1 from both hosts via
IET with the same SN/ID, so the initiators see two paths to one device
(sketched below).
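
A minimal sketch of the two configs involved, assuming hypothetical host
names, addresses, resource name and IQN (DRBD 8.x syntax; allow-two-primaries
is what enables primary/primary, and the matching ScsiId/ScsiSN on both IET
targets is what lets the initiator's multipath collapse the paths into one
device):

  # /etc/drbd.d/r1.res (identical on both storage hosts)
  resource r1 {
      protocol C;                       # synchronous replication
      net {
          allow-two-primaries;          # primary/primary mode
      }
      startup {
          become-primary-on both;       # promote both nodes at boot
      }
      on storage1 {
          device    /dev/drbd1;
          disk      /dev/md0;           # the local RAID array
          address   10.0.0.1:7789;
          meta-disk internal;
      }
      on storage2 {
          device    /dev/drbd1;
          disk      /dev/md0;
          address   10.0.0.2:7789;
          meta-disk internal;
      }
  }

  # ietd.conf (identical on both storage hosts; path varies by distro)
  Target iqn.2011-08.example:drbd1
      Lun 0 Path=/dev/drbd1,Type=blockio,ScsiId=drbd1,ScsiSN=00000001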

The scheme looks like:

storage1
(raid)----DRBD--IET~~~~~~~~~~~~[initiator+
             |                 [XCP       \
storage2     |                 [host       multipath----lvm-....
(raid)----DRBD--IET~~~~~~~~~~~~[initiator/
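
On the XCP side this is an ordinary shared lvmoiscsi SR plus the usual
multipath flags, roughly like this (UUIDs, IQN and SCSIid are placeholders;
whether both portals can be listed in device-config:target directly depends
on your ISCSISR.py version - as noted below, ours is patched for full
multipath support):

  # on each pool host (in maintenance mode)
  xe host-param-set uuid=<host-uuid> other-config:multipathing=true
  xe host-param-set uuid=<host-uuid> other-config:multipath-handle=dmp

  # one shared SR; both portals lead to the same DRBD-backed LUN
  xe sr-create name-label=drbd-sr type=lvmoiscsi shared=true \
      device-config:target=10.0.0.1,10.0.0.2 \
      device-config:targetIQN=iqn.2011-08.example:drbd1 \
      device-config:SCSIid=<scsi-id>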


The main idea is just to make the storage FT (fault-tolerant). The hosts are not FT; they are only HA.

FT storage keeps client data available through any single storage fault,
and even lets us bring one storage host down for maintenance. The XCP host
pool in turn lets us migrate client machines away for host maintenance
(see the commands below). The only unprotected part is a hang/crash of an
XCP host, which requires a VM restart (with almost no data loss).
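
For the host-maintenance part, evacuating a host is the standard xe
workflow (UUIDs are placeholders):

  # live-migrate everything off a host before maintenance
  xe host-evacuate uuid=<host-uuid>
  # or move a single VM by hand
  xe vm-migrate uuid=<vm-uuid> host-uuid=<other-host-uuid> live=true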

Network overhead of this scheme:

XCP-to-storage link: almost no overhead compared to a classic lvmoiscsi
target.
storage-to-storage link: double the network load for writes (every block
written by an initiator is replicated once to the peer), no overhead for
reads.

The main problem is keeping the two storage hosts in a consistent state
(split-brain is a very, very bad thing).
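
Most of what can be done about split-brain is DRBD policy; a sketch of the
net-section recovery hooks (DRBD 8.x options, one conservative choice among
several):

  net {
      allow-two-primaries;
      after-sb-0pri discard-zero-changes;  # no primaries left: keep the side that wrote
      after-sb-1pri discard-secondary;     # one primary: drop the secondary's changes
      after-sb-2pri disconnect;            # both primary: refuse to auto-resolve
  }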



On Mon, 2011-08-15 at 12:26 +0200, Jakob Praher wrote:
> Hi George,
> 
> thanks for the quick reply. As I already said, CrossPool Migration is not
> an option - but at least the wiki discusses a setup that mirrors via DRBD.
> I am new to the Xen API, yet we have been using the Xen hypervisor on Debian
> for half a decade. So the basic underlying stuff is known to us, yet all the
> concepts like SR, VDI, VHD, ... (which are also needed to abstract from
> the physical representation) are new.
> 
> What does this DRBD-backed iSCSI setup look like?
> You export the DRBD block device via the iSCSI protocol? Does multipath mean
> that you can do active/active?
> What is the network overhead of this scenario compared to local LVM?
> Is this the preferred scenario for HA with individual hosts having local
> storage?
> Can this be enabled in XCP 1.1?
> 
> Regarding FT: yes, our scenario is FT, since we use the SR locally. But we
> would definitely like to set up an HA infrastructure, since then the
> decision about which machine the VM should be placed on does not have to be
> made at vm-install time but can be balanced dynamically, and
> XenMotion and all that stuff would work.
> 
> One of our goals is not to reinvent anything.
> 
> Cheers,
> Jakob
> 
> 
> Am 15.08.11 11:57, schrieb George Shuklin:
> > Right now we are running the last tests before production deployment of a
> > DRBD-backed iSCSI target with multipath. I have found no specific problems
> > so far (except the need to patch ISCSISR.py for complete
> > multipath support). But I don't understand why you need to do
> > cross-pool migration for FT. You cannot achieve FT in the current
> > state of XCP in any case, only HA.
> >
> > The difference between FT and HA: if a server breaks, an FT machine
> > continues to run without any trace of the fault; with HA, the machine
> > just (almost instantly) restarts on another available host in the
> > pool. FT is not a magic key, because if the VM itself does something bad
> > (crashes), HA restarts it, and FT will do nothing.
> >
> > On 15.08.2011 13:38, Jakob Praher wrote:
> >> Dear List,
> >>
> >> I have a question regarding a fault-tolerant setup of XCP.
> >> We have two hosts that are in one resource pool.
> >> Furthermore, we are trying to set up two SRs (storage repositories) as
> >> local LVM volume groups, where one volume group is active on (owned by)
> >> one server and the other volume group is active on the other server.
> >>
> >> In case of failure, because of the common resource pool, all the meta
> >> information concerning VMs is still available. After degrading the
> >> system to one host, the SR is still owned by the failed server. Is there
> >> an easy way to migrate the SR? Is anybody using a similar solution, or
> >> what are your best practices?
> >>
> >> I think CrossPool-Migration is not an option for us since we want to
> >> keep only one resource pool for both servers.
> >>
> >> Another question: I am currently using XCP 1.1 - what is the best way
> >> to compile system RPMs (like DRBD) for this version? Since the DDK is
> >> not available, I also have trouble getting the XCP distribution
> >> installed into a guest VM so that I can add development packages to it
> >> and compile the packages there. Is there a base image that I can use so
> >> that I have the right devel RPMs? From yum repos.d I see that it is
> >> CentOS 5.
> >>
> >> Any help is appreciated.
> >>
> >> Cheers,
> >> Jakob
> >>
> >>
> >
> 
> 



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api