xen-api

Re: [Xen-API] Alternative to Vastsky?

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] Alternative to Vastsky?
From: Tim Titley <tim@xxxxxxxxxx>
Date: Wed, 20 Apr 2011 11:47:04 +0100
Delivery-date: Wed, 20 Apr 2011 03:49:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4DAE2E12.9000807@xxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <4DAE085F.5040707@xxxxxxxxxx> <4DAE0ADD.70209@xxxxxxxxx> <4DAE2454.1060101@xxxxxxxxxx> <4DAE2E12.9000807@xxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15pre) Gecko/20110207 Lightning/1.0b2 Shredder/3.1.9pre
On 20/04/11 01:51, George Shuklin wrote:
I see no problem with split brain in the case of DRBD between two XCP
hosts (with DRBD mirroring a local drive on the first XCP host to a
second drive over the network on the second XCP host). XCP ensures
there are never two copies of the same VM running in a pool (we are
talking about XCP, not xend?). If a host suddenly goes offline or is
disconnected (the same thing), you must manually issue
vm-reset-powerstate. I think this kind of protection is fairly normal,
except that it delays automatic restart after an unexpected host hang -
but with XCP this problem exists for every storage solution. The
problem is not the storage but the way XCP detects HOST_OFFLINE (only
after a long delay will XCP assume the host is down... or never? I
have not tested this thoroughly yet).
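
In xe terms that manual step looks roughly like this (a sketch only;
the UUIDs are placeholders):

    # only after confirming the host is really dead, not merely unreachable
    xe host-declare-dead uuid=<failed-host-uuid>
    xe vm-reset-powerstate uuid=<vm-uuid> --force   # clear the stale 'running' state
    xe vm-start uuid=<vm-uuid>                      # restart on a surviving host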

The saddest thing about DRBD is its two-host limit, but it is still
better than plain /dev/sd[abcde] for a pack of 'mission-critical
applications with a new level of performance and effi-blah-blah'. And
(as far as I know XCP internals) it has all the capabilities (maybe
with a little tweaking) to support DRBD at the logic level. We have a
shared SR with two PBDs on two hosts. We calculate the
vm-vbd-vdi-sr-pbd-host paths before sending a task to a slave
(start/migrate/evacuate), and we account for them before returning the
calculated HA availability (I forget the exact names). To avoid a
'triple conflict' we allow only one DRBD peering per host: if A has
two different DRBD resources with B and C, B has the same with C and
A, and C with B and A, and we create a VM with two VDIs on the first
and second DRBD volumes, we lose any way to migrate it successfully
(and, in a certain sense, lose some redundancy).
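
That path can be walked with ordinary xe calls; a rough sketch, with
placeholder UUIDs:

    # resolve which hosts can run a VM: vm -> vbd -> vdi -> sr -> pbd -> host
    xe vbd-list vm-uuid=<vm-uuid> params=vdi-uuid     # the VM's disks
    xe vdi-list uuid=<vdi-uuid> params=sr-uuid        # which SR each disk lives on
    xe pbd-list sr-uuid=<sr-uuid> params=host-uuid    # which hosts have that SR plugged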

I have not considered using a separate DRBD resource for each VDI for this very reason. However, if you are sticking to a simple paired-host setup, there are advantages to putting an LVM storage repository on top of one large DRBD disk mirrored between both hosts (assuming you don't lose network connectivity).
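
A minimal sketch of that layout, assuming a dual-primary DRBD 8.3 pair
(hostnames, disks and addresses are made up, and XCP may well need the
"little tweaking" mentioned above before it treats a plain lvm SR as
shared):

    # one big mirrored disk, accessed by both hosts
    cat > /etc/drbd.d/r0.res <<'EOF'
    resource r0 {
        net { allow-two-primaries; }    # both hosts activate the LVM SR
        on xcp-host-a { device /dev/drbd0; disk /dev/sdb; address 10.0.0.1:7789; meta-disk internal; }
        on xcp-host-b { device /dev/drbd0; disk /dev/sdb; address 10.0.0.2:7789; meta-disk internal; }
    }
    EOF
    drbdadm create-md r0 && drbdadm up r0   # run on both hosts
    drbdadm primary r0                      # run on both hosts once synced
    # put an LVM storage repository on top of the mirrored device
    xe sr-create type=lvm content-type=user name-label="drbd-lvm" \
        device-config:device=/dev/drbd0 shared=true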

Thank you for the reply about two iSCSI targets for the same DRBD... I
have some doubts about data consistency because of the iSCSI queue...

I never had a problem, probably because it was set up for fail-over, not performance. I would be inclined to agree with you.

The last thing: I DO really want to see 2.6.38+ in XCP. In 2.6.38 Red
Hat added support for blkio-throttle - the most wanted feature for
dom0 - which allows shaping IOPS and bandwidth for every process
separately (which means 'for every VM'). We have a (not very good, but
working) traffic shaper, so a disk shaper is very relevant too...
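
For illustration, a rough sketch of the v1 blkio throttle interface
(the mount point, device numbers and limits here are just examples):

    # create a blkio cgroup per VM and cap its bandwidth and IOPS
    mount -t cgroup -o blkio none /mnt/blkio
    mkdir /mnt/blkio/vm1
    # cap reads on device 8:16 (sdb) to 10 MB/s and 100 IOPS for this group
    echo "8:16 10485760" > /mnt/blkio/vm1/blkio.throttle.read_bps_device
    echo "8:16 100" > /mnt/blkio/vm1/blkio.throttle.read_iops_device
    # move the VM's dom0 backend process (e.g. its tapdisk pid) into the group
    echo $BACKEND_PID > /mnt/blkio/vm1/tasks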


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api