Re: [Xen-users] small cluster storage configuration?

To: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] small cluster storage configuration?
From: Iustin Pop <iusty@xxxxxxxxx>
Date: Mon, 10 Oct 2011 14:55:35 +0200
Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 10 Oct 2011 05:58:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4E921F8A.5070406@xxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>, "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
References: <4E921F8A.5070406@xxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)

On Sun, Oct 09, 2011 at 06:26:18PM -0400, Miles Fidelman wrote:
> Hi Folks,
> 
> I've been running a 2-node, high-availability cluster for a while.
> I've just acquired 2 more servers, and I've been trying to figure
> out my options for organizing my storage configuration.
> 
> Basic goal: provide a robust, high-availability platform for
> multiple Xen VMs.
> 
> Current configuration (2 nodes):
> - 4 drives each (1TB/drive)
> - md software raid10 across the 4 drives on each machine
> -- md devices for Dom0 /boot, /, and swap, plus one big device
> -- 2 logical volumes per VM (/ and swap)
> -- VM volumes replicated across both nodes, using DRBD
> -- pacemaker, heartbeat, etc. to migrate production VMs if a node fails
> 
> I now have 2 new servers - each with a lot more memory, faster CPUs
> (and more cores), and likewise 4 drives.  So I'm wondering what's my
> best option for wiring the 4 machines together as a platform to run
> VMs on.
> 
> Seems like my first consideration is how to wire together the
> storage, within the following constraints:
> 
> - want to use each node for both processing and storage (I only have
> 4U of rackspace to play with, so I chose to buy 4 general-purpose
> servers with 4 drives each, rather than using some of the space for
> a storage server)
> 
> - 4 gigE ports per server - 2 reserved for primary/secondary
> external links, 2 reserved for storage & heartbeat comms.
> 
> - total of 16 drives, in groups of 4 (if a node goes down, it takes
> 4 drives with it) - so I can't simply treat this as 16 drives in one
> big array (I don't think)
> 
> - want to make things just a bit easier to manage than manually
> setting up pairs of DRBD volumes per VM
> 
> - would really like to make it easier to migrate a VM from any node
> to any other (for both load leveling and n-way fallback) - but DRBD
> seems to put a serious crimp in this
> 
> - sort of been keeping my eyes on some of the emerging cloud
> technologies, but they all seem to be aimed at larger clusters
> 
> - sheepdog seems like the closest thing to what I'm looking for, but
> it seems joined at the hip to KVM (unless someone has ported it to
> support Xen while I wasn't looking)
> 
> So... just wondering - anybody able to share some thoughts and/or
> experiences?
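
(For reference, the per-VM pairing described above is roughly this
shape in drbd.conf - a minimal sketch, assuming DRBD 8.3-era syntax;
the hostnames, LV paths, and addresses are made up:

    resource vm1-root {
        protocol C;                       # synchronous replication
        on node1 {
            device    /dev/drbd0;
            disk      /dev/vg0/vm1-root;  # backing logical volume
            address   10.0.0.1:7788;      # storage/heartbeat network
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/vg0/vm1-root;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

plus a second resource for the swap volume - two hand-maintained
stanzas per VM, which is exactly the bookkeeping that stops scaling
comfortably past two nodes.)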

Have you tried Ganeti (http://code.google.com/p/ganeti)? It uses DRBD
under the hood, but it handles moving instances around without you
having to reconfigure anything by hand. I think it matches what you're
looking for, and we support clusters from 1 physical machine up to
hundreds.

Disclaimer: I'm one of the authors.

regards,
iustin
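
For readers unfamiliar with Ganeti, the workflow being suggested looks
roughly like this - a sketch against the Ganeti 2.x command line, with
hypothetical hostnames, sizes, and OS definition; check the manpages
before copying:

    # one-time setup: create the cluster and enrol the other nodes
    gnt-cluster init --enabled-hypervisors=xen-pvm cluster.example.com
    gnt-node add node2.example.com

    # add a DRBD-backed instance; Ganeti creates, pairs, and tracks
    # the volumes itself (no hand-written drbd.conf stanzas)
    gnt-instance add -t drbd -n node1.example.com:node2.example.com \
        -o debootstrap -s 10G -B memory=2G vm1.example.com

    # live-migrate to the DRBD secondary, or re-pair the disks with a
    # different node to move the instance anywhere in the cluster
    gnt-instance migrate vm1.example.com
    gnt-instance replace-disks -n node3.example.com vm1.example.com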

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users