Re: [Xen-users] small cluster storage configuration?

Subject: Re: [Xen-users] small cluster storage configuration?
From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
Date: Mon, 10 Oct 2011 17:17:20 -0400
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 10 Oct 2011 14:18:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4E935C70.1020403@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4E921F8A.5070406@xxxxxxxxxxxxxxxx> <4E929DC4.6020304@xxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01E5E49C@trantor> <4E92F0AF.8070006@xxxxxxxxxxxxxxxx> <4E9352DF.708@xxxxxxxxxx> <4E93571D.8050206@xxxxxxxxxxxxxxxx> <4E935B57.1000604@xxxxxxxxxx> <4E935C70.1020403@xxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:7.0.1) Gecko/20110928 Firefox/7.0.1 SeaMonkey/2.4.1
Bart Coninckx wrote:
On 10/10/11 22:53, Bart Coninckx wrote:

Continued reading some more on VastSky. This seems to offer redundancy too, by means of mirroring, though four nodes might not be enough for that. Also, I wonder whether it is suitable for things like live migration, which iSCSI and AoE can do.
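
For what it's worth, once the storage is reachable from both dom0s under the same device path - which AoE and iSCSI give you - the migration itself is a one-liner. A minimal sketch, with a made-up guest and host name, assuming relocation is already switched on:

   # on the destination dom0, /etc/xen/xend-config.sxp needs:
   #   (xend-relocation-server yes)
   # then, from the source dom0:
   xm migrate --live myguest node2

Whether VastSky qualifies would come down to whether two nodes can attach the same logical volume at the same time.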

VastSky seems to suffer from two problems:
- the storage manager is a single point of failure
- development seems to have stopped in Oct. 2010

GlusterFS also does replication - the question is whether its performance is up to supporting VMs. Mixed responses so far, with some suggestions that this is supposed to get a lot better in version 3.3 (currently in beta). Not sure what the impact of Red Hat's acquisition of Gluster will be.
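
If anyone wants to experiment, standing up a replicated Gluster volume for VM images only takes a few commands (host names and brick paths below are invented):

   # from node1; "replica 2" keeps two copies of every file
   gluster peer probe node2
   gluster volume create vmstore replica 2 \
       node1:/export/brick1 node2:/export/brick1
   gluster volume start vmstore
   # mount it where the domU disk images live (FUSE client)
   mount -t glusterfs node1:/vmstore /var/lib/xen/images

That at least makes it easy to benchmark against AoE before betting on it.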

Starting to think that one approach would be to publish all 16 drives via AoE, then build one big md RAID10 array across them (Linux md RAID10 is interesting vs. standard RAID1+0 - it does mirroring and striping as a single operation, which uses disk space more efficiently). Trying to work through how things would respond in the event of a node failure (4 drives going out at once).
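
Roughly, the plumbing would look like this - drive and NIC names are invented and none of it is tested:

   # on each storage node N (1-4), export its four local drives;
   # shelf number = node number, slot = drive number. Node 1 shown:
   vblade 1 0 eth1 /dev/sdb &
   vblade 1 1 eth1 /dev/sdc &
   vblade 1 2 eth1 /dev/sdd &
   vblade 1 3 eth1 /dev/sde &

   # on whichever node assembles the array (targets appear as
   # /dev/etherd/eSHELF.SLOT after discovery):
   aoe-discover
   mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=16 \
       /dev/etherd/e1.0 /dev/etherd/e2.0 /dev/etherd/e3.0 /dev/etherd/e4.0 \
       /dev/etherd/e1.1 /dev/etherd/e2.1 /dev/etherd/e3.1 /dev/etherd/e4.1 \
       /dev/etherd/e1.2 /dev/etherd/e2.2 /dev/etherd/e3.2 /dev/etherd/e4.2 \
       /dev/etherd/e1.3 /dev/etherd/e2.3 /dev/etherd/e3.3 /dev/etherd/e4.3

The device order matters for the node-failure question: with md's near-2 layout, consecutive devices in the list form mirror pairs, so interleaving the nodes as above keeps the two halves of every mirror on different machines. One node going down degrades four mirrors but the array stays up; an unlucky second node failure can take out both copies of some pairs.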




--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users