WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: [Xen-users] Shared storage and file-based VHDs

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Shared storage and file-based VHDs
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Mon, 18 Oct 2010 10:19:12 +0200
Cc: Craig Miskell <craig.miskell@xxxxxxxxxx>
Delivery-date: Mon, 18 Oct 2010 01:21:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4CBBBE87.1040307@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4CBBBE87.1040307@xxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.13.5 (Linux/2.6.34.4-0.1-desktop; KDE/4.4.4; x86_64; ; )
On Monday 18 October 2010 05:27:03 Craig Miskell wrote:
> Hi,
>       This is related to the recent thread about best practices in using
> shared storage, but coming at it from a slightly different angle.
> 
> I'm setting up a pre-production/test environment using XCP; with how we
> plan on operating this system, there's going to be some pretty rampant
> snapshots and cloning of some reasonably large VHDs.  As such, I want to
> use file-based VHDs rather than LV-based, in order to take advantage of
> thin-provisioning to minimise disk space.  I'm happy with the performance
> hit this causes.
> 
> Further, I want to use shared storage so that I can have multiple hosts and
> can easily expand processing capacity as we spin up various instances, and
> do migrations.  However, I'm not using shared storage for auto failover or
> hot spare type functionality; migration will be manually managed as
> required.
> 
> So, from what I've been reading, I think I need one of the following two
> options:
> 
> 1) NFS.  Simple, understood technology.  Low overhead, and the XAPI
> toolstack takes care of "sharing" the VHDs.
> 
> 2) iSCSI, GFS(2), cLVM.  Storage LUN(s) presented by iSCSI, turned into an
> LV using cLVM, formatted with GFS or GFS2, and this filesystem added as a
> "file" type SR.  More complicated than NFS, and I've read there were some
> problems with GFS in this sort of scenario, to do with mounting via the
> loopback device.  But that was back a few years, and may have been solved,
> either in GFS or in GFS2.
> 
> Have I missed any other options?  Just pointers in the right direction
> (keywords) are enough if that's all you've got time for.
> 
> Is there anything glaringly wrong with my briefly written understanding of
> the options?
> 
> And does anyone have any comments on which is likely to be better?
> 
> Thanks,
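[As an aside on the thin provisioning Craig mentions: file-based VHDs save space because the underlying files can be sparse — the apparent size is large, but blocks are only allocated when data is actually written. A minimal Python sketch of the idea (illustration only, not XCP's actual VHD code):]

```python
import os
import tempfile

# Create a sparse file: truncate() sets a large apparent size
# without writing any data, so almost no disk blocks are allocated.
path = os.path.join(tempfile.mkdtemp(), "disk.img")
with open(path, "wb") as f:
    f.truncate(10 * 1024**3)  # 10 GiB apparent size

st = os.stat(path)
apparent = st.st_size            # reported file size: 10 GiB
allocated = st.st_blocks * 512   # bytes actually backed by disk blocks
print(apparent, allocated)       # allocated stays near zero until writes happen
```

This is why "rampant" snapshots and clones of large VHDs can be cheap on a file SR: each clone only consumes space for the blocks that diverge.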

Oh yes, I use iSCSI with nothing on top of it, since I use shared block devices 
rather than image files. That rules out the complexity of GFS, cLVM or OCFS2. 
You do need clustering software, though, to prevent a guest from being booted 
from the same storage twice.
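[The kind of protection clustering software provides here is essentially an exclusive lock per guest: the first host to take it may start the VM, and a second attempt fails instead of corrupting the shared disk. A hypothetical sketch using POSIX advisory locks — XCP/XAPI has its own locking, this is only to illustrate the principle:]

```python
import fcntl
import os
import tempfile

# One lock file per guest on shared storage. flock() locks belong to
# the open file description, so two independent opens conflict.
lockfile = os.path.join(tempfile.mkdtemp(), "guest1.lock")

def try_acquire(path):
    """Return an fd holding an exclusive lock, or None if already held."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd          # lock held: safe to start the guest
    except BlockingIOError:
        os.close(fd)
        return None        # someone else already runs this guest

first = try_acquire(lockfile)    # succeeds
second = try_acquire(lockfile)   # fails: lock already held
print(first is not None, second is None)
```

In a real multi-host setup the lock would have to live in a cluster-wide lock manager (or be enforced by the toolstack), since plain flock() only arbitrates within one host.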

Good luck,


B.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
