On Monday 18 October 2010 05:27:03 Craig Miskell wrote:
> Hi,
> This is related to the recent thread about best practices in using
> shared storage, but coming at it from a slightly different angle.
>
> I'm setting up a pre-production/test environment using XCP; with how we
> plan on operating this system, there's going to be some pretty rampant
> snapshotting and cloning of some reasonably large VHDs. As such, I want to
> use file-based VHDs rather than LV-based, in order to take advantage of
> thin provisioning and minimise disk usage. I'm happy with the performance
> hit this causes.
>
> Further, I want to use shared storage so that I can have multiple hosts and
> can easily expand processing capacity as we spin up various instances, and
> do migrations. However, I'm not using shared storage for auto failover or
> hot spare type functionality; migration will be manually managed as
> required.
>
> So, from what I've been reading, I think I need one of the following two
> options:
>
> 1) NFS. Simple, well-understood technology. Low overhead, and the XAPI
> toolstack takes care of "sharing" the VHDs.
>
> 2) iSCSI, GFS(2), cLVM. Storage LUN(s) presented by iSCSI, turned into an
> LV using cLVM, formatted with GFS or GFS2, and this filesystem added as a
> "file" type SR. More complicated than NFS, and I've read there were some
> problems with GFS in this sort of scenario, to do with mounting via the
> loopback device. But that was a few years back, and may have been fixed
> since, either in GFS or in GFS2.
>
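For what it's worth, option 1 usually comes down to a single sr-create call.
The server, export path and host UUID below are placeholders, so substitute
your own:

    xe sr-create host-uuid=<host-uuid> content-type=user \
        name-label="NFS VHD SR" shared=true type=nfs \
        device-config:server=192.0.2.10 \
        device-config:serverpath=/export/xcp-sr

If you do go the GFS2 route, I believe the mounted filesystem gets added in
much the same way with the "file" SR type (the mount point here is only an
example, and I haven't run that variant myself, so treat it as a sketch):

    xe sr-create host-uuid=<host-uuid> content-type=user \
        name-label="GFS2 file SR" type=file \
        device-config:location=/mnt/gfs2-sr
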
> Have I missed any other options? Just pointers in the right direction
> (keywords) are enough, if that's all you've got time for.
>
> Is there anything glaringly wrong with my briefly written understanding of
> the options?
>
> And does anyone have any comments on which is likely to be better?
>
> Thanks,
Oh yes: I use iSCSI with nothing layered on top of it, because I hand the
guests shared block devices rather than image files. That rules out the
complexity of GFS, cLVM or OCFS2. You do still need clustering software to
prevent the same guest being started from that storage on two hosts at once.
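To give a concrete (if simplified) picture, the guest just gets the LUN as a
raw device. A minimal sketch in plain xm-style config (the by-path name below
is only an example for one LUN; adapt it to however your initiator exposes
the device):

    disk = [ 'phy:/dev/disk/by-path/ip-192.0.2.20:3260-iscsi-iqn.2010-10.example:lun0-lun-0,xvda,w' ]

Each host sees the same LUN; the clustering layer is what makes sure only one
of them actually starts the guest.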
Good luck,
B.