xen-users

Re: [Xen-users] Shared storage in Xen Cluster

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Shared storage in Xen Cluster
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Fri, 4 Jan 2008 17:12:04 +0000
Cc: chetan saundankar <chetan.lists@xxxxxxxxx>
Delivery-date: Fri, 04 Jan 2008 09:12:41 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <2e3912590801031435s762fcbdcyb631c0b67ebd31f7@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <2e3912590801031435s762fcbdcyb631c0b67ebd31f7@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.6 (enterprise 0.20070907.709405)
> I have the following deployment scenario:
> - 2 Xen hosts (running Xen 3.1)
> - 1 Image server (NFS)
>
> Requirements:
> -----------------
> 1. Have a file system image of a Linux distribution, say Fedora 8, on the
> Image (NFS) server.
> 2. I want to have 4 guests running Fedora 8, one for each of 4 different
> users.
> 3. The base file system image (the Fedora 8 image on the Image server) is
> shared amongst the 4 users in a read-only fashion.
> 4. All 4 users will have separate VBDs for user-specific data.
>
> Question:
> ------------
> Is there any way to force users' writes onto the data VBDs exported for
> each VM?
> The whole point is that the guest user should not have to worry about
> where to write: every write should be directed at the data VBD, and this
> needs to be achieved without changing anything in the guest.

As others have mentioned, using blktap with QCow disks seems like it might do 
what you want in terms of copy-on-write disks.
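
For example, each user could get a private QCow overlay backed by the shared 
base image (a rough sketch only - the paths here are made up, and 
qcow-create's exact arguments may vary between Xen versions):

  # Per-user QCow disk (4096 MB) backed by the shared read-only Fedora 8
  # image; writes land in the user's overlay, while reads of untouched
  # blocks fall through to the base image.
  qcow-create 4096 /mnt/nfs/images/user1-root.qcow /mnt/nfs/images/fedora8.img

Then user1's domU config would attach the overlay as the root disk plus a 
separate VBD for user data:

  disk = [ 'tap:qcow:/mnt/nfs/images/user1-root.qcow,xvda,w',
           'tap:aio:/mnt/nfs/images/user1-data.img,xvdb,w' ]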

Serving disk images off NFS can sometimes be problematic.  *Don't* use 
"file:" VBDs on top of NFS; using blktap ("tap:aio:" or "tap:qcow:") should 
be better.  I don't think NFS is the best choice for performance and 
robustness, but it is mighty convenient and you may find it works for you.
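
Concretely, the difference is just the prefix on the disk line (the path 
here is only an example):

  # Avoid this on NFS - "file:" goes through the dom0 loopback driver,
  # whose write caching can lose data if dom0 crashes:
  #disk = [ 'file:/mnt/nfs/images/guest.img,xvda,w' ]

  # Prefer the blktap asynchronous-I/O driver instead:
  disk = [ 'tap:aio:/mnt/nfs/images/guest.img,xvda,w' ]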

If you want to try other ways of sharing storage, you could try a cluster 
filesystem (e.g. OCFS2, GFS) on a shared block device (shared with iSCSI or 
NBD).  Or you could share the block-level data directly rather than going 
through a shared filesystem - that would probably give the best performance, 
as it cuts out a layer of overhead.
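
If you want to experiment with that route, NBD is probably the quickest way 
to get a shared block device up (a sketch only - this uses the old 
command-line nbd-server syntax, newer versions want a config file, and 
OCFS2 needs its o2cb cluster stack configured on every node before mounting):

  # On the image server: export a device (or image file) on TCP port 2000.
  nbd-server 2000 /dev/vg0/shared

  # On each Xen host: attach the export as a local block device.
  nbd-client imageserver 2000 /dev/nbd0

  # From one node only: create the cluster filesystem with two node slots.
  mkfs.ocfs2 -N 2 /dev/nbd0

  # On both hosts: mount it and keep the guest images there.
  mount -t ocfs2 /dev/nbd0 /mnt/shared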

Another thought: you could take a look at using some kind of storage server 
with disk snapshotting at the backend, in order to take care of all the COW 
operations on behalf of the clients.  Then you'd just get the image server to 
export all the domU block devices, and it would take care of the COW 
transparently.  The Zumastor project provides COW block devices over the 
network - I'm not sure how ready for deployment it is, but it could be worth 
checking out.
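
Just to illustrate the snapshotting idea with something more familiar (this 
uses plain LVM snapshots rather than Zumastor itself - the volume names are 
invented):

  # On the storage server: one base volume holding the pristine image...
  lvcreate -L 8G -n fedora8-base vg0

  # ...and one writable snapshot per user, each with its own COW store.
  # Reads of unmodified blocks come from the base; writes stay private.
  lvcreate -s -L 2G -n user1-root /dev/vg0/fedora8-base

  # Each snapshot would then be exported to the Xen hosts (iSCSI, NBD, etc).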

Hope that helps give you a bit more background.  Good luck with your 
deployment!

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicycle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
