WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: Re: [Xen-users] Shared volume: Software-ISCSI or GFS or OCFS2?

To: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
Subject: Re: Re: [Xen-users] Shared volume: Software-ISCSI or GFS or OCFS2?
From: "Rustedt, Florian" <Florian.Rustedt@xxxxxxxxxxx>
Date: Mon, 17 Nov 2008 16:23:33 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 17 Nov 2008 07:24:17 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <492122C802000099000310DC@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <492122C802000099000310D9@xxxxxxxxxxxxxxxxxxxxx> <492122C802000099000310DC@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclIw3xRmCW+ctXxS4udIZDIEy2y6AAA+PRg
Thread-topic: Re: [Xen-users] Shared volume: Software-ISCSI or GFS or OCFS2?
I am still "swimming around", trying to choose the right technology to implement shared
partitions between several VMs, such as /usr, /lib, /lib/modules, etc.

For that, I need to mount them mostly read-only (RO), but read-write (RW) on one host, so
that this host is the one where I can install/remove software, and the changes are applied
to all read-only clients.
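
Roughly, a minimal sketch of the guest configs I have in mind (the volume path
/dev/vg0/shared_usr and the device name xvdb are just made-up examples):

    # domU config of the one "admin" guest - gets the volume read-write
    disk = [ 'phy:/dev/vg0/shared_usr,xvdb,w' ]

    # domU config of every other guest - same volume, attached read-only
    disk = [ 'phy:/dev/vg0/shared_usr,xvdb,r' ]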

I tried that with a normal XFS partition, and it breaks if I mount it in different modes:
the read-only clients get I/O errors.

So I decided to look for other approaches and first tried LVM snapshots mounted RW, but
they broke as well.
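
(For reference, the snapshot attempt was roughly this - volume and snapshot names are
only examples:

    # writable snapshot of the hypothetical shared volume
    lvcreate -s -L 2G -n usr_snap /dev/vg0/shared_usr

and then that snapshot was handed to the RW guest instead of the origin volume.)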

So by now I am at the point where I think the best way is to use a cluster-aware
filesystem on those partitions?

At first I thought iSCSI had some kind of integrated data-locking mechanism, so that I
could mount a volume multiple times without errors, but in the meantime I have learned
that this is handled by the filesystem, so iSCSI on its own is no longer interesting for me.

So the best advice would be to format my shared partitions with GFS or OCFS2 and
use them shared?
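
If I understand it correctly, that would look roughly like this (device name, label and
mountpoint are only examples, and OCFS2 additionally needs its cluster stack running on
every node):

    # format once, with enough node slots for all guests that will mount it
    mkfs.ocfs2 -N 4 -L shared_usr /dev/xvdb

    # mount on each guest (read-only on the clients, read-write on the admin guest)
    mount -t ocfs2 /dev/xvdb /mnt/shared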

Kind regards, Florian

-----Original Message-----
From: Nick Couchman [mailto:Nick.Couchman@xxxxxxxxx]
Sent: Monday, 17 November 2008 15:53
To: Rustedt, Florian
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: Re: [Xen-users] Shared volume: Software-ISCSI or GFS or OCFS2?

I'm not sure where or how you want to use software iSCSI - maybe you could 
provide a more thorough description of your Xen environment?

As far as OCFS2 vs. GFS goes, you can use whichever you like.  I use OCFS2 for two
reasons: first, because it's included with SLES 10, and second, because I find it slightly
easier to configure than GFS.  It has its downsides too, but it works fine for me.  Use
whichever cluster-aware FS you want.
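
For what it's worth, the OCFS2 side is mostly one small config file plus the o2cb service.
A minimal two-node sketch (cluster name, node names and addresses are just placeholders -
use the real hostnames of whatever mounts the filesystem):

    # /etc/ocfs2/cluster.conf - identical copy on every node
    cluster:
            node_count = 2
            name = xencluster

    node:
            ip_port = 7777
            ip_address = 192.168.0.1
            number = 0
            name = nodea
            cluster = xencluster

    node:
            ip_port = 7777
            ip_address = 192.168.0.2
            number = 1
            name = nodeb
            cluster = xencluster

Then bring the cluster online with the o2cb init script (e.g. "/etc/init.d/o2cb online
xencluster") before mounting the OCFS2 volume.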

-Nick
Nick Couchman
Manager, Information Technology


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
