WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] san configuration with xen

On Sat, 2006-11-18 at 04:49 -0800, krabbit@xxxxxxxxxxxxx wrote:
> what type of FS may I use to grant concurrent
> access and consistency to the SAN for both servers?

You need a cluster file system that coordinates access across many
machines, so that no two machines try to allocate the same group of
inodes and end up making a mess.

You can mount an ext3 file system read-only from as many places as you
like, as long as it remains completely static. Once anything writes to
it, the other mounts will see inconsistent views, and any writes
thereafter from a remote node will corrupt it.
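To illustrate the read-only case, a static ext3 volume can be mounted on each node like this (the device name and mount point are illustrative):

```shell
# Mount the shared ext3 LUN read-only on every node.
# /dev/sdb1 and /mnt/shared are illustrative names for your SAN
# device and mount point.
mount -t ext3 -o ro /dev/sdb1 /mnt/shared
```

The `-o ro` flag is what makes this safe: no node can write, so no node can invalidate the others' cached view of the file system.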

> I tried to format it with a
> normal ext3 fs, but when I make changes on server A, server B must
> umount and remount the partition to see those changes, and it doesn't
> recognize the owner, group and permissions of the file. Could you help
> me follow the right procedure to prepare the SAN for my test?

For what you want, I recommend ocfs2. Your kernel has modular support
for it. Get the ocfs2 tools via yum (not sure if they're packaged) or
from Oracle's site - http://oss.oracle.com/projects/ocfs2/

> I saw some emails that talk about nfs, AoE, GFS, clvm, but I think
> that way I'd need a daemon server, like nfs, that manages access.
> Are there alternatives?

ocfs2 is very easy: just install the tools, create your cluster config
file (simple plain text) and start the cluster service.

Make your shared file system type ocfs2 (mkfs.ocfs2).
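As a rough sketch of those steps for two nodes - the cluster name, node names, IP addresses, and device path below are all illustrative, and the o2cb init script path may differ on your distro:

```shell
# /etc/ocfs2/cluster.conf -- must be identical on every node.
# All names and addresses here are examples; substitute your own.
cat > /etc/ocfs2/cluster.conf <<'EOF'
cluster:
	node_count = 2
	name = xencluster

node:
	ip_port = 7777
	ip_address = 192.168.1.10
	number = 0
	name = serverA
	cluster = xencluster

node:
	ip_port = 7777
	ip_address = 192.168.1.11
	number = 1
	name = serverB
	cluster = xencluster
EOF

# Bring the cluster stack online on each node:
/etc/init.d/o2cb online xencluster

# Format the shared LUN ONCE, from a single node only:
mkfs.ocfs2 -L shared_vol /dev/sdb1

# Then mount it on every node:
mount -t ocfs2 /dev/sdb1 /mnt/shared
```

Note the stanza layout in cluster.conf: the `cluster:` and `node:` keywords start in column one and the key = value lines under them are indented, which the parser requires.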

You can use the console tool that ships with the ocfs2 tools to
propagate cluster changes to every node and simplify managing it, or
just use a simple SSH key-pair setup and a script to keep every node's
cluster.conf in sync.
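The key-pair approach can be as small as this - a sketch that assumes passwordless root SSH is already set up, with an illustrative node list:

```shell
#!/bin/sh
# Push the local cluster.conf to every other node so all copies
# stay identical. Assumes passwordless SSH key auth to root is
# already configured. The node names are illustrative.
NODES="serverB serverC"

for node in $NODES; do
    scp /etc/ocfs2/cluster.conf "root@${node}:/etc/ocfs2/cluster.conf"
done
```

Run it from whichever node you edit the config on, before restarting the cluster service anywhere.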

It's a little easier to manage for your purposes, but suitable for
production too.

It would be neat if you posted some benchmarks to the list on dom-U
performance over your FC storage network, and the type of gear you
used :) It helps others when deciding how to plan something similar.

> Thanx for all
> Regards
>              nicola
> 

Best,
-Tim



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
