xen-users

Re: [Xen-users] Sharing file/folder

On Mon, Feb 14, 2011 at 1:37 PM, lucianobarreto@xxxxxxxxx
<lucianobarreto@xxxxxxxxx> wrote:
> I need to share some files between VMs.  These files will be used to
> transfer some information (read/write), but I need to do it without any
> network resource (NFS or others).  I've tried sharing a partition just
> for test purposes, but I see that when I create a file on one VM,
> another can't see it, and there isn't any concurrency in this approach.
> Can anyone help me?

To share files, you need a shared filesystem.  There are two main
classes of these:

- Network filesystems: NFS, Samba, 9p, etc.  These work really well;
you shouldn't reject them without good reason.

- Clustered filesystems: GFS, OCFS2, CXFS, etc.  They're designed for
SAN setups where several hosts access the same storage box.  In the VM
case, if you create a single partition accessible from several VMs, you
get exactly the same situation (a shared block device) and need the
same solution; a rough sketch follows this list.
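For illustration only (the volume name, device names and the cluster
setup below are assumptions, not something from this thread), sharing
one block device between two guests and putting OCFS2 on it might look
roughly like this:

    # In each guest's config, point at the same backing device; the
    # trailing '!' tells the Xen toolstack to allow read-write sharing.
    disk = [ 'phy:/dev/vg0/shared,xvdb,w!' ]

    # Inside the guests, after the O2CB cluster stack has been set up
    # (/etc/ocfs2/cluster.conf on every node):
    mkfs.ocfs2 -L shared /dev/xvdb        # run once, from one guest only
    mount -t ocfs2 /dev/xvdb /mnt/shared  # run on every guest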

What definitely won't work is to use a 'normal' single-host filesystem
(ext3/4, XFS, ReiserFS, FAT, HPFS, NTFS, etc.) on a shared partition,
just as it won't work on any other shared block device.  Since every
such filesystem aggressively caches metadata to avoid rereading the
disk on every access, a VM won't be 'notified' when another one
modifies a directory, so it won't 'notice' the change.  Worse, once the
cached metadata is no longer consistent with what is on disk, any write
will result in a heavily corrupted filesystem.

Better go with NFS.
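A minimal sketch of that route (the hostname, paths and subnet are
made up for illustration): export a directory from whichever VM owns
the data and mount it from the others.

    # On the exporting VM: add a line to /etc/exports ...
    /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

    # ... then reload the export table:
    exportfs -ra

    # On every other VM:
    mount -t nfs server:/srv/share /mnt/share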

-- 
Javier

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
