xen-devel

Re: [Xen-devel] Re: [Xen-users] How to share data between guest domains

Looks interesting.
There are also LVM persistent snapshots, which allow doing something
similar to what you described.
We did not want to use LVM because of the complexity it would bring to
our setup. IMHO, in setups with hundreds or thousands of VMs, using NFS
with a few powerful NAS boxes makes much more sense.
So, ideally, NFS copy-on-write is what we are looking for.
I know there is an alpha release of it out there, but the problem is
that we need it now, not a few years from now when it reaches
production quality.
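(For reference, the LVM route would look roughly like this; the volume
group and LV names below are made up for illustration:

    # create a writable copy-on-write snapshot of a shared base volume
    lvcreate --snapshot --size 1G --name vm1-root /dev/vg0/base

each guest then gets its own snapshot while /dev/vg0/base stays shared,
but that snapshot bookkeeping is exactly the complexity we wanted to
avoid.)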


Xin Zhao wrote:
> NFS combined with your solution is definitely a temporary solution.
> But our ongoing project "Virtual Librarian" should provide better
> support. VL is designed to allow multiple VMs to share a base
> software environment. If a VM needs to modify a shared file, VL does
> copy-on-write to create a private copy for that VM, so all
> modifications are visible only to that VM. In addition, if multiple
> private copies are identical, VL can merge them back into a single
> shared copy.
>
> The benefits of VL include:
> 1. A VM can take advantage of the global disk cache and benefit from
> previous data accesses by other VMs, so we expect better performance.
> 2. VL allows finer-grained sharing, rather than only directory-level
> sharing.
> 3. The shared file system is transparent to guest applications and
> should be easy to adopt.
> 4. VL allows centralized software updates, which take effect as soon
> as the files are updated.
>
> We will post a detailed description of VL soon, if anyone is
> interested in that. :)
>
> Xin
>
> Yura Pismerov wrote:
>> I found that using NFS for things like this makes much more sense.
>> You can run the domU with an NFS root (read-only) and map the areas
>> you need read/write to tmpfs, bind-mounting them with "mount --bind"
>> on Linux. For example, if I use an NFS root and want my /etc to be
>> writable, I can copy its content to a tmpfs-mounted area and run
>> "mount --bind /tmpfs/etc /etc".
>> This will also solve the problem of centralized package updates when
>> not only /usr is being updated but other areas too (e.g. /etc,
>> /var/lib). You want those areas to be shared between domUs as well.
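>>
>> Roughly, the /etc case boils down to this (the /tmpfs mount point is
>> just an example):
>>
>>     # mount a tmpfs and seed it with the read-only /etc from the NFS root
>>     mkdir -p /tmpfs
>>     mount -t tmpfs tmpfs /tmpfs
>>     mkdir /tmpfs/etc
>>     cp -a /etc/. /tmpfs/etc/
>>     # overlay the writable copy on top of the read-only one
>>     mount --bind /tmpfs/etc /etc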
>>
>>
>> Molle Bestefich wrote:
>>
>>> Todd D. Esposito wrote:
>>>
>>>> However, on that note, I wonder if you could mount the same file
>>>> system, say something like /usr, into multiple domUs READ ONLY.
>>> That works for me.
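>>>
>>> (In the domU config that's just the read-only 'r' flag on the disk
>>> line; the device and vdev names here are made up:
>>>
>>>     disk = [ 'phy:vg0/usr,hda2,r' ]
>>>
>>> and with 'r' the tools are happy to hand the same backing device to
>>> several domUs at once.)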
>>>
>>> What doesn't work is mounting that file/device READ/WRITE in one domU
>>> to update the filesystem. For that, I have to take down *all*
>>> domUs. Not good...
>>>
>>> (When I try, I get a vbd: error saying "already in use".)
>>>
>>> (I know about caching, and that I need e.g. a cluster-aware
>>> filesystem to do this.)
>>>
>>> I've spent a couple of hours hunting through various Xen source
>>> files. There are a lot of Python functions that are only 3-5 lines
>>> long and do little else than call the next function, which makes it
>>> very hard to figure out what's going on :-/.
>>>
>>> Could one of you devel guys please let me know where I need to go to
>>> remove this silly limitation? :-)


-- 
Yuri Pismerov, System Administrator
Armor Technologies (Canada) Inc.

P: 905 305 1946 (x.3519)
http://www.armorware.net

Privacy Protection Guaranteed!





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
