WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-users] Multiple Domains Sharing Root System

To: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Multiple Domains Sharing Root System
From: Chris de Vidal <chris@xxxxxxxxxx>
Date: Mon, 19 Sep 2005 17:32:41 -0700 (PDT)
Delivery-date: Tue, 20 Sep 2005 00:30:28 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <432F36FB.6040302@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: chris@xxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
--- Ghe Rivero <ghe.rivero@xxxxxxxxx> wrote:
>       I need to implement multiple domains, most of them being almost the
> same with minimal changes (same distro, same packages, different
> configurations).
> 
>       Does anybody know if there is a way to share all the common files? That
> way all updates would only need to be done once and the disk usage would be
> much lower.

I've never done this but am thinking about doing it.


Some thoughts:
* Use a Copy On Write (COW) filesystem.  The idea is that you can have 2 or 3
or 10 or 1,000 servers sharing the same root and only changes to the base are
recorded.
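I haven't tried it myself, but one way to get COW-like behavior is an LVM
snapshot per guest on top of a shared base volume.  A rough sketch (the volume
group name and sizes are made-up examples, not a tested recipe):

```shell
# Hypothetical example: vg0, the sizes, and the names are assumptions.
# Create the shared base root image once:
lvcreate -L 4G -n base vg0
# (install the common distro into /dev/vg0/base here)

# Then give each guest a COW snapshot of it; only that guest's own
# changes consume space in its snapshot volume:
lvcreate -s -L 1G -n guest1 /dev/vg0/base
```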

I've never used this, but it may not live up to the promise of saving hard
drive space: over time, as you install patches and upgrades, you'll eventually
use about the same amount of space you'd have used without COW.  For example,
imagine upgrading your CentOS installation to a new version with a new glibc
-- lots and lots of libraries were compiled against it and so would be
upgraded too.  Over time the large majority of files will have been replaced,
and you're stuck with 2 or 3 or 10 or 1,000 individual copies of each one.

Perhaps you can shrink the partitions by synchronizing them all, moving
duplicate files back to the original filesystem.
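Here's a rough sketch of the kind of synchronization I mean, assuming the
guest roots are plain directories on the same host filesystem (hard links
can't cross filesystems; /guests/a and /guests/b are hypothetical paths):

```shell
#!/bin/sh
# Re-link identical files between two guest root trees so duplicates
# share one inode.  /guests/a and /guests/b are hypothetical defaults.
A="${1:-/guests/a}"
B="${2:-/guests/b}"

find "$A" -type f | while read -r f; do
    twin="$B${f#$A}"                  # same relative path in the other tree
    [ -f "$twin" ] || continue
    # Identical content, but not already the same inode?
    if ! [ "$f" -ef "$twin" ] && cmp -s "$f" "$twin"; then
        ln -f "$f" "$twin"            # replace the copy with a hard link
    fi
done
```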

I'm ignorant on the use of COW FSes so this might not even be a concern.

A COW filesystem also doesn't give you the quick-update ability you want... in
other words, you'd still have to update each system one at a time.  For that,
you should consider...

* ...NFS exporting a read-only copy of /usr.  This is usually your largest
partition where most updates occur.  Well-written programs will not require
/usr be mounted read-write and you should be able to export at least that
partition.  You can do updates very quickly.  This is the direction I want to
go.

You could even use thin-client network boot technology so that your domains
don't use *any* hard drive space.
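For what it's worth, the setup might look something like this (the server
name, network range, and mount options are assumptions on my part, not
something I've tested):

```shell
# On the NFS server: a hypothetical /etc/exports entry sharing /usr
# read-only with the guests' network, then re-exporting:
#
#   /usr  192.168.1.0/255.255.255.0(ro,sync)
#
exportfs -ra

# On each domU, an example /etc/fstab line mounting it:
#
#   nfsserver:/usr  /usr  nfs  ro,hard,intr  0  0
```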

You can test your app by doing a fresh installation of Linux or Unix with
/usr on its own partition.  Install the program you want to test.  Then edit
/etc/fstab and give the /usr partition the ro flag, something like this:
LABEL=/usr  /usr  ext3  defaults,ro  1  2
Remount /usr:
mount -o remount /usr

Or do it without editing fstab (does not persist over reboots):
mount -o ro,remount /usr

Then run your app and see if it bombs.  If it works, you can use a read-only
NFS-mounted /usr partition.

Note: This only shares /usr.  If you install an update that modifies a file
under /etc, /var, or /boot, you will need to copy those changes by hand.  It
is not wise to share /etc or /var (they are usually thought of as the place
where system-specific and variable data lives), and /boot, /sbin, and /lib
are usually needed before NFS filesystems can be mounted.  So any updates to
those partitions must be done manually.
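For those manual copies, something as dumb as a loop over the guest roots
would do.  This is only a sketch: the /guests/* layout is a made-up example
of where the domU roots might be mounted, and rsync would work as well as cp:

```shell
#!/bin/sh
# Push updated /boot and /lib out to each guest root by hand after the
# main system is patched.  /guests/* is a hypothetical layout.
for g in /guests/*; do
    [ -d "$g" ] || continue          # skip if no guest roots are mounted
    cp -a /boot/. "$g/boot/"
    cp -a /lib/.  "$g/lib/"
done
```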

I suppose the main server could run an update and then each NFS client could
run the same update, ignoring any read-only errors on /usr.  Seems like it
would work, but then that takes the same amount of time as updating individual
servers.


Or you could just...

* ...bite the bullet and do it the old-fashioned way.  A well-tuned OS doesn't
take up much room compared to swap and data.  Most of my installs are a few
hundred MB (I kill the documentation and only install what I need).  The
average Xen system probably has a dozen domains, so that's around 10GB.  That's
nothing with today's drives.

Doesn't give you quick-update ability but you can use something like yum or
apt.  I've installed both yum and apt servers; they're no big deal.
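For instance, on Debian-ish guests you'd just point every domU at your apt
server; the hostname here is a made-up example:

```shell
# Hypothetical /etc/apt/sources.list entry pointing all guests at one
# local mirror:
#
#   deb http://aptserver.example/debian stable main
#
# Then updating any one guest is just:
apt-get update && apt-get upgrade
```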

Hope that helps!

CD

You have to face a Holy God on Judgment Day. He sees lust as adultery (Matt. 
5:28) and hatred as murder (1 John 3:15). Will you be guilty? 

Jesus took your punishment on the cross, and rose again defeating death, to 
save you from Hell. Repent (Luke 13:5) and trust in Him today.

NeedGod.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users