Re: [Xen-users] snapshot - backup for xen vm's
Fajar A. Nugraha wrote:
Nico Kadel-Garcia wrote:
Mirror the whole RedHat DVD or network install site. Copy it to local
disk on the domain creating machine: you want this all to happen on
local disk, *NOT* over a network! Make a new partition, and mount it.
Copy the whole thing over to /var/cache/yum/base/packages and work
from there to install whatever you want, usually with yum. Do an
alternative root directory RPM or Yum installation into that
directory. If you use yum over the network it has to download the files:
it's vastly faster to simply copy them over and use a "file:///" based
URL rather than the classic HTTP or FTP URLs. The packages will either
be already available or wind up downloaded into /var/cache/yum in the
target directory, and you have a huge advantage in being able to do the
updates or add-on packages at OS image install time, rather than wending
your way through them afterwards.
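Roughly, and assuming the mirror was copied to /srv/rhel-mirror and the
new root is mounted at /mnt/newroot (both paths are only illustrative),
that install step looks something like:

    # point a repo file at the local copy instead of an HTTP/FTP URL;
    # baseurl should point at whichever directory holds the repodata/
    cat > /etc/yum.repos.d/local-mirror.repo <<'EOF'
    [local-mirror]
    name=local RHEL mirror
    baseurl=file:///srv/rhel-mirror
    enabled=1
    gpgcheck=0
    EOF

    # install the base system into the alternate root
    # (depending on the yum version you may need -c to hand it an explicit yum.conf)
    yum -y --installroot=/mnt/newroot groupinstall Base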
In my setup I use a local yum repository via http.
I have several installation methods, actually:
- for PV domUs: use tar.gz templates.
- for HVM domUs and physical machines: use a custom RH installer, either
via CD/DVD or network (PXE) install.
Working via HTTP is usually far, far slower than working via local file
access to simply copy the repository.
It's even *faster* if you don't build your partition first: use "cp -al"
to copy the repository to the chrooted /var/cache/yum/packages, and run
a chrooted "yum clean all" when you're done. Then you build the tarball
from a directory, not a partition.
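In shell terms that's roughly the following (paths and repo id are only
illustrative):

    # hard-link (not copy) the mirrored RPMs into the chroot's yum cache;
    # the directory name has to match the repo id ("base" here), and hard
    # links only work within one filesystem -- part of why a plain build
    # directory beats a freshly made partition
    mkdir -p /srv/newroot/var/cache/yum/base/packages
    cp -al /srv/rhel-mirror/Server/. /srv/newroot/var/cache/yum/base/packages/

    # ...run the yum --installroot=/srv/newroot install as before...

    # removing the hard links afterwards doesn't touch the originals
    chroot /srv/newroot yum clean all

    # build the template straight from the directory
    tar czf /srv/templates/rhel5-base.tar.gz -C /srv/newroot .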
The tar.gz approach is significantly faster than an rpm-based install
(including yum --installroot), even when the RPMs are on local disk (I
tried this before), partly because RPM needs extra steps (dependency
resolution, pre/post install scripts, etc.) beyond simply writing files
the way the tar.gz does. The template has all necessary add-on packages
included. You still need to do a "yum upgrade" afterwards, but the total
time required is still a lot faster.
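Deploying from such a template is then little more than this kind of
thing (device and path names invented for the example):

    # carve out storage for a new PV domU and unpack the template
    lvcreate -L 4G -n domU1 VolGroup00
    mkfs.ext3 /dev/VolGroup00/domU1
    mkdir -p /mnt/domU1
    mount /dev/VolGroup00/domU1 /mnt/domU1
    tar xzf /srv/templates/rhel5-base.tar.gz -C /mnt/domU1

    # then pull current updates from the local repository
    chroot /mnt/domU1 yum -y upgrade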
Oh, yes. I've done..... roughly 15,000 Linux servers in one month that
way. I referred to the RPM/Yum games as ways of building your initial
tarballs, without having to worry about a machine or environment to get
that initial install into.
The custom RH installer is basically the standard RHEL 5 installer with
a kickstart file that adds an additional yum repository and installs
some local packages. It checks files from the CD/DVD (if available) and
the network repository, chooses the newest version, and performs the
installation.
This way it performs updates and installs add-on packages at OS image
install time. The CD/DVD is created from an ISO image that is updated
weekly to contain only the newest packages in addition to add-on
packages. You still get the newest packages at install time even if you
use an old DVD, since it will automatically compare the local version
with the one in the network repository and choose whichever is newest.
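For illustration, the relevant kickstart lines would look roughly like
this (host names, repo names and the add-on package are placeholders):

    # install from the media, but also offer the weekly-updated network
    # repo and the local add-on repo; the installer takes whichever
    # version of a package is newest
    cdrom
    repo --name=updates --baseurl=http://build.example.com/rhel5/latest
    repo --name=local-addons --baseurl=http://build.example.com/addons

    %packages
    @base
    local-addon-package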
Yup, I've used that sort of technique myself and highly approve of it.
You do need to keep it vaguely up-to-date to keep installations fast:
I've used it for network booting Beowulf clusters, which got wiped and
re-installed every time they rebooted.
Network BW is not an issue since I'm working on local, 1Gbps network.
Besides, "Mirror the whole RedHat DVD or network install site. Copy it
to local disk on the domain" would use up that BW anyway. Again, this
was the best option for me but it might not be suitable for everybody.
Nah, you leave it on a build machine from which you generate your
tarballs, or do it on your HTTP server. The download directory is left
around for the next session, not downloaded each time. It avoids the
HTTP download time you mentioned above.
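With yum that's mostly a matter of telling it not to throw the downloads
away, e.g. in the build machine's /etc/yum.conf:

    [main]
    # keep downloaded RPMs under /var/cache/yum for the next session
    keepcache=1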
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users