WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-users] Re: 100% safe way to backup domU: (was Yet another backup)

On Fri, 2007-01-12 at 20:59 +0530, Ligesh wrote:
> On Fri, Jan 12, 2007 at 10:13:08PM +0800, Tim Post wrote:
> > On Fri, 2007-01-12 at 17:01 +0530, Ligesh wrote:
> > > 
> > >  Thanks.
> > > 
> > 
> > I figured I'd chirp in. 
> > 
> > You guys are trying to make an exact science out of something really
> > dynamic, but I agree an application educated enough to pull this off is
> > sorely needed.
> 
> 
>   If we can't do exact science when we have got the kernel tightly under our 
> fists--or at least under an LVM--then what's the point of it all? :-) 
>  
> > 
> > Let's look at a small paravirtualized domain running AMP, supporting 3 -
> > 5 virtual hosts, each of those vhosts is a blog, forum, wiki, something
> > database driven.
> > 
> > Two of them are just WordPress blogs with MyISAM tables. Three of them
> > use InnoDB tables (row-level locking).
> > 
> 

>  Yeah, the thing is, Virtuozzo has been in the industry for 5 years now, 
> and they have been doing mostly live backups--though of course, it is 
> always recommended to shut the VPS down, it isn't mandatory like in the 
> case of Xen. So recovering from an application crash seems to be pretty 
> much possible, especially if it is properly designed software. 
> We need to scale to have the ability to manage 10,000 VPSes without ever 
> worrying about what's going on inside each one. 
> And this IS the real world situation, as far as hosting is concerned.

I think worrying about what's going on inside of each one *should* be a
goal :) Remember some key differences in VZ:

Burstable RAM doesn't lend itself well to process caching,
It doesn't lend itself well to each guest utilizing its own swap,
Single kernel.

If *anyone* should be looking at each VM individually, it's VZ, because
it would be so easy for them to do so. 

All we need is a peek at /proc to determine if it is, or isn't, a good
time to sync and take a snapshot.

100% swap usage with 100% inode use is *not* a good time to sync
disks :)
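A guest-side check of that kind could be a short shell sketch like the one below, which derives swap usage from /proc/meminfo and inode usage from df. The 90% thresholds and the `safe_to_snapshot` helper are illustrative assumptions, not anything Xen or VZ ships:

```shell
#!/bin/sh
# Sketch of a guest-side "is now a good time to snapshot?" check,
# assuming a classic Linux /proc layout. Thresholds are illustrative.

# Return success unless swap or inode usage crosses a threshold.
# Arguments: swap-used-percent inode-used-percent
safe_to_snapshot() {
    swap_pct=$1
    inode_pct=$2
    [ "$swap_pct" -lt 90 ] && [ "$inode_pct" -lt 90 ]
}

# Current swap usage, from /proc/meminfo (values are in kB).
swap_total=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree/ {print $2}' /proc/meminfo)
if [ "$swap_total" -gt 0 ]; then
    swap_used_pct=$(( (swap_total - swap_free) * 100 / swap_total ))
else
    swap_used_pct=0
fi

# Inode usage on the root filesystem, from df -i (strip the trailing %).
inode_used_pct=$(df -i / | awk 'NR==2 {sub(/%/,"",$5); print $5}')

if safe_to_snapshot "$swap_used_pct" "$inode_used_pct"; then
    echo "ok to sync and snapshot"
else
    echo "defer: swap ${swap_used_pct}%, inodes ${inode_used_pct}%"
fi
```

A real xenbackup-style daemon would presumably report this verdict to dom0 over some channel rather than just printing it.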


>  Anyway, again, we are missing the obvious. Linux has a software suspend 
> feature.
> The thing with software suspend is that it has to sync the disk properly, 
> since the system has to 
> work under normal bootup too, and thus merely saving the state won't be 
> sufficient to ensure data integrity. 

My point exactly.

> This is exactly what Mark has been pointing out. A sync-and-save implemented 
> in the domU. And it can actually ensure 100% safe backups. 
> So implementing the software suspend inside the domU is all that we need.
> 
>  xm sync-and-save domU file
>  snapshot domU
>  backup snapshot + file
>  xm restore domU
> 

Agreed, but where is the sanity check to ensure 'now is a good time to
sync'? There *has* to be some sort of communication with the guest if
this is going to be automated, I would think?
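Pending a real guest-side sync hook, the mechanical half of the proposed cycle can be approximated with stock tools: `xm save`/`xm restore` plus an LVM snapshot taken while the guest is paused. The volume group, paths, and 1G snapshot size in this sketch are hypothetical; set `RUN=echo` for a dry run:

```shell
#!/bin/sh
# Sketch of a save -> snapshot -> resume -> backup cycle using commands
# that exist today. Assumes the guest disk is an LVM volume under /dev/vg0
# (hypothetical names throughout). Set RUN=echo to dry-run the commands.
RUN=${RUN:-}

backup_domu() {
    domu=$1
    state=/var/backup/$domu.state
    lv=/dev/vg0/$domu          # assumed: one LV per guest

    $RUN xm save "$domu" "$state"                  # pause guest, dump state
    $RUN lvcreate -s -L 1G -n "$domu-snap" "$lv"   # point-in-time snapshot
    $RUN xm restore "$state"                       # guest resumes quickly

    # Copy the snapshot at leisure, then drop it.
    $RUN dd if="$lv-snap" of=/var/backup/$domu.img bs=1M
    $RUN lvremove -f "$lv-snap"
}

# Example (dry run): RUN=echo backup_domu example-guest
```

The proposed `xm sync-and-save` would slot in where plain `xm save` is now, once the guest can be told to flush and quiesce first.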

My other point is, what happens with guests that have 200 GB root file
systems? That's a heap of HD space needed to back up each one. 

What we need (for disaster recovery) is :

/etc
/var
/home
/usr/local

.. and incrementals thereof, on a typical (classic) Linux system.
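That selective-and-incremental idea can be sketched from inside the guest with GNU tar's listed-incremental mode; the directory list and destination path below are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a selective, incremental guest backup: archive only the
# directories that matter for disaster recovery. Uses GNU tar's
# --listed-incremental snapshot file; paths are illustrative.

backup_dirs() {
    dest=$1; shift            # e.g. /backup/guest1
    mkdir -p "$dest"
    stamp=$(date +%Y%m%d)
    # tar consults (and updates) $dest/state so that later runs emit
    # only files changed since the previous run; the first run is full.
    tar --listed-incremental="$dest/state" -czf "$dest/$stamp.tar.gz" "$@"
}

# Typical invocation from inside a guest:
# backup_dirs /backup/$(hostname) /etc /var /home /usr/local
```

A 200 GB root filesystem then costs only what those directories actually hold, rather than a full snapshot's worth of space per guest.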

While I agree that LVM snapshots are the easiest way (now), if
developing something better, shouldn't a smaller option come into play?

I'd rather run a daemon on the guests (or kernel module) named xenbackup
than have to take a snapshot of every guest that needed a backup.

I don't mean to be argumentative, but real world also implies space
limitations for cost-effective operation, cheap HDs, and hosts who sell
enormous chunks of them cheaply :)

Best,
--Tim



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
