This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Re: iSCSI and LVM

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Re: iSCSI and LVM
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Sat, 19 Jun 2010 23:18:50 +0200
Cc: Ferenc Wagner <wferi@xxxxxxx>, James Harper <james.harper@xxxxxxxxxxxxxxxx>
Delivery-date: Sat, 19 Jun 2010 14:20:19 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <87fx0obc04.fsf@xxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <46C13AA90DB8844DAB79680243857F0F062078@xxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D019971B7@trantor> <87fx0obc04.fsf@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/; KDE/4.3.5; x86_64; ; )
On Tuesday 15 June 2010 16:34:35 Ferenc Wagner wrote:
> "James Harper" <james.harper@xxxxxxxxxxxxxxxx> writes:
> >> "James Harper" <james.harper@xxxxxxxxxxxxxxxx> writes:
> >>>> You can use live migration in such setup, even safely if you back
> >>>> it by clvm.  You can even live without clvm if you deactivate your
> >>>> VG on all but a single dom0 before changing the LVM metadata in any
> >>>> way.  A non-clustered VG being active on multiple dom0s isn't a
> >>>> problem in itself and makes live migration possible, but you'd
> >>>> better understand what you're doing.
> >>>
> >>> You can't snapshot though. I tried that once years ago and it made a
> >>> horrible mess.
> >>
> >> Even if done after deactivating the VG on all but a single node?
> >> That would be a bug.  According to my understanding, it should work.
> >> I never tried, though, as snapshotting isn't my preferred way of
> >> making backups.  On the other hand I run domUs on snapshots of local
> >> LVs without any problem.  And an LV being "local" is a concept beyond
> >> LVM in the above setting, so it can't matter...
> >
> > A snapshot is copy-on-write. Every time the 'source' is written to, a
> > copy of the original block is saved to the snapshot (I may have that the
> > wrong way around).
> It's a little bit more complicated, but the basic idea is this.
> > Doing that though involves a remapping of the snapshot every time the
> > source is written to (eg block x isn't in the 'source' anymore, so
> > storage is allocated to it etc) which involves a metadata update.
> No, operation of the snapshot doesn't involve continuous *LVM* metadata
> updates, even though the chunk mapping is really metadata with respect
> to the block devices themselves.
> > So if the VG remained deactivated on all nodes for the life of the
> > snapshot then it may work, and maybe this is what you meant in which
> > case you are correct.
> Yes, I didn't elaborate, but this is my advice.
> > If the activated the VG on the other nodes after creating the snapshot
> > though, then problems may (will) arise!
> Only if you access data in the same LV from different hosts (metadata
> updates are also excluded, of course).  From this point of view, the
> origin and the snapshot LVs (and the cow device) must be considered the
> "same" LV.  Basically, this is why clvm does not support snapshots.  And
> of course I didn't consider cluster filesystems and similar above.
> I think we're pretty much on the same page.
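The deactivate-then-snapshot procedure discussed above can be sketched roughly as follows. This is only an illustration of the idea, not a tested recipe: the VG/LV names, hostnames, and snapshot size are hypothetical, and it assumes a non-clustered VG shared over iSCSI by several dom0s.

```shell
#!/bin/sh
# Hypothetical sketch of the workflow from the thread above.
# VG/LV names, hostnames, and sizes are made-up examples.

VG=vg_san          # shared (non-clustered) VG on the iSCSI LUN
LV=domu-disk       # logical volume backing the domU

# 1. Deactivate the VG on every dom0 except the one doing the backup,
#    so that only one host touches the LVM metadata.
for host in dom0-b dom0-c; do
    ssh "$host" "vgchange -an $VG"
done

# 2. Now it is safe to change LVM metadata: create the snapshot.
lvcreate --snapshot --size 2G --name "${LV}-snap" "$VG/$LV"

# 3. Back up from the snapshot, then remove it (lvremove is another
#    metadata change, so the VG must still be inactive elsewhere).
dd if="/dev/$VG/${LV}-snap" bs=1M | gzip > "/backup/${LV}.img.gz"
lvremove -f "$VG/${LV}-snap"

# 4. Reactivate the VG on the other dom0s.
for host in dom0-b dom0-c; do
    ssh "$host" "vgchange -ay $VG"
done
```

Note that this only keeps the LVM metadata consistent across hosts; as discussed elsewhere in the thread, it says nothing about the consistency of the filesystem inside the snapshot while the domU is running.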

I would especially like to add, for Jonathan, that snapshotting virtual 
machines does not provide a safe way of backing them up unless they are shut 
down first.

