This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Making snapshot of logical volumes handling HVM domU causes OOPS and instability

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Making snapshot of logical volumes handling HVM domU causes OOPS and instability
From: "J. Roeleveld" <joost@xxxxxxxxxxxx>
Date: Sun, 12 Sep 2010 11:41:46 +0200
Delivery-date: Sun, 12 Sep 2010 02:42:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C80AC95.5080503@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4C7864BB.1010808@xxxxxxxxxxxxxxxxxx> <4C7D44B0.9060105@xxxxxxxxxxxxxxxxxx> <4C80AC95.5080503@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.13.5 (Linux/2.6.30-gentoo-r5; KDE/4.4.5; x86_64; ; )
Hi All,

Same here; I thought I'd chip in with some information from what I have
experienced/noticed on my own system (I haven't seen the instability described, though).
I hope this helps with isolating the OP's issue.

On Friday 03 September 2010 10:06:45 Scott Garron wrote:
> On 8/31/2010 2:06 PM, Scott Garron wrote:


I also use LVM extensively and follow similar steps for backups:
1) umount in domU
2) block-detach
3) lvcreate snapshot
4) block-attach
5) mount in domU

I, however, have no need for HVM and only use PV guests.
(All Linux)
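For reference, the five steps above look roughly like this in my backup script. All the names here (guest name, volume group, device node, mount point, snapshot size) are placeholders for illustration, and I'm showing the xm toolstack; adjust for your own setup. It defaults to a dry run that only prints the commands, so the sequence can be checked without touching a real domU:

```shell
#!/bin/sh
# Sketch of the snapshot-backup sequence. GUEST, VG, LV, DEV and the
# mount point are placeholder names, not anything from a real setup.
# Defaults to a dry run that echoes the commands; set DRY_RUN= (empty)
# to actually execute them.
: "${DRY_RUN:=1}"

GUEST=myguest
VG=vg0
LV=data
DEV=xvdb

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "$@"
    else
        "$@"
    fi
}

run ssh root@"$GUEST" umount /mnt/data            # 1) umount in domU
run xm block-detach "$GUEST" "$DEV"               # 2) block-detach
run lvcreate --snapshot --size 2G \
    --name "${LV}_snap" "/dev/$VG/$LV"            # 3) lvcreate snapshot
run xm block-attach "$GUEST" \
    "phy:/dev/$VG/$LV" "$DEV" w                   # 4) block-attach
run ssh root@"$GUEST" mount /mnt/data             # 5) mount in domU
```

The point of detaching before the snapshot is that nothing in the guest has the LV open while lvcreate runs.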

>       On a hunch, I copied the kernel config from my desktop to the
> server, recompiled with those options, booted into it, and tried
> triggering the bug.  It took more than two tries this time around, but
> it became apparent pretty quickly that things weren't quite right.
> Creations and removals of snapshot volumes started causing lvm to return
> "/dev/dm-63: open failed: no such device or address" and something along
> the lines of (paraphrasing here) "unable to remove active logical
> volume" when the snapshot wasn't mounted or active anywhere, but a few
> seconds later, without changing anything, you could remove it.  udev
> didn't seem to be removing the dm-?? devices from /dev, though.

I also, on occasion, get the same "unable to remove active logical volume"
error even though the snapshots have been unmounted.
Sometimes I can remove them later; sometimes I have to "force" the snapshot
to fail by filling it up myself.
When that happens, I get similar messages about "/dev/dm-63: open failed: no
such device or address".

Are you certain the snapshots are large enough to hold all possible changes 
that might occur on the LV during the existence of the snapshot?
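One way to keep an eye on this is `lvs`, which can report how full each snapshot is (the `data_percent` field; older LVM releases call it `snap_percent`). A small sketch that flags snapshots over a threshold; since there's no real volume group here, it parses example output instead of a live `lvs` run:

```shell
# Flag snapshots that are close to full. In real use you would pipe in
# something like:
#   lvs --noheadings -o lv_name,data_percent
# (field name is snap_percent on older LVM versions). Example output is
# used here since no real volume group is available.
check_snapshots() {
    awk -v limit=80 '
        { pct = $2 + 0
          if (pct > limit)
              printf "WARNING: snapshot %s is %s%% full\n", $1, $2
        }'
}

printf '%s\n' \
    "data_snap  91.20" \
    "home_snap  12.05" | check_snapshots
# prints: WARNING: snapshot data_snap is 91.20% full
```

Once a snapshot reaches 100% it is invalidated by LVM, which would fit the "open failed" symptoms.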

Another thing I've noticed, which might help people who understand this
better than I do: in my backup script, step "5" sometimes fails because the
domU hasn't yet noticed that the device is attached again when I try to mount it.
The domU commands are run over SSH connections.
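My workaround for that race is to retry the mount a few times instead of failing on the first attempt, since the re-attached device can take a moment to appear inside the domU. A generic retry helper (the guest name and mount point in the usage comment are placeholders):

```shell
# Retry a command up to $1 times, sleeping one second between attempts.
# Returns non-zero if it never succeeds.
retry() {
    tries=$1; shift
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1
        fi
        sleep 1
    done
}

# Used in the backup script instead of a bare mount, e.g.:
#   retry 10 ssh root@myguest mount /mnt/data
```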

