Same again here; thought I'd chip in with some info from what I've
experienced/noticed on my system (I haven't seen the instability described, though).
I hope these observations help with isolating the OP's issue.
On Friday 03 September 2010 10:06:45 Scott Garron wrote:
> On 8/31/2010 2:06 PM, Scott Garron wrote:
I also use LVM extensively and follow similar steps for backups (rough sketch below):
1) umount in domU
3) lvcreate snapshot
5) mount in domU
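Roughly, the cycle looks something like this (the volume group, device names
and mount points below are only placeholders, not my real setup, and the
detach/attach steps in between are simplified):

    # all commands run from dom0; names are examples only
    ssh root@guest 'umount /mnt/data'                  # 1) umount in domU
    xm block-detach guest xvdb                         #    detach the LV from the guest
    lvcreate -s -L 5G -n data-snap /dev/vg0/data       # 3) lvcreate snapshot
    xm block-attach guest phy:/dev/vg0/data xvdb w     #    re-attach the LV
    ssh root@guest 'mount /dev/xvdb /mnt/data'         # 5) mount in domU
    # ... back up /dev/vg0/data-snap from dom0, then:
    lvremove /dev/vg0/data-snap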
I, however, have no need for HVM and only use PV guests.
> On a hunch, I copied the kernel config from my desktop to the
> server, recompiled with those options, booted into it, and tried
> triggering the bug. It took more than two tries this time around, but
> it became apparent pretty quickly that things weren't quite right.
> Creations and removals of snapshot volumes started causing lvm to return
> "/dev/dm-63: open failed: no such device or address" and something along
> the lines of (paraphrasing here) "unable to remove active logical
> volume" when the snapshot wasn't mounted or active anywhere, but a few
> seconds later, without changing anything, you could remove it. udev
> didn't seem to be removing the dm-?? devices from /dev, though.
I also, on occasion, get the same "unable to remove active logical volume"
error even though the snapshots have been unmounted everywhere.
Sometimes I can remove them later; sometimes I have to "force" the snapshot
to fail by filling it up myself.
When that happens, I get similar messages along the lines of
"/dev/dm-63: open failed: no such device or address".
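For completeness, "filling up the snapshot" is nothing clever; roughly this
(placeholder names again; once the COW space is exhausted the snapshot is
marked invalid and lvremove then usually succeeds):

    # deliberately exhaust the snapshot's COW space so it gets invalidated
    dd if=/dev/zero of=/dev/vg0/data-snap bs=1M
    lvs vg0                          # snapshot should now show as full/invalid
    lvremove -f /dev/vg0/data-snap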
Are you certain the snapshots are large enough to hold all possible changes
that might occur on the LV during the existence of the snapshot?
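An easy way to keep an eye on that (placeholder names once more) is to watch
the snapshot allocation while the backup runs; if it reaches 100% the snapshot
is invalidated:

    # the "Snap%" column shows how much of the snapshot's COW space is used
    lvs vg0
    # or, for a single snapshot:
    lvdisplay /dev/vg0/data-snap | grep -i 'allocated to snapshot'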
Another thing I've noticed, which might be of help to people who understand this
better than I do: in my backup script, step "5" sometimes fails because the
domU hasn't yet noticed that the device has been attached again by the time I try to mount it.
The domU commands are run over SSH connections.
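A crude way to paper over that race (only a sketch; the device node and guest
name are placeholders) would be to have the script poll for the device inside
the domU before trying the mount:

    # wait up to ~30s for the re-attached device to show up, then mount it
    ssh root@guest 'for i in $(seq 1 30); do
                        [ -b /dev/xvdb ] && break; sleep 1;
                    done; mount /dev/xvdb /mnt/data'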