WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-users] Questions on qcow, qcow2 versus LVM

To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Subject: Re: [Xen-users] Questions on qcow, qcow2 versus LVM
From: "Matthew Law" <matt@xxxxxxxxxxxxxxxxxx>
Date: Tue, 29 Dec 2009 23:26:00 -0000
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 29 Dec 2009 15:26:43 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
Importance: Normal
In-reply-to: <7207d96f0912291404h24d7daeat3ea634edc368357f@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <2ff3192ddfc512732cd0a6955fa51595.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <4B2F776A.9000406@xxxxxxxxx> <eb42866b97a96eb5bd57b22e47cde8cc.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <7207d96f0912240451m76d65118y5fb9a32ed63f411b@xxxxxxxxxxxxxx> <83c17d8cd6753530b00f134159151864.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <7207d96f0912240624hf1d0d17w658048dac8311cb5@xxxxxxxxxxxxxx> <f17de79d17fea73fa0ea0c22259f779d.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <7207d96f0912241358o1bdcf15bi1514f6628b86068d@xxxxxxxxxxxxxx> <aa8fed56d2dbf587df790cda9f9525de.squirrel@xxxxxxxxxxxxxxxxxxxxxx> <7207d96f0912291404h24d7daeat3ea634edc368357f@xxxxxxxxxxxxxx>
Reply-to: matt@xxxxxxxxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: SquirrelMail/1.4.19
On Tue, December 29, 2009 10:04 pm, Fajar A. Nugraha wrote:
> Another way is to investigate why your earlier setup has problems.
> To eliminate partition problems, you can map the disk to dom0  like
> this:
>
> modprobe xenblk
> xm block-attach 0 phy:/dev/vg_name/lv_name xvda w
> ### do your stuff here. fdisk xvda, mkfs, tar, whatever. Use fdisk
> instead of parted.
> ### don't forget to umount afterwards
> xm block-list 0
> xm block-detach 0 51712 <== 51712 is the devid for xvda
>
> If that works, then it's 100% confirmed the problem is with
> parted/kpartx. Repeat the test, but this time using parted instead of
> fdisk, and you get the idea :D

Thanks, Fajar! Using this method I could create a single partition on the
LV with fdisk, format it as ext3, mount it, untar a VM image onto it and
boot the VM with pvgrub as before.  I then destroyed the domU with xm and
removed the LV with no problems - result!
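
For reference, the rough sequence was something like this (the volume
group, LV, mount point, domU and image names below are only examples):

modprobe xenblk
xm block-attach 0 phy:/dev/VolGroupVM/testvm xvda w
fdisk /dev/xvda                      # create a single primary partition
mkfs.ext3 /dev/xvda1
mount /dev/xvda1 /mnt/vm
tar -xzf /path/to/vm-image.tar.gz -C /mnt/vm
umount /mnt/vm
xm block-list 0                      # note the devid (51712 for xvda)
xm block-detach 0 51712
# boot the domU from its own config (pointing at the LV) with pvgrub, then:
xm destroy testvm
lvdisplay /dev/VolGroupVM/testvm     # the "# open" count should be 0 now
lvremove /dev/VolGroupVM/testvm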

After this I set about finding which of the previous operations was
holding the LV in the open state, so I started again with a clean LV,
performed each operation on it in turn and tried to remove the LV after
each step.  The error occurs after running:

parted /dev/VolGroupVM/testvm mkpartfs primary ext2 0 10240

So, parted is the culprit (or at least the first one to cause the
problem).  Is there perhaps another, scriptable way to create the
partitions on the LV?
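
(Something like sfdisk might do it, since it reads a partition description
from stdin - the exact input syntax varies between util-linux versions, so
check the man page; the LV path below is the same testvm example as above:

# one primary Linux partition (type 83) spanning the whole LV
echo ',,83' | sfdisk /dev/VolGroupVM/testvm

fdisk can also be driven non-interactively from a here-document if sfdisk's
input format turns out to be awkward.)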

Matt.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users