
Re: [Xen-devel] xl block-attach vs block-detach



On Fri, 2012-03-02 at 07:53 +0000, Jan Beulich wrote:
> >>> On 01.03.12 at 18:30, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> wrote:
> >> Further, why is it that with no blktap module loaded I'm getting an
> >> incomplete attach when using the (deprecated) file:/ format for
> >> specifying the backing file? It reports that it would be using qdisk,
> >> and blkfront also sees the device appearing, but all I'm seeing in the
> >> kernel log is the single message from blkfront's probe function. (With
> >> no blktap in pv-ops, I wonder how file backed disks work there.)
> >> When trying to detach such a broken device I'm getting
> >> "unrecognized disk backend type: 0", and the remove fails.
> > 
> > That might well be a bug.  In addition to Ian's questions, what do you
> > get if you turn on the debug by passing xl lots of -v flags (before
> > the block-attach) ?
> 
> + xl -vvvvv block-attach 0 file:/srv/SuSE/SLES-11-SP1-MINI-ISO-x86_64-GMC3-CD.iso 0xca00 r
> libxl: debug: libxl_device.c:183:libxl__device_disk_set_backend: Disk vdev=0xca00 spec.backend=unknown
> libxl: debug: libxl_device.c:137:disk_try_backend: Disk vdev=0xca00, backend phy unsuitable as phys path not a block device
> libxl: debug: libxl_device.c:144:disk_try_backend: Disk vdev=0xca00, backend tap unsuitable because blktap not available
> libxl: debug: libxl_device.c:219:libxl__device_disk_set_backend: Disk vdev=0xca00, using backend qdisk
> libxl: debug: libxl_device.c:183:libxl__device_disk_set_backend: Disk vdev=0xca00 spec.backend=qdisk
> xc: debug: hypercall buffer: total allocations:2 total releases:2
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:0 misses:2 toobig:0
> + xl -vvvvv block-attach 0 file:/srv/SuSE/SLES-11-SP1-MINI-ISO-x86_64-GMC3-CD.iso 0xca00 r
> libxl: debug: libxl_device.c:183:libxl__device_disk_set_backend: Disk vdev=0xca00 spec.backend=unknown
> libxl: debug: libxl_device.c:137:disk_try_backend: Disk vdev=0xca00, backend phy unsuitable as phys path not a block device
> libxl: debug: libxl_device.c:144:disk_try_backend: Disk vdev=0xca00, backend tap unsuitable because blktap not available
> libxl: debug: libxl_device.c:219:libxl__device_disk_set_backend: Disk vdev=0xca00, using backend qdisk
> libxl: debug: libxl_device.c:183:libxl__device_disk_set_backend: Disk vdev=0xca00 spec.backend=qdisk
> xc: debug: hypercall buffer: total allocations:2 total releases:2
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:0 misses:2 toobig:0
> + xl -vvvvv block-detach 0 51712
> libxl: error: libxl.c:1223:libxl__device_from_disk: unrecognized disk backend type: 0
> 
> libxl_device_disk_remove failed.
> xc: debug: hypercall buffer: total allocations:2 total releases:2
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:0 misses:2 toobig:0
> + xl -vvvvv block-detach 0 51712
> libxl: error: libxl.c:1223:libxl__device_from_disk: unrecognized disk backend type: 0
> 
> libxl_device_disk_remove failed.
> xc: debug: hypercall buffer: total allocations:2 total releases:2
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:0 misses:2 toobig:0
> 
> > Can you attach the disk by naming it in the config file ?
> 
> Didn't try, for the purpose at hand I want the disk attached to Dom0.

Ah, I bet that is it -- it is very unlikely that dom0 has a qemu
running which would process the qdisk backend requests.

Hrm, in fact I wonder whether block-attach handles starting a qemu at
all if one isn't already running. I also wonder how well qemu handles
hotplug of disks when it is already running.

I think you may have opened a can of worms here. Hopefully someone will
correct me, but I expect there is work to be done here...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

