
Re: [PATCH V10 1/3] libxl: Add support for Virtio disk configuration




On 17.06.22 17:29, Anthony PERARD wrote:


Hello Anthony

On Wed, Jun 15, 2022 at 07:32:54PM +0300, Oleksandr wrote:
On 15.06.22 17:23, Anthony PERARD wrote:
On Mon, Jun 13, 2022 at 09:05:20PM +0300, Oleksandr Tyshchenko wrote:
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..f2b0ff5 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
           return 0;
       }
+    if (disk.specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        fprintf(stderr, "block-attach is only supported for specification xen\n");
This check prevents a previously working `block-attach` command line
from working.

      # xl -Tvvv block-attach 0 /dev/vg/guest_disk,raw,hda
      block-attach is only supported for specification xen

At least, that works by adding ",specification=xen", but it should work
without it as "xen" is the default (from the man page).
yes, you are right. thank you for pointing this out.


Maybe the check is done too soon, or maybe a better place to do it would
be in libxl.

libxl__device_disk_setdefault() is called much later, while executing
libxl_device_disk_add(), so `xl` can't rely on the default being applied
there to "disk.specification".
I got it.


`xl block-attach` calls libxl_device_disk_add(), which I think is only
called for hotplug of a disk. If I recall correctly, libxl__add_disks() is
called instead at guest creation. So maybe it is possible to do
something in libxl_device_disk_add(), but that's a function defined by a
macro, and the macro uses the same libxl__device_disk_add() as
libxl_device_disk_add(). On the other hand, there is a "hotplug"
parameter to libxl__device_disk_setdefault(), maybe that could be used?
Thank you for digging into the details here.

If I understood your suggestion correctly, we can simply drop the checks in
main_blockattach() (and likely main_blockdetach()?) and add one to
libxl__device_disk_setdefault().


diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index 9e82adb..96ace09 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -182,6 +182,11 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
          disk->transport = LIBXL_DISK_TRANSPORT_MMIO;
      }

+    if (hotplug && disk->specification != LIBXL_DISK_SPECIFICATION_XEN) {
+        LOGD(ERROR, domid, "Hotplug is only supported for specification xen");
Maybe check for `specification == VIRTIO` instead, and report that
hotplug isn't supported in virtio's case.

ok, will do



+        return ERROR_FAIL;
+    }
+
      /* Force Qdisk backend for CDROM devices of guests with a device model. */
      if (disk->is_cdrom != 0 &&
          libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {


Is my understanding correct?
Yes, that looks correct.


thank you for the confirmation



But I didn't look at the block-detach side, and it seems that part only
uses generic functions to remove a disk. So there is no easy way to
prevent hotunplug from libxl. But `xl` does have access to a fully
initialised "disk", so it can check the value of "specification"; I guess
the check can stay in main_blockdetach().

ok, I got it
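For reference, the xl-side check could look roughly like this. This is a
minimal sketch only: the enum stand-ins and the helper name
(blockdetach_spec_check) are hypothetical placeholders, not the real
generated libxl definitions from libxl_types.idl.

```c
#include <stdio.h>

/* Hypothetical stand-ins for the generated libxl enum values; the real
 * definitions come from libxl_types.idl via the IDL code generator. */
typedef enum {
    LIBXL_DISK_SPECIFICATION_XEN = 0,
    LIBXL_DISK_SPECIFICATION_VIRTIO = 1,
} libxl_disk_specification;

/* Sketch of the check that stays in main_blockdetach(): `xl` has a fully
 * initialised disk, so it can refuse hotunplug for virtio disks before
 * calling into libxl's generic removal path. Returns non-zero to refuse. */
static int blockdetach_spec_check(libxl_disk_specification spec)
{
    if (spec == LIBXL_DISK_SPECIFICATION_VIRTIO) {
        fprintf(stderr,
                "Hotunplug isn't supported for specification virtio\n");
        return 1;
    }
    return 0;
}
```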


So, it works:

root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3,specification=virtio
libxl: error: libxl_disk.c:186:libxl__device_disk_setdefault: Domain 2:Hotplug isn't supported for specification virtio
libxl: error: libxl_device.c:1468:device_addrm_aocomplete: unable to add device
libxl_device_disk_add failed.

root@generic-armv8-xt-dom0:~# xl block-detach DomU 51713
Hotunplug isn't supported for specification virtio

**********

root@generic-armv8-xt-dom0:~# xl block-attach DomU /dev/loop0,raw,xvda3
[  364.656091] xen-blkback: backend/vbd/3/51715: using 4 queues, protocol 1 (arm-abi)



Cheers,

--
Regards,

Oleksandr Tyshchenko



