
[Xen-devel] [PATCH 4/4] xen-blkback: support dynamic unbind/bind



By re-attaching to the shared rings during connect_ring(), rather than
assuming they are freshly allocated (i.e. assuming the counters are zero),
it is possible for vbd instances to be unbound from, and re-bound to, a
running guest.
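
For reference, BACK_RING_ATTACH() is introduced earlier in this series. A
minimal sketch of the idea, assuming the standard BACK_RING_INIT() from
include/xen/interface/io/ring.h as the starting point (the body below is
illustrative only, not the exact macro from the earlier patch):

  /*
   * BACK_RING_INIT() zeroes the private counters, which is only valid
   * for a freshly allocated ring. An attach variant must instead resume
   * from the counters already published in the shared ring.
   */
  #define BACK_RING_ATTACH(_r, _s, __size) do {                        \
      (_r)->rsp_prod_pvt = (_s)->rsp_prod; /* resume, don't zero */    \
      (_r)->req_cons = (_s)->rsp_prod;     /* re-read unanswered reqs */\
      (_r)->nr_ents = __RING_SIZE(_s, __size);                         \
      (_r)->sring = (_s);                                              \
  } while (0)

Starting req_cons from rsp_prod means any request the old backend
instance consumed but had not yet answered is picked up again after the
re-bind rather than being lost.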

This has been tested by running:

while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done

in a PV guest whilst running:

while true;
  do echo vbd-$DOMID-$VBD >unbind;
  echo unbound;
  sleep 5;
  echo vbd-$DOMID-$VBD >bind;
  echo bound;
  sleep 3;
  done

in dom0, from /sys/bus/xen-backend/drivers/vbd, to continuously unbind and
re-bind the guest's system disk image.

This is a highly useful feature for a backend module, as it allows the
module to be unloaded and re-loaded (i.e. updated) without requiring domUs
to be halted.
This was also tested by running:

while true;
  do echo vbd-$DOMID-$VBD >unbind;
  echo unbound;
  sleep 5;
  rmmod xen-blkback;
  echo unloaded;
  sleep 1;
  modprobe xen-blkback;
  echo bound;
  # the driver directory was re-created by modprobe; re-resolve the stale cwd
  cd $(pwd);
  sleep 3;
  done

in dom0 whilst running the same loop as above in the (single) PV guest.

Some (less stressful) testing has also been done using a Windows HVM guest
with the latest 9.0 PV drivers installed.

Signed-off-by: Paul Durrant <pdurrant@xxxxxxxxxx>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
---
 drivers/block/xen-blkback/xenbus.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index e8c5c54e1d26..0b82740c4a9d 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -196,24 +196,24 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
        {
                struct blkif_sring *sring;
                sring = (struct blkif_sring *)ring->blk_ring;
-               BACK_RING_INIT(&ring->blk_rings.native, sring,
-                              XEN_PAGE_SIZE * nr_grefs);
+               BACK_RING_ATTACH(&ring->blk_rings.native, sring,
+                                XEN_PAGE_SIZE * nr_grefs);
                break;
        }
        case BLKIF_PROTOCOL_X86_32:
        {
                struct blkif_x86_32_sring *sring_x86_32;
                sring_x86_32 = (struct blkif_x86_32_sring *)ring->blk_ring;
-               BACK_RING_INIT(&ring->blk_rings.x86_32, sring_x86_32,
-                              XEN_PAGE_SIZE * nr_grefs);
+               BACK_RING_ATTACH(&ring->blk_rings.x86_32, sring_x86_32,
+                                XEN_PAGE_SIZE * nr_grefs);
                break;
        }
        case BLKIF_PROTOCOL_X86_64:
        {
                struct blkif_x86_64_sring *sring_x86_64;
                sring_x86_64 = (struct blkif_x86_64_sring *)ring->blk_ring;
-               BACK_RING_INIT(&ring->blk_rings.x86_64, sring_x86_64,
-                              XEN_PAGE_SIZE * nr_grefs);
+               BACK_RING_ATTACH(&ring->blk_rings.x86_64, sring_x86_64,
+                                XEN_PAGE_SIZE * nr_grefs);
                break;
        }
        default:
-- 
2.20.1

