
Re: [Xen-devel] [PATCH RFC] Live migration for VMs with QEMU backed local storage



On Fri, Jun 23, 2017 at 03:42:20AM -0400, Bruno Alvisio wrote:
> This patch is the first attempt at adding live migration of instances with
> local storage to Xen. It only handles the very restricted case of fully
> virtualized HVM guests. The code uses the "drive-mirror" capability
> provided by QEMU. A new "-l" option is introduced to the "xl migrate"
> command; if provided, the local disk is mirrored during the migration
> process. When the option is set, a QEMU NBD server is started on the
> destination during VM creation. After the instance is suspended on the
> source, the QMP "drive-mirror" command is issued to mirror the disk to the
> destination. Once the mirroring job is complete, the migration process
> continues as before. Finally, the NBD server is stopped after the instance
> is successfully resumed on the destination node.

Since I'm not familiar with all this, can this "drive-mirror" QEMU
capability handle the migration of a disk while it is actively being used?
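
FWIW, my (possibly naive) reading of the sequence described above, in QMP
terms, would be roughly the following (device names, host and port are
placeholders, not necessarily what the patch uses):

    # on the destination QEMU: expose the target drive over NBD
    { "execute": "nbd-server-start",
      "arguments": { "addr": { "type": "inet",
                               "data": { "host": "0.0.0.0", "port": "10809" } } } }
    { "execute": "nbd-server-add",
      "arguments": { "device": "drive0", "writable": true } }

    # on the source QEMU: mirror the local drive into that export
    { "execute": "drive-mirror",
      "arguments": { "device": "drive0",
                     "target": "nbd://dst-host:10809/drive0",
                     "sync": "full", "mode": "existing", "format": "raw" } }

    # once the job reports BLOCK_JOB_READY the copy is in sync; after the
    # migration finishes the export is torn down on the destination
    { "execute": "nbd-server-stop" }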

> A major problem with this patch is that the mirroring of the disk is
> performed only after the memory stream is completed and the VM is
> suspended on the source; thus the instance is frozen for a long period of
> time. The reason this happens is that the QEMU process (needed for the
> disk mirroring) is started on the destination node only after the memory
> copying is completed. One possibility I was considering to solve this
> issue (if it is decided that this capability should be used): could a
> "helper" QEMU process be started on the destination node at the beginning
> of the migration sequence, with the sole purpose of handling the disk
> mirroring, and be killed at the end of the migration sequence?
> 
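
Regarding the "helper" process idea above: purely as an illustration (path,
port and host are made up), the destination image could in principle be
exported before the real device model exists by something as simple as
qemu-nbd, with the source's "drive-mirror" then pointed at that export:

    # destination host: serve the pre-created target image over NBD
    qemu-nbd --format=raw --port=10809 /var/lib/xen/images/guest-disk.img

The source QEMU could then start mirroring into it earlier, without waiting
for the memory copy to finish.
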
> From the suggestions given by Konrad Wilk and Paul Durrant, the preferred
> approach would be to handle the mirroring of disks in QEMU instead of it
> being handled directly by, for example, blkback. It would be very helpful
> for me to have a mental map of all the scenarios that can be encountered
> regarding local disks (Xen could start supporting live migration of
> certain types of local disks). These are the ones I can think of:
> - Fully Virtualized HVM: QEMU emulation

PV domains can also use the QEMU PV disk backend, so it should be
feasible to handle this migration for all guest types just using
QEMU.
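
For example, a PV guest can already have its disk served by QEMU by
selecting the qdisk backend in the guest config, something along the lines
of (illustrative path):

    # xl guest config fragment: file-backed disk handled by the QEMU PV
    # backend (qdisk) rather than blkback/blktap
    disk = [ 'format=qcow2, vdev=xvda, access=rw, backendtype=qdisk, target=/var/lib/xen/images/guest.qcow2' ]

so the same QEMU-based mirroring could in principle cover that case too.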

> - blkback

TBH, I don't think such a feature should be added to blkback. It's too
complex to be implemented inside the kernel itself.

There are already options in Linux to perform block device duplication at
the block level itself, like DRBD [0], and IMHO that is what should be used
in conjunction with blkback.
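
As a purely illustrative sketch (hostnames, device paths and addresses are
made up), a DRBD resource replicating the guest's backing device between
the two hosts could look like:

    # /etc/drbd.d/guest0.res
    resource guest0 {
        device    /dev/drbd0;
        disk      /dev/vg0/guest0;
        meta-disk internal;
        on src-host {
            address 192.168.1.10:7789;
        }
        on dst-host {
            address 192.168.1.11:7789;
        }
    }

blkback would then simply be pointed at /dev/drbd0 (e.g. via a 'phy:' disk
in the guest config) and the replication is entirely DRBD's business.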

Remember that at the end of the day the Unix philosophy has always been to
implement simple tools that solve specific problems, and then glue them
together in order to solve more complex problems.

In that line of thought, why not simply use iSCSI or similar in order
to share the disk with all the hosts?
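
For example (target IQN and portal below are made up), the same LUN could
simply be attached on whichever host is going to run the guest:

    # log in to the shared target from the host
    iscsiadm -m node -T iqn.2017-06.com.example:guest0 -p 192.168.1.100 --login

    # and reference the resulting block device from the guest config
    disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-iqn.2017-06.com.example:guest0-lun-0,xvda,w' ]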

> - blktap / blktap2 

This is deprecated and no longer present in upstream kernels; I don't think
it's worth looking into it.

Roger.

[0] http://docs.linbit.com/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel