
Re: [Xen-devel] [PATCH v2 00/20] VM forking



On Mon, Dec 30, 2019 at 11:43 AM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Tamas,
>
> On 30/12/2019 18:15, Tamas K Lengyel wrote:
> > On Mon, Dec 30, 2019 at 10:59 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> > wrote:
> >>
> >> On Thu, Dec 19, 2019 at 08:58:01AM -0700, Tamas K Lengyel wrote:
> >>> On Thu, Dec 19, 2019 at 2:48 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> >>> wrote:
> >>>>
> >>>> On Wed, Dec 18, 2019 at 11:40:37AM -0800, Tamas K Lengyel wrote:
> >>>>> The following series implements VM forking for Intel HVM guests to 
> >>>>> allow for
> >>>>> the fast creation of identical VMs without the associated high startup 
> >>>>> costs
> >>>>> of booting or restoring the VM from a savefile.
> >>>>>
> >>>>> JIRA issue: https://xenproject.atlassian.net/browse/XEN-89
> >>>>>
> >>>>> The main design goal with this series has been to reduce the time of 
> >>>>> creating
> >>>>> the VM fork as much as possible. To achieve this the VM forking process 
> >>>>> is
> >>>>> split into two steps:
> >>>>>      1) forking the VM on the hypervisor side;
> >>>>>      2) starting QEMU to handle the backend for emulated devices.
> >>>>>
> >>>>> Step 1) involves creating a VM using the new "xl fork-vm" command. The
> >>>>> parent VM is expected to remain paused after forks are created from it 
> >>>>> (which
> >>>>> is different then what process forking normally entails). During this 
> >>>>> forking
> >>>>                 ^ than
> >>>>> operation the HVM context and VM settings are copied over to the new 
> >>>>> forked VM.
> >>>>> This operation is fast and it allows the forked VM to be unpaused and 
> >>>>> to be
> >>>>> monitored and accessed via VMI. Note however that without its device 
> >>>>> model
> >>>>> running (depending on what is executing in the VM) it is bound to
> >>>>> misbehave/crash when it's trying to access devices that would be 
> >>>>> emulated by
> >>>>> QEMU. We anticipate that for certain use-cases this would be an 
> >>>>> acceptable
> >>>>> situation, for example when fuzzing is performed on code 
> >>>>> segments that
> >>>>> don't access such devices.
> >>>>>
> >>>>> Step 2) involves launching QEMU to support the forked VM, which 
> >>>>> requires the
> >>>>> QEMU Xen savefile to be generated manually from the parent VM. This can 
> >>>>> be
> >>>>> accomplished simply by connecting to its QMP socket and issuing the
> >>>>> "xen-save-devices-state" command as documented by QEMU:
> >>>>> https://github.com/qemu/qemu/blob/master/docs/xen-save-devices-state.txt
> >>>>> Once the QEMU Xen savefile is generated the new "xl fork-launch-dm" 
> >>>>> command is
> >>>>> used to launch QEMU and load the specified savefile for it.
> >>>>
> >>>> IMO having two different commands is confusing for the end user, I
> >>>> would rather have something like:
> >>>>
> >>>> xl fork-vm [-d] ...
> >>>>
> >>>> Where '-d' would prevent forking any user-space emulators. I don't
> >>>> thinks there's a need for a separate command to fork the underlying
> >>>> user-space emulators.
> >>>
> >>> Keeping it as two commands allows you to start up the fork and let it
> >>> run immediately and only start up QEMU when you notice it is needed.
> >>> The idea being that you can monitor the kernel and see when it tries
> >>> to do some I/O that would require the QEMU backend. If you combine the
> >>> commands that option goes away.
> >>
> >> I'm not sure I see why, you could still provide a `xl fork-vm [-c]
> >> ...` that would just launch a QEMU instance. End users using xl have
> >> AFAICT no way to tell whether or when a QEMU is needed or not, and
> >> hence the default behavior should be a fully functional one.
> >>
> >> IMO I think fork-vm without any options should do a complete fork of a
> >> VM, rather than a partial one without a device model clone.
> >
> > I understand your point but implementing that is outside the scope of
> > what we are doing right now. There are a lot more steps involved if
> > you want to create a fully functional VM fork with QEMU, for example
> > you also have to create a separate disk so you don't clobber the
> > parent VM's disk. Also, saving the QEMU device state is currently
> > hard-wired into the save/migration operation, so changing that
> > plumbing in libxl is quite involved. I actually found it way easier to
> > just write a script that connects to the socket and saves it to a
> > target file than going through the pain of adjusting libxl. So while
> > this could be implemented at this time, it won't be.
> That's fine not to implement it right now; however, the user interface
> should be able to cater for it.
>
> In this case, I agree with Roger that it is more intuitive to think that
> fork means a complete fork, not a partial one.
>
> You could require the user to always pass that option to not clone the
> device model, and return an error if it is not there.

Just to be clear, I can add the option to the "fork-vm" command to
load the QEMU state with it, effectively combining the "fork-vm" and
"fork-launch-dm" into one. But I still need the separate
"fork-launch-dm" command since in our model we need to be able to
launch the VM and run it without QEMU for a while, only launching QEMU
when it is determined to be necessary. So if that's what you are
asking, sure, I can do that.

But keep in mind that even with this update the "fork-vm" command
would still not produce a "fully functional" VM on its own.
The user still has to produce a new VM config file, create the new
disk, save the QEMU state, etc. So if your concern is that the
"fork-vm" command's name implies it is going to produce a fully
functional VM on its own, I would rather just rename the command,
because by itself it will never create a fully functional VM.

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

