
Re: [Xen-devel] [PATCH v2 00/20] VM forking



On Thu, Jan 9, 2020 at 2:48 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Wed, Jan 08, 2020 at 12:51:35PM -0700, Tamas K Lengyel wrote:
> > On Wed, Jan 8, 2020 at 11:37 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> > wrote:
> > >
> > > On Wed, Jan 08, 2020 at 11:14:46AM -0700, Tamas K Lengyel wrote:
> > > > On Wed, Jan 8, 2020 at 11:01 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> > > > wrote:
> > > > >
> > > > > On Wed, Jan 08, 2020 at 08:32:22AM -0700, Tamas K Lengyel wrote:
> > > > > > On Wed, Jan 8, 2020 at 8:08 AM Roger Pau Monné 
> > > > > > <roger.pau@xxxxxxxxxx> wrote:
> > > > > > > I think you also need something like:
> > > > > > >
> > > > > > > # xl fork-vm --launch-dm late <parent_domid> <fork_domid>
> > > > > > >
> > > > > > > So that a user doesn't need to pass a qemu-save-file?
> > > > > >
> > > > > > This doesn't make much sense to me. To launch QEMU you need the 
> > > > > > config
> > > > > > file to wire things up correctly. Like in order to launch QEMU you
> > > > > > need to tell it the name of the VM, disk path, etc. that are all
> > > > > > contained in the config.
> > > > >
> > > > > You could get all this information from the parent VM, IIRC libxl has
> > > > > a json version of the config. For example for migration there's no
> > > > > need to pass any config file, since the incoming VM can be recreated
> > > > > from the data in the source VM.
> > > > >
> > > >
> > > > But again, creating a fork with the exact config of the parent is not
> > > > possible. Even if the tool would rename the fork on-the-fly as it does
> > > > during the migration, the fork would end up thrashing the parent VM's
> > > > disk and making it impossible to create any additional forks. It would
> > > > also mean that at no point can the original VM be unpaused after the
> > > > forks are gone. I don't see any usecase in which that would make any
> > > > sense at all.
> > >
> > > You could have the disk(s) as read-only and the VM running completely
> > > from RAM. Alpine-linux has (or had) a mode where it was completely
> > > stateless and running from RAM. I think it's fine to require passing a
> > > config file for the time being, we can look at other options
> > > afterwards.
> > >
> >
> > OK, there is that. But I would say that's a fairly niche use-case. You
> > wouldn't have any network access in that fork, no disk, no way to get
> > information in or out beside the serial console.
>
> Why won't the fork have network access?

If you have multiple forks you end up with MAC-address collisions. I
don't see the point of creating a single fork while the parent
remains paused - you could just keep running the parent, since you
gain nothing from the fork. The main reason to create a fork is to
create many of them.
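As a minimal sketch of how a tooling layer could avoid that collision, a helper could derive a unique, locally-administered MAC for each fork from the parent's base MAC and a fork index (hypothetical helper, not part of this patch series):

```python
def fork_mac(base_mac: str, fork_idx: int) -> str:
    """Derive a per-fork MAC: set the locally-administered bit on the
    first octet and vary the last octet by the fork index (sketch)."""
    octets = [int(o, 16) for o in base_mac.split(":")]
    octets[0] |= 0x02                           # locally-administered bit
    octets[5] = (octets[5] + fork_idx) % 256    # unique last octet per fork
    return ":".join(f"{o:02X}" for o in octets)

# e.g. fork_mac("00:07:5B:BB:00:01", 1) -> "02:07:5B:BB:00:02"
```

Something along these lines would let xl auto-name and auto-MAC forks instead of requiring a full config per fork.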

>
> If the parent VM is left paused the fork should behave like a local
> migration regarding network access, and thus be fully functional.
>
> > So I wouldn't want
> > that setup to be considered the default. If someone wants to that I
> > would rather have an option that tells xl to automatically name the
> > fork for you instead of the other way around.
>
> Ack, I just want to make sure that whatever interface we end up using
> is designed taking into account other use cases apart from the one at
> hand.
>
> On an unrelated note, does forking work when using PV interfaces?

As I recall, yes. In my Linux tests these were the config options I
used that worked with the fork. I'm not sure whether the vif device
is PV or emulated by default:

vnc=1
vnclisten="0.0.0.0:1"

usb=1
usbdevice=['tablet']

disk = ['phy:/dev/t0vg/debian-stretch,xvda,w']
vif = ['bridge=xenbr0,mac=00:07:5B:BB:00:01']
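For comparison, a fork's own config would need at least a unique name, a unique MAC, and a disk that doesn't write through to the parent's image. An illustrative sketch, assuming a qcow2 overlay backed by the parent's disk (the names and paths here are hypothetical, not from the patch series):

```
# Hypothetical per-fork config sketch
name = "debian-stretch-fork1"

vnc = 1
vnclisten = "0.0.0.0:2"

# Overlay image so fork writes don't thrash the parent's disk
disk = ['format=qcow2,vdev=xvda,target=/var/lib/xen/fork1-overlay.qcow2']

# Locally-administered MAC, unique per fork, to avoid collisions
vif = ['bridge=xenbr0,mac=02:07:5B:BB:00:02']
```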

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

