
Re: [PATCH v18 2/2] tools/libxl: VM forking toolstack side



Tamas K Lengyel writes ("[PATCH v18 2/2] tools/libxl: VM forking toolstack 
side"):
> Add necessary bits to implement "xl fork-vm" commands. The command allows the
> user to specify how to launch the device model allowing for a late-launch 
> model
> in which the user can execute the fork without the device model and decide to
> only later launch it.

Hi.

Sorry to be so late in reviewing this.  I will divert my main
attention to the API elements...

> +=item B<fork-vm> [I<OPTIONS>] I<domain-id>
> +
> +Create a fork of a running VM.  The domain will be paused after the operation
> +and remains paused while forks of it exist.

Do you mean "must remain paused" ?  And "The original domain" rather
than "The domain" ?

> +B<OPTIONS>
> +
> +=over 4
> +
> +=item B<-p>
> +
> +Leave the fork paused after creating it.

By default the fork runs right away, then, I take it.
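For concreteness, the usage I am inferring would be something like this (the domid and invocation here are my guesses, not taken from the patch):

```
# Fork domain 1, leaving the new fork paused:
xl fork-vm -p 1
# Without -p, the fork presumably starts running immediately:
xl fork-vm 1
```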

> +=item B<--launch-dm>
> +
> +Specify whether the device model (QEMU) should be launched for the fork. Late
> +launch allows to start the device model for an already running fork.

It's not clear to me whether this launches the DM for an existing
fork, or specifies, when forking, that the DM should be run ?

Do you really mean that you can run a fork for a while with no DM ?
How does that work ?

Also, you seem not to have documented the launch-dm operation ?
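For the record, the two-stage workflow I *think* is intended is the
following (the "late" spelling, domids and filenames are all my
invention; only "--launch-dm no" appears in the doc):

```
# Create the fork with no device model and let it run for a while:
xl fork-vm --launch-dm no --max-vcpus 2 -p 1
xl unpause 2
# Later, attach a device model to the already-running fork:
xl fork-vm --launch-dm late -C fork.cfg -Q parent-qemu.save 2
```

If that is the intent, it wants spelling out in the manpage.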

> +=item B<-C>
> +
> +The config file to use when launching the device model.  Currently required 
> when
> +launching the device model.  Most config settings MUST match the parent 
> domain
> +exactly, only change VM name, disk path and network configurations.

This is a libxl config file, right ?
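If so, I take it the expectation is an xl cfg fragment like the
following, where only name, disk and vif differ from the parent (all
values here invented by me for illustration):

```
# Must match the parent domain exactly, per the doc above:
type   = "hvm"
vcpus  = 2
memory = 2048
# The only settings the doc says may differ from the parent:
name = "debian-fork-1"
disk = [ "format=qcow2,vdev=xvda,target=/images/debian-fork-1.qcow2" ]
vif  = [ "bridge=xenbr0,mac=00:16:3e:aa:bb:cc" ]
```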

> +=item B<-Q>
> +
> +The path to the qemu save file to use when launching the device model.  
> Currently
> +required when launching the device model.

Where would the user get one of these ?

I think this question has no good answer and this reveals a problem
with the API...

> +=item B<--fork-reset>
> +
> +Perform a reset operation of an already running fork.  Note that resetting 
> may
> +be less performant then creating a new fork depending on how much memory the
> +fork has deduplicated during its runtime.

What is the semantic effect of a reset ?

> +=item B<--max-vcpus>
> +
> +Specify the max-vcpus matching the parent domain when not launching the dm.

What ?  This makes little sense to me.  You specify fork-vm
--max-vcpus and it changes the parent's max-vcpus ??

> +=item B<--allow-iommu>
> +
> +Specify to allow forking a domain that has IOMMU enabled. Only compatible 
> with
> +forks using --launch-dm no.

Are there not some complex implications here ?  Maybe this doc needs
a caveat.

> +int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config 
> *d_config,
> +                                uint32_t domid,
> +                                const libxl_asyncprogress_how 
> *aop_console_how)
> +                                LIBXL_EXTERNAL_CALLERS_ONLY;
> +
> +int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
> +                            LIBXL_EXTERNAL_CALLERS_ONLY;
>  #endif
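For what it's worth, my reading of these prototypes is that an
external caller would drive them roughly like this (an uncompiled
sketch based only on the declarations quoted above):

```c
#include <libxl.h>

/* Roll an existing fork back to its parent's state. */
static int reset_fork(libxl_ctx *ctx, uint32_t fork_domid)
{
    return libxl_domain_fork_reset(ctx, fork_domid);
}

/* Late-launch QEMU for an already-running fork; NULL here means
 * no async console-progress callback. */
static int late_launch_dm(libxl_ctx *ctx, uint32_t fork_domid,
                          libxl_domain_config *d_config)
{
    return libxl_domain_fork_launch_dm(ctx, d_config, fork_domid, NULL);
}
```

If that is not the intended calling pattern, the header could use a
comment saying what is.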

I'm afraid I found the code very hard to review.  In particular:

> -    if (!soft_reset) {
> -        struct xen_domctl_createdomain create = {
> -            .ssidref = info->ssidref,
> -            .max_vcpus = b_info->max_vcpus,
> -            .max_evtchn_port = b_info->event_channels,
> -            .max_grant_frames = b_info->max_grant_frames,
> -            .max_maptrack_frames = b_info->max_maptrack_frames,

I think this contains a lot of code motion.  There is probably some
other refactoring too.


Can you please split this up into several patches ?  Code motion
should occur in patches that do nothing else.  Refactoring should be
broken down into pieces as small as possible, and separated from the
addition of new functionality.  So most of the patches should be
annotated "no functional change".

Thanks,
Ian.



 

