
Re: [Xen-devel] [PATCH v9 6/9] libxl/xl: deprecate the build_info->cpumap field



On Wed, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> as, thanks to the previous change ("libxl/xl: push VCPU affinity
> pinning down to libxl"), we now have an array of libxl_bitmap-s
> that can be used to transfer to libxl the vcpu (hard) affinity
> of each vcpu of the domain. Therefore, the cpumap field is no
> longer necessary: if we want all the vcpus to have the same
> affinity, we just put it in all the elements of the array.
> 
> This makes the libxl code simpler and easier to understand
> and maintain (only one place from which to read the affinity), and
> it does not complicate things much on the xl side, which is why
> we go for it.
> 
> Another benefit is that, by unifying the parsing (at the xl
> level) and the place where the information is consumed and the
> affinity are actually set (at the libxl level), it becomes
> possible to do things like:
> 
>   cpus = ["3-4", "2-6"]
> 
> meaning we want vcpu 0 to be pinned to pcpu 3,4 and vcpu 1 to
> be pinned to pcpu 2,3,4,5,6. Before this change, in fact, the
> list variant (["xx", "yy"]) supported only single values.
> (Of course, the old [2, 3] syntax, i.e. without quotes, continues
> to work, although it's not possible to specify ranges with it.)
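For illustration, here is a minimal, self-contained sketch of the kind
of range-list parsing described above. It is not xl's actual parser:
parse_cpu_list is a made-up helper, and plain uint64_t masks stand in
for libxl_bitmap.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse "N", "N-M" and "^N" terms, e.g. "3-8,^5" -> cpus 3,4,6,7,8. */
    static uint64_t parse_cpu_list(const char *spec)
    {
        uint64_t mask = 0;
        char *copy = strdup(spec), *tok, *save;

        for (tok = strtok_r(copy, ",", &save); tok;
             tok = strtok_r(NULL, ",", &save)) {
            int negate = (*tok == '^');

            if (negate)
                tok++;
            long lo = strtol(tok, &tok, 10);
            long hi = (*tok == '-') ? strtol(tok + 1, NULL, 10) : lo;
            for (long c = lo; c <= hi && c < 64; c++) {
                if (negate)
                    mask &= ~(1ULL << c);  /* "^N": exclude cpu N */
                else
                    mask |= 1ULL << c;     /* include cpus lo..hi */
            }
        }
        free(copy);
        return mask;
    }

    int main(void)
    {
        const char *cpus[] = { "3-4", "2-6" };  /* the example above */

        for (int v = 0; v < 2; v++)
            printf("vcpu %d -> 0x%llx\n", v,
                   (unsigned long long)parse_cpu_list(cpus[v]));
        return 0;
    }

Running it prints 0x18 (cpus 3,4) for vcpu 0 and 0x7c (cpus 2..6) for
vcpu 1, matching the pinning described in the changelog.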
> 
> IN SUMMARY: although it is still there, and still honoured for
> backward compatibility reasons, the cpumap field in build_info
> should not be used any longer. The vcpu_hard_affinity array is
> what should be used instead.
> 
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
> Changes from v8:
>  * don't get rid of b_info->cpumap handling, so old apps
>    continue to work, as requested during review;
>  * changelog and code comments updated accordingly.
> ---
>  docs/man/xl.cfg.pod.5       |    8 +++---
>  tools/libxl/libxl_dom.c     |   10 ++++++-
>  tools/libxl/libxl_types.idl |    7 ++++-
>  tools/libxl/xl_cmdimpl.c    |   61 +++++++++++++++++--------------------------
>  4 files changed, 43 insertions(+), 43 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index c087cbc..af48622 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -143,11 +143,11 @@ Combining this with "all" is also possible, meaning "all,^nodes:1"
>  results in all the vcpus of the guest running on all the cpus on the
>  host, except for the cpus belonging to the host NUMA node 1.
>  
> -=item ["2", "3"] (or [2, 3])
> +=item ["2", "3-8,^5"]
>  
> -To ask for specific vcpu mapping. That means (in this example), vcpu #0
> -of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
> -run on cpu #3 of the host.
> +To ask for specific vcpu mapping. That means (in this example), vcpu 0
> +of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
> +run on cpus 3,4,6,7,8 of the host.

Why is deprecating a field in the libxl API changing the xl
configuration file syntax?

> @@ -261,6 +262,13 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>              return rc;
>      }
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> +    /*
> +     * info->cpumap is DEPRECATED, but we still want old applications
> +     * that may be using it to continue working.
> +     */
> +    if (!libxl_bitmap_is_full(&info->cpumap))

The caller is expected to initialise this unused field to a non-default
state? That doesn't sound right. Did you mean !is_empty?

TBH I think you'd be better off just silently ignoring cpumap if the new
thing is set.

Or maybe converting the cpumap into the new array so the rest of the
libxl internals only needs to deal with one.
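A minimal sketch of that conversion idea, under the same
simplifications as the earlier example (plain uint64_t masks instead
of libxl_bitmap; the struct layout and canonicalise_affinity are made
up for illustration, not libxl code):

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_VCPUS 4

    /* Simplified stand-in for libxl_domain_build_info. */
    struct build_info {
        uint64_t cpumap;                        /* DEPRECATED */
        uint64_t vcpu_hard_affinity[MAX_VCPUS];
        int num_vcpu_hard_affinity;             /* 0 => array not used */
    };

    /* If only the deprecated cpumap was set, replicate it into every
     * element of the array; if the array is set, ignore the cpumap.
     * Afterwards the rest of the library reads one field only. */
    static void canonicalise_affinity(struct build_info *info, int max_vcpus)
    {
        if (info->num_vcpu_hard_affinity)
            return;                             /* new interface wins */

        fprintf(stderr, "warning: cpumap is deprecated, "
                "use vcpu_hard_affinity instead\n");
        for (int i = 0; i < max_vcpus; i++)
            info->vcpu_hard_affinity[i] = info->cpumap;
        info->num_vcpu_hard_affinity = max_vcpus;
    }

    int main(void)
    {
        struct build_info bi = { .cpumap = 0x7c /* cpus 2-6 */ };

        canonicalise_affinity(&bi, MAX_VCPUS);
        for (int i = 0; i < bi.num_vcpu_hard_affinity; i++)
            printf("vcpu %d -> 0x%llx\n", i,
                   (unsigned long long)bi.vcpu_hard_affinity[i]);
        return 0;
    }

Canonicalising once on entry would also settle the precedence question
raised below: the array, when set, simply wins.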

> +        LOG(WARN, "cpumap field of libxl_domain_build_info is DEPRECATED. "
> +                  "Please, use the vcpu_hard_affinity array instead");
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
>                                 &info->cpumap, NULL);
>  
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 05978d7..0b3e4e9 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -297,7 +297,12 @@ libxl_domain_sched_params = Struct("domain_sched_params",[
>  libxl_domain_build_info = Struct("domain_build_info",[
>      ("max_vcpus",       integer),
>      ("avail_vcpus",     libxl_bitmap),
> -    ("cpumap",          libxl_bitmap),
> +    ("cpumap",          libxl_bitmap), # DEPRECATED!
> +    # The cpumap field above has been deprecated by the introduction of the
> +    # vcpu_hard_affinity array. It is not removed and it is still honoured, for
> +    # API stability and backward compatibility reasons, but should not be used
> +    # any longer. The vcpu_hard_affinity array is what should be used instead,
> +    # to set the hard affinity of the various vCPUs.

This comment needs to talk about the precedence between the two fields
in the event that both are present.

WRT the structure of the series: All of the libxl deprecation stuff here
could be squashed into the previous patch which added the new field.
That would make more sense since otherwise you have a middle state where
both fields are present and valid and it is ill defined what is what.

All the xl stuff could then come next as a "move away from deprecated
interface" patch.

As it is each patch seems to do half of each thing. I'm not entirely
sure what the intermediate state is supposed to be.

> @@ -840,42 +839,30 @@ static void parse_config_data(const char *config_source,
>                  fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
>                  exit(1);
>              }
> -            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
> +            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);

What do these sorts of changes have to do with the deprecation of
another field?

It looks to me like the previous patch has just done something wrong and
you are fixing it here for some reason.


Ian.

