
Re: [Xen-devel] [PATCH v9 5/9] libxl/xl: push VCPU affinity pinning down to libxl



On mer, 2014-06-18 at 16:28 +0200, Dario Faggioli wrote:
> From: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
Oh and, Wei, one more thing about this patch.

While preparing v10, I ended up making quite a few changes to the
patch itself. It still accomplishes its goal, and it's still based on
yours, of course, but it now looks a bit different.

I'm attaching it to this email; let me know what you want me to do about
authorship. I'm totally fine with keeping the `From: Wei Liu' tag, but
I'll keep it only if you are too... I don't want you to be blamed for
my bugs! :-P

To get an idea of what happened, have a look at the 'Changes from v9'
section. It's nothing too mind-blowing, but the code does look
different.

[sorry for the kind-of top-posting. I'm leaving the content of the
original e-mail here, so that one can easily compare the old (in the
message body) and the new (attached) versions of the patch]

Regards,
Dario

> This patch introduces an array of libxl_bitmap called "vcpu_hard_affinity"
> in the libxl IDL, to preserve the VCPU to PCPU mapping. This is necessary
> for libxl to retain all the information needed to construct a domain.
> 
> Also define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY in libxl.h to
> mark the change in API.
> 
> This patch was originally part of Wei's series about pushing as much
> domain configuration information as possible down to libxl, rather
> than keeping it in xl. See here for more details:
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01026.html
>   http://lists.xen.org/archives/html/xen-devel/2014-06/msg01031.html
> 
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
> Changes from v8:
>  * typo in the changelog.
> 
> Changes from v7:
>  This patch is new in this series; it comes from Wei's series about
>  pushing domain config information down to libxl. It is being
>  incorporated here to reduce the inter-dependencies between the two
>  patch series as much as possible, i.e., for ease of development
>  and review.
> ---
>  tools/libxl/libxl.h         |   15 ++++++++++
>  tools/libxl/libxl_dom.c     |   14 +++++++++
>  tools/libxl/libxl_types.idl |    1 +
>  tools/libxl/xl_cmdimpl.c    |   65 ++++++++++++-------------------------------
>  4 files changed, 48 insertions(+), 47 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index c2b143b..663abe2 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -338,6 +338,21 @@
>  #endif
>  
>  /*
> + * LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY
> + *
> + * If this is defined, then the libxl_domain_build_info structure will
> + * contain vcpu_hard_affinity, an array of libxl_bitmap that contains
> + * the necessary information to set the hard affinity of each VCPU to
> + * a set of PCPUs. Libxl will try to pin VCPUs to PCPUs according to
> + * this list.
> + *
> + * The number of libxl_bitmaps in the array equals the maximum number
> + * of VCPUs. The size of each bitmap is computed based on the maximum
> + * number of PCPUs.
> + */
> +#define LIBXL_HAVE_BUILDINFO_VCPU_HARD_AFFINITY_ARRAY 1
> +
> +/*
>   * LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST
>   *
>   * If this is defined, then the libxl_domain_build_info structure will
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index a90a8d5..484ad84 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -264,6 +264,20 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
>                                 &info->cpumap, NULL);
>  
> +    /* If we have the vcpu hard affinity list, honour it */
> +    if (info->num_vcpu_hard_affinity) {
> +        int i;
> +
> +        for (i = 0; i < info->num_vcpu_hard_affinity; i++) {
> +            if (libxl_set_vcpuaffinity(ctx, domid, i,
> +                                       &info->vcpu_hard_affinity[i],
> +                                       NULL)) {
> +                LOG(ERROR, "setting affinity failed on vcpu `%d'", i);
> +                return ERROR_FAIL;
> +            }
> +        }
> +    }
> +
>      if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
>          LIBXL_MAXMEM_CONSTANT) < 0) {
>          LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 2da58e2..05978d7 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -299,6 +299,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      ("avail_vcpus",     libxl_bitmap),
>      ("cpumap",          libxl_bitmap),
>      ("nodemap",         libxl_bitmap),
> +    ("vcpu_hard_affinity", Array(libxl_bitmap, "num_vcpu_hard_affinity")),
>      ("numa_placement",  libxl_defbool),
>      ("tsc_mode",        libxl_tsc_mode),
>      ("max_memkb",       MemKB),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 516a4d7..ac603c8 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -88,9 +88,6 @@ xlchild children[child_max];
>  static const char *common_domname;
>  static int fd_lock = -1;
>  
> -/* Stash for specific vcpu to pcpu mappping */
> -static int *vcpu_to_pcpu;
> -
>  static const char savefileheader_magic[32]=
>      "Xen saved domain, xl format\n \0 \r";
>  
> @@ -825,6 +822,7 @@ static void parse_config_data(const char *config_source,
>          b_info->max_vcpus = l;
>  
>      if (!xlu_cfg_get_list (config, "cpus", &cpus, 0, 1)) {
>          int n_cpus = 0;
> +        b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
>  
>          if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
> @@ -832,21 +830,19 @@ static void parse_config_data(const char *config_source,
>              exit(1);
>          }
>  
> -        /* Prepare the array for single vcpu to pcpu mappings */
> -        vcpu_to_pcpu = xmalloc(sizeof(int) * b_info->max_vcpus);
> -        memset(vcpu_to_pcpu, -1, sizeof(int) * b_info->max_vcpus);
> +        b_info->vcpu_hard_affinity =
> +            xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));
> +
> +        for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
> +            libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
> +            if (libxl_cpu_bitmap_alloc(ctx,
> +                                       &b_info->vcpu_hard_affinity[i], 0)) {
> +                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", 
> i);
> +                exit(1);
> +            }
> +            libxl_bitmap_set_any(&b_info->vcpu_hard_affinity[i]);
> +        }
>  
> -        /*
> -         * Idea here is to let libxl think all the domain's vcpus
> -         * have cpu affinity with all the pcpus on the list.
> -         * It is then us, here in xl, that matches each single vcpu
> -         * to its pcpu (and that's why we need to stash such info in
> -         * the vcpu_to_pcpu array now) after the domain has been created.
> -         * Doing it like this saves the burden of passing to libxl
> -         * some big array hosting the single mappings. Also, using
> -         * the cpumap derived from the list ensures memory is being
> -         * allocated on the proper nodes anyway.
> -         */
>          libxl_bitmap_set_none(&b_info->cpumap);
>          while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
>              i = atoi(buf);
> @@ -855,12 +851,14 @@ static void parse_config_data(const char *config_source,
>                  exit(1);
>              }
>              libxl_bitmap_set(&b_info->cpumap, i);
> -            if (n_cpus < b_info->max_vcpus)
> -                vcpu_to_pcpu[n_cpus] = i;
> +            if (n_cpus < b_info->max_vcpus) {
> +                libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[n_cpus]);
> +                libxl_bitmap_set(&b_info->vcpu_hard_affinity[n_cpus], i);
> +            }
>              n_cpus++;
>          }
>  
> -        /* We have a cpumap, disable automatic placement */
> +        /* We have a list of cpumaps, disable automatic placement */
>          libxl_defbool_set(&b_info->numa_placement, false);
>      }
>      else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
> @@ -2217,33 +2215,6 @@ start:
>      if ( ret )
>          goto error_out;
>  
> -    /* If single vcpu to pcpu mapping was requested, honour it */
> -    if (vcpu_to_pcpu) {
> -        libxl_bitmap vcpu_cpumap;
> -
> -        ret = libxl_cpu_bitmap_alloc(ctx, &vcpu_cpumap, 0);
> -        if (ret)
> -            goto error_out;
> -        for (i = 0; i < d_config.b_info.max_vcpus; i++) {
> -
> -            if (vcpu_to_pcpu[i] != -1) {
> -                libxl_bitmap_set_none(&vcpu_cpumap);
> -                libxl_bitmap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
> -            } else {
> -                libxl_bitmap_set_any(&vcpu_cpumap);
> -            }
> -            if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap, NULL)) {
> -                fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", 
> i);
> -                libxl_bitmap_dispose(&vcpu_cpumap);
> -                free(vcpu_to_pcpu);
> -                ret = ERROR_FAIL;
> -                goto error_out;
> -            }
> -        }
> -        libxl_bitmap_dispose(&vcpu_cpumap);
> -        free(vcpu_to_pcpu); vcpu_to_pcpu = NULL;
> -    }
> -
>      ret = libxl_userdata_store(ctx, domid, "xl",
>                                      config_data, config_len);
>      if (ret) {
> 
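
For reference, a minimal sketch of how a toolstack could fill in the new
field, using only calls that appear in the patch above (error handling
and the usual libxl setup are omitted; xmalloc is xl's internal
allocation helper, and the one-PCPU-per-VCPU policy is just an example):

    /* Sketch: pin each VCPU i to PCPU i via the new IDL array.
     * Assumes ctx and b_info are already initialised, and that
     * b_info->max_vcpus has been set. */
    int i;

    b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
    b_info->vcpu_hard_affinity =
        xmalloc(b_info->num_vcpu_hard_affinity * sizeof(libxl_bitmap));

    for (i = 0; i < b_info->num_vcpu_hard_affinity; i++) {
        libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
        if (libxl_cpu_bitmap_alloc(ctx, &b_info->vcpu_hard_affinity[i], 0))
            exit(1);
        /* Exactly one bit set per bitmap: vcpu i <--> pcpu i. */
        libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[i]);
        libxl_bitmap_set(&b_info->vcpu_hard_affinity[i], i);
    }

libxl__build_pre() will then honour each of these bitmaps by calling
libxl_set_vcpuaffinity() on the corresponding VCPU when the domain is
built.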

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Attachment: libxl-xl-vcpu-1to1-maping-in-libxl_from-Wei.patch
Description: Text Data


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

