
Re: [Xen-devel] [PATCH] libxl: extract and save affinity maps from hypervisor



Wei Liu writes ("[PATCH] libxl: extract and save affinity maps from hypervisor"):
> This is required to retain affinity setting across save/restore and
> migration.

Does the hypervisor or libxl invent a default affinity map at some
point ?  If so that default calculation needs to be re-done for the
new host.  ISTR some code to do NUMA affinity stuff by default.  CCing
Dario who will hopefully save me digging into the code to answer that
question :-).

> diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
> index 3377bba..24fac9b 100644
> --- a/tools/libxl/libxl_domain.c
> +++ b/tools/libxl/libxl_domain.c
> @@ -1603,16 +1603,18 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
>  
>      /* VCPUs */
>      {

[reordered here:]

> +        libxl_domain_build_info *b_info = &d_config->b_info;

> -        libxl_bitmap *map = &d_config->b_info.avail_vcpus;
> +        libxl_bitmap *map = &b_info->avail_vcpus;
> -        unsigned int max_vcpus = d_config->b_info.max_vcpus;
> +        unsigned int max_vcpus = b_info->max_vcpus;

Can you please split out this, and the changes inside the function,
into a no-functional-change pre-patch ?  I got quite confused for a
moment by these changes mixed in with the others.
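
Something like this first, perhaps (sketch only, reusing the hunk quoted
above; untested):

    --- a/tools/libxl/libxl_domain.c
    +++ b/tools/libxl/libxl_domain.c
    @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,

         /* VCPUs */
         {
    -        libxl_bitmap *map = &d_config->b_info.avail_vcpus;
    -        unsigned int max_vcpus = d_config->b_info.max_vcpus;
    +        libxl_domain_build_info *b_info = &d_config->b_info;
    +        libxl_bitmap *map = &b_info->avail_vcpus;
    +        unsigned int max_vcpus = b_info->max_vcpus;

with the other d_config->b_info uses inside the block converted to b_info
in that same patch, and the affinity-map changes then applied on top.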

> +        /* Affinity maps */
> +
> +#define REALLOC_AFFINITY_MAP(n)                                             \
> +        for (i = 0; i < b_info->num_vcpu_ ## n ## _affinity; i++) {         \
> +            libxl_bitmap *m = &b_info->vcpu_ ## n ## _affinity[i];          \
> +            libxl_bitmap_dispose(m);                                        \
> +            libxl_bitmap_init(m);                                           \
> +            libxl_cpu_bitmap_alloc(CTX, m, 0);                              \
> +        }                                                                   \
> +        b_info->vcpu_ ## n ## _affinity =                                   \
> +            libxl__realloc(NOGC, b_info->vcpu_ ## n ## _affinity,           \
> +                    max_vcpus * sizeof(b_info->vcpu_ ## n ## _affinity[0]));\
> +        for (i = b_info->num_vcpu_ ## n ## _affinity; i < max_vcpus; i++) { \
> +            libxl_bitmap *m = &b_info->vcpu_ ## n ## _affinity[i];          \
> +            libxl_bitmap_init(m);                                           \
> +            libxl_cpu_bitmap_alloc(CTX, m, 0);                              \
> +        }                                                                   \
> +        b_info->num_vcpu_ ## n ## _affinity = max_vcpus;
> +
> +        REALLOC_AFFINITY_MAP(hard);
> +        REALLOC_AFFINITY_MAP(soft);

I don't think this needs to be a macro, does it ?  The only thing the
parameter selects is which vcpu_<n>_affinity member to operate on, so
you could make this into a function.
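
Something along these lines, perhaps (untested sketch; the helper's name
and exact signature are mine, just for illustration):

    /* Dispose, reallocate and reinitialise one affinity array so that
     * it ends up holding exactly max_vcpus full-size cpu bitmaps. */
    static void realloc_affinity_map(libxl__gc *gc, libxl_bitmap **affinity,
                                     int *num_affinity, unsigned int max_vcpus)
    {
        int i;

        for (i = 0; i < *num_affinity; i++) {
            libxl_bitmap *m = &(*affinity)[i];
            libxl_bitmap_dispose(m);
            libxl_bitmap_init(m);
            libxl_cpu_bitmap_alloc(CTX, m, 0);
        }
        *affinity = libxl__realloc(NOGC, *affinity,
                                   max_vcpus * sizeof((*affinity)[0]));
        for (i = *num_affinity; i < max_vcpus; i++) {
            libxl_bitmap *m = &(*affinity)[i];
            libxl_bitmap_init(m);
            libxl_cpu_bitmap_alloc(CTX, m, 0);
        }
        *num_affinity = max_vcpus;
    }

and then at the call site:

        realloc_affinity_map(gc, &b_info->vcpu_hard_affinity,
                             &b_info->num_vcpu_hard_affinity, max_vcpus);
        realloc_affinity_map(gc, &b_info->vcpu_soft_affinity,
                             &b_info->num_vcpu_soft_affinity, max_vcpus);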

If you do keep it as a macro: `n' is a poor name for what I would call
`which' or `softhard' or something.  And I prefer no spaces around ##
(and that seems to be the dominant style in libxl).

Thanks,
Ian.
