To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Fri, 17 Sep 2010 10:00:37 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 17 Sep 2010 02:01:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C9301DB.4050009@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <4C9301DB.4050009@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

On Fri, 2010-09-17 at 06:51 +0100, Juergen Gross wrote:
> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
> 
> To be able to support arbitrary numbers of physical cpus it was necessary
> to include the size of cpumaps in the xc-interfaces for cpu pools.
> These were:
>   definition of xc_cpupoolinfo_t
>   xc_cpupool_getinfo()
>   xc_cpupool_freeinfo()
> 
> diff -r d978675f3d53 tools/libxc/xc_cpupool.c
> --- a/tools/libxc/xc_cpupool.c  Thu Sep 16 18:29:26 2010 +0100
> +++ b/tools/libxc/xc_cpupool.c  Fri Sep 17 07:42:30 2010 +0200
> @@ -34,6 +34,20 @@ static int do_sysctl_save(xc_interface *
>      return ret;
>  }
>  
> +static int get_cpumap_size(xc_interface *xch)
> +{
> +    static int cpumap_size = 0;
> +    xc_physinfo_t physinfo;
> +
> +    if ( cpumap_size )
> +        return cpumap_size;
> +
> +    if ( !xc_physinfo(xch, &physinfo) )
> +        cpumap_size = (physinfo.max_cpu_id + 8) / 8;

The + 8 / 8 thing still jumps out at me every time I look at it. Perhaps
the intention (rounding the number of cpus up to whole bytes) would be
clearer if written as:
        int nr_cpus = physinfo.max_cpu_id + 1;
        cpumap_size = (nr_cpus + 7) / 8;
??
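(For example, with max_cpu_id == 9, i.e. 10 cpus, both forms give
(9 + 8) / 8 == (10 + 7) / 8 == 2 bytes, so the behaviour is unchanged;
only the intent becomes clearer.)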

If xc_physinfo fails (which seems like a reasonably fatal type of error)
then this function returns 0 but the caller does not seem to check for
this case.
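
Purely as an illustration (not a tested patch, and keeping the
xc_physinfo() calling convention used in the hunk above), something
along these lines would cover both points:

        static int get_cpumap_size(xc_interface *xch)
        {
            static int cpumap_size = 0;
            xc_physinfo_t physinfo;
            int nr_cpus;

            if ( cpumap_size )
                return cpumap_size;

            /* xc_physinfo() failing is fatal for our purposes, so don't
             * hide it: return 0 and let the caller bail out. */
            if ( xc_physinfo(xch, &physinfo) != 0 )
                return 0;

            nr_cpus = physinfo.max_cpu_id + 1;
            cpumap_size = (nr_cpus + 7) / 8;    /* bytes, rounded up */

            return cpumap_size;
        }

(with callers then treating a 0 return as an error.)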

> +xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch, 
> +                       uint32_t poolid)
>  {
> [...]
> +    local_size = get_cpumap_size(xch);
> +    cpumap_size = (local_size + 7) / 8;

local_size has already been rounded up in get_cpumap_size. Do we really
need to do it again?

> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;

Why do we need both "cpumap_size * 8" and local_size additional bytes
here? Both contain the number of bytes necessary to contain a cpumap
bitmask and in fact I suspect they are both equal at this point (see
point about rounding above).
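
Taking the two points above together, I would have expected something
closer to the following (again only a sketch, assuming the cpumap is the
only variable-sized data appended to the structure and that the usual
PERROR/return NULL error path is wanted):

        local_size = get_cpumap_size(xch);
        if ( !local_size )
        {
            PERROR("Could not get the size of the physical cpu map");
            return NULL;
        }

        /* local_size is already the rounded-up number of bytes needed
         * for one cpumap, so no second rounding and no separate
         * cpumap_size term. */
        size = sizeof(xc_cpupoolinfo_t) + local_size;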

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel