
Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Fri, 17 Sep 2010 11:20:42 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 17 Sep 2010 02:21:44 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 09/17/10 11:00, Ian Campbell wrote:
> On Fri, 2010-09-17 at 06:51 +0100, Juergen Gross wrote:
>> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
>>
>> To be able to support arbitrary numbers of physical cpus it was
>> necessary to include the size of cpumaps in the xc-interfaces for
>> cpu pools. These were:
>>    definition of xc_cpupoolinfo_t
>>    xc_cpupool_getinfo()
>>    xc_cpupool_freeinfo()
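
(Purely for illustration, the shape this takes is an info structure that
carries its own cpumap size; the type and field names below are a sketch,
not necessarily what the attached patch uses:)

#include <stdint.h>

/* Sketch only: a pool info structure that carries the size of its cpumap
 * explicitly, so callers no longer assume a fixed number of physical cpus. */
typedef struct {
    uint32_t cpupool_id;     /* pool identifier                   */
    uint32_t sched_id;       /* scheduler running in this pool    */
    uint32_t n_dom;          /* number of domains in the pool     */
    int      cpumap_size;    /* valid bytes in cpumap[]           */
    uint64_t *cpumap;        /* bitmap, one bit per physical cpu  */
} cpupoolinfo_sketch_t;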

>> diff -r d978675f3d53 tools/libxc/xc_cpupool.c
>> --- a/tools/libxc/xc_cpupool.c  Thu Sep 16 18:29:26 2010 +0100
>> +++ b/tools/libxc/xc_cpupool.c  Fri Sep 17 07:42:30 2010 +0200
>> @@ -34,6 +34,20 @@ static int do_sysctl_save(xc_interface *
>>       return ret;
>>   }
>>
>> +static int get_cpumap_size(xc_interface *xch)
>> +{
>> +    static int cpumap_size = 0;
>> +    xc_physinfo_t physinfo;
>> +
>> +    if ( cpumap_size )
>> +        return cpumap_size;
>> +
>> +    if ( !xc_physinfo(xch,&physinfo) )
>> +        cpumap_size = (physinfo.max_cpu_id + 8) / 8;

> The + 8 / 8 thing still jumps out at me every time I look at it. Perhaps
> the intention would be clearer if written as:
>         int nr_cpus = physinfo.max_cpu_id + 1;
>         cpumap_size = (nr_cpus + 7) / 8;

I modified it to make it more clear.

> If xc_physinfo fails (which seems like a reasonably fatal type of error)
> then this function returns 0 but the caller does not seem to check for
> this case.

Okay, test added.
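
Something along these lines, as a sketch of the intent only; the attached
patch is authoritative:

static int get_cpumap_size(xc_interface *xch)
{
    static int cpumap_size = 0;
    xc_physinfo_t physinfo;
    int nr_cpus;

    if ( cpumap_size )
        return cpumap_size;

    /* xc_physinfo() returns 0 on success; on failure report 0 so the
     * caller can bail out instead of allocating a zero-sized map.     */
    if ( xc_physinfo(xch, &physinfo) != 0 )
        return 0;

    nr_cpus = physinfo.max_cpu_id + 1;   /* max_cpu_id is the highest id, not a count */
    cpumap_size = (nr_cpus + 7) / 8;     /* bytes needed, one bit per cpu             */

    return cpumap_size;
}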


>> +xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
>> +                       uint32_t poolid)
>>   {
> [...]
>> +    local_size = get_cpumap_size(xch);
>> +    cpumap_size = (local_size + 7) / 8;

> local_size has already been rounded up in get_cpumap_size. Do we really
> need to do it again?

I've made it clearer that this is rounding up to whole uint64 elements.
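
For concreteness (illustrative numbers, not from the patch): local_size is a
byte count, cpumap_size the number of uint64 elements needed to hold those
bytes:

    local_size  = get_cpumap_size(xch);   /* e.g. 33 cpus -> 5 bytes              */
    cpumap_size = (local_size + 7) / 8;   /* 5 bytes -> 1 uint64 (rounded up)     */
                                          /* e.g. 72 cpus -> 9 bytes -> 2 uint64  */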


>> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;

> Why do we need both "cpumap_size * 8" and local_size additional bytes
> here? Both contain the number of bytes necessary to contain a cpumap
> bitmask and in fact I suspect they are both equal at this point (see
> point about rounding above).

The hypervisor returns a cpumask based on bytes, while the tools use
uint64-based cpumasks. In practice the two are equivalent as long as the
hypervisor writes a multiple of 8 bytes and the system is little endian.
I prefer a clean mapping between the two interfaces here.
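
A sketch of what that mapping amounts to on the tools side (the helper name
is illustrative and not part of the patch); on little endian, with a source
length that is a multiple of 8, it degenerates into a plain memcpy:

#include <stdint.h>
#include <string.h>

/* Copy the hypervisor's byte-oriented cpumap into a uint64-based map,
 * without relying on byte order or on the source length being a
 * multiple of 8.  Illustrative helper, not taken from the patch.     */
static void cpumap_bytes_to_uint64(const uint8_t *src, int src_bytes,
                                   uint64_t *dst, int dst_words)
{
    int i;

    memset(dst, 0, dst_words * sizeof(*dst));
    for ( i = 0; i < src_bytes && i / 8 < dst_words; i++ )
        dst[i / 8] |= (uint64_t)src[i] << (8 * (i % 8));
}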


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

Attachment: cpupool-tools-cpumask.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

