WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] [PATCH] xend: fix best NUMA node allocation

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] xend: fix best NUMA node allocation
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Thu, 15 Apr 2010 16:05:50 +0200
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 15 Apr 2010 07:11:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.21 (X11/20090329)
Hi,

Since we moved several NUMA info fields from physinfo into separate functions/structures, the node-picking algorithm must be adapted as well. Currently xm create complains about undefined hash values.
The patch uses the new Python xc binding to get the information and builds a reverse mapping for node_to_cpu, since we now only have a cpu_to_node field.
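The reverse mapping mentioned above can be sketched in isolation as follows (a minimal standalone sketch; the cpu_to_node values are invented for illustration, standing in for what xc.topologyinfo() would return):

```python
# Build a node_to_cpu reverse mapping from a flat cpu_to_node list,
# mirroring what the patch does with xc.topologyinfo()['cpu_to_node'].
def build_node_to_cpu(cpu_to_node, max_node_index):
    # One (possibly empty) CPU list per node.
    node_to_cpu = [[] for _ in range(max_node_index + 1)]
    for cpu, node in enumerate(cpu_to_node):
        node_to_cpu[node].append(cpu)
    return node_to_cpu

# Example: 4 CPUs, CPUs 0-1 on node 0, CPUs 2-3 on node 1.
print(build_node_to_cpu([0, 0, 1, 1], 1))  # [[0, 1], [2, 3]]
```

Nodes with no CPUs simply end up with an empty list, which the load calculation below penalizes.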

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12
diff -r 2c2591185f8c tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Thu Apr 15 13:16:17 2010 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Thu Apr 15 15:54:41 2010 +0200
@@ -2711,7 +2711,7 @@
         else:
             def find_relaxed_node(node_list):
                 import sys
-                nr_nodes = info['max_node_id']+1
+                nr_nodes = info['max_node_index'] + 1
                 if node_list is None:
                     node_list = range(0, nr_nodes)
                 nodeload = [0]
@@ -2724,35 +2724,40 @@
                         if sxp.child_value(vcpu, 'online') == 0: continue
                         cpumap = list(sxp.child_value(vcpu,'cpumap'))
                         for i in range(0, nr_nodes):
-                            node_cpumask = info['node_to_cpu'][i]
+                            node_cpumask = node_to_cpu[i]
                             for j in node_cpumask:
                                 if j in cpumap:
                                     nodeload[i] += 1
                                     break
                 for i in range(0, nr_nodes):
-                    if len(info['node_to_cpu'][i]) == 0:
+                    if len(node_to_cpu[i]) == 0:
                         nodeload[i] += 8
                     else:
-                        nodeload[i] = int(nodeload[i] * 16 / len(info['node_to_cpu'][i]))
+                        nodeload[i] = int(nodeload[i] * 16 / len(node_to_cpu[i]))
                         if i not in node_list:
                             nodeload[i] += 8
                 return map(lambda x: x[0], sorted(enumerate(nodeload), key=lambda x:x[1]))
 
-            info = xc.physinfo()
-            if info['nr_nodes'] > 1:
-                node_memory_list = info['node_to_memory']
+            info = xc.numainfo()
+            if info['max_node_index'] > 0:
+                node_memory_list = info['node_memfree']
+                node_to_cpu = []
+                for i in range(0, info['max_node_index'] + 1):
+                    node_to_cpu.append([])
+                for cpu, node in enumerate(xc.topologyinfo()['cpu_to_node']):
+                    node_to_cpu[node].append(cpu)
                 needmem = self.image.getRequiredAvailableMemory(self.info['memory_dynamic_max']) / 1024
                 candidate_node_list = []
-                for i in range(0, info['max_node_id']+1):
-                    if node_memory_list[i] >= needmem and len(info['node_to_cpu'][i]) > 0:
+                for i in range(0, info['max_node_index'] + 1):
+                    if node_memory_list[i] >= needmem and len(node_to_cpu[i]) > 0:
                         candidate_node_list.append(i)
                 best_node = find_relaxed_node(candidate_node_list)[0]
-                cpumask = info['node_to_cpu'][best_node]
-                best_nodes = find_relaxed_node(filter(lambda x: x != best_node, range(0,info['max_node_id']+1)))
+                cpumask = node_to_cpu[best_node]
+                best_nodes = find_relaxed_node(filter(lambda x: x != best_node, range(0,info['max_node_index']+1)))
                 for node_idx in best_nodes:
                     if len(cpumask) >= self.info['VCPUs_max']:
                         break
-                    cpumask = cpumask + info['node_to_cpu'][node_idx]
+                    cpumask = cpumask + node_to_cpu[node_idx]
                     log.debug("allocating additional NUMA node %d", node_idx)
                 for v in range(0, self.info['VCPUs_max']):
                     xc.vcpu_setaffinity(self.domid, v, cpumask)
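For reference, the node-ordering idiom used by find_relaxed_node in the hunk above, sorting node indices by their computed load, lowest first, can be exercised on its own (the load values here are made up for illustration):

```python
# Sort node indices by load, least-loaded first, as find_relaxed_node does
# via map(lambda x: x[0], sorted(enumerate(nodeload), key=lambda x: x[1])).
def nodes_by_load(nodeload):
    return [node for node, load in sorted(enumerate(nodeload), key=lambda x: x[1])]

# Node 2 carries the least load, so it is picked first.
print(nodes_by_load([16, 8, 0, 24]))  # [2, 1, 0, 3]
```

The first element of the result is the "best" node; the patch places the domain's vCPUs there and extends the cpumask with further nodes from the list until VCPUs_max is covered.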
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel