
Re: [XEN RFC PATCH 25/40] xen/arm: unified entry to parse all NUMA data from device tree



Hi Stefano,

On 31/08/2021 01:54, Stefano Stabellini wrote:
On Wed, 11 Aug 2021, Wei Chen wrote:
In this API, we scan the whole device tree to parse the CPU node ids, memory
node ids and the distance-map. Although early_scan_node already invokes a
handler to process memory nodes, parsing the memory node id there would
require embedding NUMA parsing code in that handler, and we would still
need to scan the whole device tree to find the CPU NUMA ids and the
distance-map. So we include the memory NUMA id parsing in this
API too. Another benefit is that we get a single entry point for parsing
all NUMA data from the device tree.

Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
---
  xen/arch/arm/numa_device_tree.c | 31 ++++++++++++++++++++++++++++---
  xen/include/asm-arm/numa.h      |  1 +
  2 files changed, 29 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 6e0d1d3d9f..27ffb72f7b 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -131,7 +131,8 @@ save_memblk:
  }
/* Parse CPU NUMA node info */
-int __init device_tree_parse_numa_cpu_node(const void *fdt, int node)
+static int __init
+device_tree_parse_numa_cpu_node(const void *fdt, int node)
  {
      uint32_t nid;
@@ -147,7 +148,7 @@ int __init device_tree_parse_numa_cpu_node(const void *fdt, int node)
  }
/* Parse memory node NUMA info */
-int __init
+static int __init
  device_tree_parse_numa_memory_node(const void *fdt, int node,
      const char *name, uint32_t addr_cells, uint32_t size_cells)
  {
@@ -202,7 +203,7 @@ device_tree_parse_numa_memory_node(const void *fdt, int node,
  }
/* Parse NUMA distance map v1 */
-int __init
+static int __init
  device_tree_parse_numa_distance_map_v1(const void *fdt, int node)
  {
      const struct fdt_property *prop;
@@ -267,3 +268,27 @@ device_tree_parse_numa_distance_map_v1(const void *fdt, int node)
return 0;
  }
+
+static int __init fdt_scan_numa_nodes(const void *fdt,
+                int node, const char *uname, int depth,
+                u32 address_cells, u32 size_cells, void *data)
+{
+    int ret = 0;
+
+    if ( fdt_node_check_type(fdt, node, "cpu") == 0 )
+        ret = device_tree_parse_numa_cpu_node(fdt, node);
+    else if ( fdt_node_check_type(fdt, node, "memory") == 0 )
+        ret = device_tree_parse_numa_memory_node(fdt, node, uname,
+                                address_cells, size_cells);
+    else if ( fdt_node_check_compatible(fdt, node,
+                                "numa-distance-map-v1") == 0 )
+        ret = device_tree_parse_numa_distance_map_v1(fdt, node);
+
+    return ret;
+}

Julien, do you have an opinion on whether it might be worth reusing the
existing early_scan_node function for this, to avoid another full FDT
scan (i.e. another call to device_tree_for_each_node)?

I don't like the full FDT scan either, and I had actually drafted an e-mail in reply to [1] suggesting we parse all the NUMA information from early_scan_node().

However, we don't know whether ACPI or DT will be used at the time early_scan_node() is called. So we would need to revert the NUMA parsing afterwards in the ACPI case, which would make the code a little more awkward.

So I decided to drop my e-mail because I prefer the full DT scan for now. We can look at optimizing later if this becomes a pain point.

Cheers,

[1] https://lore.kernel.org/xen-devel/DB9PR08MB6857604B3D4B690F2B8832BD9EC79@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

--
Julien Grall


