[PATCH v2 15/17] xen/arm: Set correct per-cpu cpu_core_mask
From: Henry Wang <Henry.Wang@xxxxxxx>

In the common sysctl command XEN_SYSCTL_physinfo, cores_per_socket is
calculated based on the cpu_core_mask of CPU0. Currently on Arm this is
a fixed value of 1 (which can be checked via xl info), which is not
correct. This is because during the Arm CPU online process,
set_cpu_sibling_map() only sets the per-cpu cpu_core_mask for the CPU
itself.

cores_per_socket refers to the number of cores that belong to the same
socket (NUMA node). Therefore, this commit introduces a helper function
numa_set_cpu_core_mask(cpu), which sets the per-cpu cpu_core_mask to
the CPUs in the same NUMA node as cpu. Calling this function at boot
time ensures a correct cpu_core_mask, so that the correct
cores_per_socket is returned by XEN_SYSCTL_physinfo.

Signed-off-by: Henry Wang <Henry.Wang@xxxxxxx>
---
v1 -> v2:
1. New patch
---
 xen/arch/arm/include/asm/numa.h |  7 +++++++
 xen/arch/arm/numa.c             | 11 +++++++++++
 xen/arch/arm/setup.c            |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/xen/arch/arm/include/asm/numa.h b/xen/arch/arm/include/asm/numa.h
index a0b8d7a11c..e66fb0a11f 100644
--- a/xen/arch/arm/include/asm/numa.h
+++ b/xen/arch/arm/include/asm/numa.h
@@ -46,6 +46,7 @@ extern void numa_set_distance(nodeid_t from, nodeid_t to,
 extern void numa_detect_cpu_node(unsigned int cpu);
 extern int numa_device_tree_init(const void *fdt);
 extern void numa_init(void);
+extern void numa_set_cpu_core_mask(int cpu);
 
 /*
  * Device tree NUMA doesn't have architecural node id.
@@ -62,6 +63,12 @@ static inline unsigned int numa_node_to_arch_nid(nodeid_t n)
 #define cpu_to_node(cpu) 0
 #define node_to_cpumask(node) (cpu_online_map)
 
+static inline void numa_set_cpu_core_mask(int cpu)
+{
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &cpu_possible_map);
+}
+
 /*
  * TODO: make first_valid_mfn static when NUMA is supported on Arm, this
  * is required because the dummy helpers are using it.
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index e9081d45ce..ef245e39a8 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -52,6 +52,17 @@ int __init arch_numa_setup(const char *opt)
     return -EINVAL;
 }
 
+void numa_set_cpu_core_mask(int cpu)
+{
+    nodeid_t node = cpu_to_node[cpu];
+
+    if ( node == NUMA_NO_NODE )
+        node = 0;
+
+    cpumask_or(per_cpu(cpu_core_mask, cpu),
+               per_cpu(cpu_core_mask, cpu), &node_to_cpumask(node));
+}
+
 void __init numa_set_distance(nodeid_t from, nodeid_t to,
                               unsigned int distance)
 {
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 4cdc7e2edb..d45becedee 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -1136,6 +1136,11 @@ void __init start_xen(unsigned long boot_phys_offset,
     }
 
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
+
+    /* Set per-cpu cpu_core_mask to CPUs that belong to the same NUMA node. */
+    for_each_online_cpu ( i )
+        numa_set_cpu_core_mask(i);
+
     /* TODO: smp_cpus_done(); */
 
     /* This should be done in a vpmu driver but we do not have one yet. */
--
2.25.1
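To illustrate the effect the patch is after, here is a minimal,
self-contained sketch of the same idea in plain C. It stands in
unsigned long bitmasks for Xen's per-cpu cpumask_t, and the node_of[]
table, the set_core_mask() helper and the popcount-based
cores_per_socket computation are illustrative assumptions, not Xen
code:

#include <stdio.h>

#define NR_CPUS 8

/* Hypothetical CPU -> NUMA node map: two nodes with four cores each. */
static const int node_of[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

/* One bit per CPU, standing in for Xen's per-cpu cpumask_t. */
static unsigned long cpu_core_mask[NR_CPUS];

/* Analogue of numa_set_cpu_core_mask(): OR in every CPU of cpu's node. */
static void set_core_mask(int cpu)
{
    for ( int i = 0; i < NR_CPUS; i++ )
        if ( node_of[i] == node_of[cpu] )
            cpu_core_mask[cpu] |= 1UL << i;
}

int main(void)
{
    /* Mimics the new loop over online CPUs in start_xen(). */
    for ( int cpu = 0; cpu < NR_CPUS; cpu++ )
        set_core_mask(cpu);

    /*
     * XEN_SYSCTL_physinfo derives cores_per_socket from CPU0's mask;
     * once the mask covers the whole node, the weight here is 4.
     */
    printf("cores_per_socket = %d\n", __builtin_popcountl(cpu_core_mask[0]));

    return 0;
}

Without the per-node pass, each mask would contain only the CPU itself,
so the popcount for CPU0 would be the fixed 1 that the commit message
describes.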