
[Xen-devel] [PATCH] xen: cpupool: forbid to split cores among different pools



On a system with hyperthreading, we currently allow putting cpus that
are SMT siblings in different cpupools. This is bad for a number of
reasons.

For instance, if the threads of a core are in different pools, the
schedulers can't tell whether that core is fully idle. Right now this
is a load-balancing/resource-efficiency problem. Furthermore, if at
some point we want to implement core-scheduling, that too is impossible
while hyperthreads are split among pools.

Therefore, let's start allowing a cpu into a cpupool only if each of
its SMT siblings is either:
- in that same pool, or
- outside of any pool.

Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
---
Cc: Juergen Gross <jgross@xxxxxxxx>
---
 xen/common/cpupool.c |   34 +++++++++++++++++++++++++++++-----
 1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index 1e8edcbd57..1e52fea5ac 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -264,10 +264,24 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
+    unsigned int s;
     struct domain *d;
 
     if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
         return -EADDRNOTAVAIL;
+
+    /*
+     * If we have SMT, we only allow a new cpu in, if its siblings are either
+     * in this same cpupool too, or outside of any pool.
+     */
+
+    for_each_cpu(s, per_cpu(cpu_sibling_mask, cpu))
+    {
+        if ( !cpumask_test_cpu(s, c->cpu_valid) &&
+             !cpumask_test_cpu(s, &cpupool_free_cpus) )
+            return -EBUSY;
+    }
+
     ret = schedule_cpu_switch(cpu, c);
     if ( ret )
         return ret;
@@ -646,18 +660,28 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
         cpupool_dprintk("cpupool_assign_cpu(pool=%d,cpu=%d)\n",
                         op->cpupool_id, cpu);
         spin_lock(&cpupool_lock);
+        c = cpupool_find_by_id(op->cpupool_id);
+        ret = -ENOENT;
+        if ( c == NULL )
+            goto addcpu_out;
+        /* Pick a cpu from free cores, or from cores with cpus already in c */
         if ( cpu == XEN_SYSCTL_CPUPOOL_PAR_ANY )
-            cpu = cpumask_first(&cpupool_free_cpus);
+        {
+            for_each_cpu(cpu, &cpupool_free_cpus)
+            {
+                const cpumask_t *siblings = per_cpu(cpu_sibling_mask, cpu);
+
+                if ( cpumask_intersects(siblings, c->cpu_valid) ||
+                     cpumask_subset(siblings, &cpupool_free_cpus) )
+                    break;
+            }
+        }
         ret = -EINVAL;
         if ( cpu >= nr_cpu_ids )
             goto addcpu_out;
         ret = -ENODEV;
         if ( !cpumask_test_cpu(cpu, &cpupool_free_cpus) )
             goto addcpu_out;
-        c = cpupool_find_by_id(op->cpupool_id);
-        ret = -ENOENT;
-        if ( c == NULL )
-            goto addcpu_out;
         ret = cpupool_assign_cpu_locked(c, cpu);
     addcpu_out:
         spin_unlock(&cpupool_lock);


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
