
[Xen-devel] [PATCH 19/19] xen: credit2: use cpumask_first instead of cpumask_any when choosing cpu



because it is cheaper, and there is not much point in
randomizing which cpu gets selected anyway, as that
choice will be overridden shortly after, in runq_tickle().
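
For context, cpumask_first() boils down to a single find-first-set-bit
scan of the mask, while cpumask_any() additionally draws a random
number and walks the mask to spread the choice around. The snippet
below is only an illustration of that cost difference, not the actual
Xen implementations (using get_random() as the entropy source is an
assumption):

/* Illustration only, not the real helpers. */
static unsigned int pick_cheap(const cpumask_t *mask)
{
    /* What cpumask_first() amounts to: lowest set cpu, one bit scan. */
    return cpumask_first(mask);
}

static unsigned int pick_randomized(const cpumask_t *mask)
{
    /* Roughly what cpumask_any() amounts to: start from the first set
     * cpu and take a random number of cpumask_next() steps. */
    unsigned int cpu = cpumask_first(mask);
    unsigned int n = cpumask_weight(mask);

    if ( n > 1 && cpu < nr_cpu_ids )
        for ( n = get_random() % n; n--; )
        {
            unsigned int next = cpumask_next(cpu, mask);

            if ( next >= nr_cpu_ids )
                break;
            cpu = next;
        }

    return cpu;
}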

If we really feel the need (e.g., if benchmarking proves
it worthwhile), we can record the last cpu used by
csched2_cpu_pick() and migrate() in a per-runq variable,
and then use cpumask_cycle()... but this really does not
look necessary.
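
Just to sketch what that alternative could look like (NOT part of this
patch; 'pick_bias' is only an assumed new field in the per-runqueue
data, struct csched2_runqueue_data):

static unsigned int runq_pick_cpu(struct csched2_runqueue_data *rqd,
                                  const cpumask_t *mask)
{
    /* Start cycling from wherever the previous pick landed... */
    unsigned int cpu = cpumask_cycle(rqd->pick_bias, mask);

    /* ...and remember this pick for the next caller. */
    if ( cpu < nr_cpu_ids )
        rqd->pick_bias = cpu;

    return cpu;
}

That would spread picks across the runqueue's pcpus even before
runq_tickle() has its say, at the price of one more per-runq field.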

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
---
Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
Cc: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
---
 xen/common/sched_credit2.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index a8b3a85..afd432e 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1545,7 +1545,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
         {
             cpumask_and(cpumask_scratch, vc->cpu_hard_affinity,
                         &svc->migrate_rqd->active);
-            new_cpu = cpumask_any(cpumask_scratch);
+            new_cpu = cpumask_first(cpumask_scratch);
             if ( new_cpu < nr_cpu_ids )
                 goto out_up;
         }
@@ -1604,7 +1604,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
 
     cpumask_and(cpumask_scratch, vc->cpu_hard_affinity,
                 &prv->rqd[min_rqi].active);
-    new_cpu = cpumask_any(cpumask_scratch);
+    new_cpu = cpumask_first(cpumask_scratch);
     BUG_ON(new_cpu >= nr_cpu_ids);
 
  out_up:
@@ -1718,7 +1718,7 @@ static void migrate(const struct scheduler *ops,
 
         cpumask_and(cpumask_scratch, svc->vcpu->cpu_hard_affinity,
                     &trqd->active);
-        svc->vcpu->processor = cpumask_any(cpumask_scratch);
+        svc->vcpu->processor = cpumask_first(cpumask_scratch);
         ASSERT(svc->vcpu->processor < nr_cpu_ids);
 
         __runq_assign(svc, trqd);

