
[Xen-devel] [RFC PATCH 3/3] sched/credit: Avoid doing cpu_pick calculation twice when deciding to move



In csched_vcpu_acct(), we call _csched_cpu_pick() in order to decide
whether to consider migrating; but we then throw that result away and
repeat the calculation from context_saved() if we decide we do need to move.

Instead, just initiate the migration and let vcpu_migrate_finish(),
called from context_saved(), determine whether any changes actually need
to be made.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
I had considered calling vcpu_migrate_start() instead, but that seemed
like overkill given that we know the vcpu is currently running; the same
goes for the other place that sets _VPF_migrating directly.

CC: Dario Faggioli <dfaggioli@xxxxxxxx>
---
 xen/common/sched_credit.c | 26 +++++++++-----------------
 xen/common/schedule.c     |  6 ------
 2 files changed, 9 insertions(+), 23 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 9bc638c09c..bf3ed09f5d 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -940,7 +940,6 @@ static void
 csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
 {
     struct csched_vcpu * const svc = CSCHED_VCPU(current);
-    const struct scheduler *ops = per_cpu(scheduler, cpu);
 
     ASSERT( current->processor == cpu );
     ASSERT( svc->sdom != NULL );
@@ -973,7 +972,6 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
     }
     else
     {
-        unsigned int new_cpu;
         unsigned long flags;
         spinlock_t *lock = vcpu_schedule_lock_irqsave(current, &flags);
 
@@ -982,24 +980,18 @@ csched_vcpu_acct(struct csched_private *prv, unsigned int cpu)
          * migrating it to run elsewhere (see multi-core and multi-thread
          * support in csched_cpu_pick()).
          */
-        new_cpu = _csched_cpu_pick(ops, current, 0);
+        set_bit(_VPF_migrating, &current->pause_flags);
+        cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
 
         vcpu_schedule_unlock_irqrestore(lock, flags, current);
 
-        if ( new_cpu != cpu )
-        {
-            SCHED_VCPU_STAT_CRANK(svc, migrate_r);
-            SCHED_STAT_CRANK(migrate_running);
-            set_bit(_VPF_migrating, &current->pause_flags);
-            /*
-             * As we are about to tickle cpu, we should clear its bit in
-             * idlers. But, if we are here, it means there is someone running
-             * on it, and hence the bit must be zero already.
-             */
-            ASSERT(!cpumask_test_cpu(cpu,
-                                     CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
-            cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
-        }
+        /*
+         * As we are about to tickle cpu, we should clear its bit in
+         * idlers. But, if we are here, it means there is someone running
+         * on it, and hence the bit must be zero already.
+         */
+        ASSERT(!cpumask_test_cpu(cpu,
+                                 CSCHED_PRIV(per_cpu(scheduler, cpu))->idlers));
     }
 }
 
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 5da2a7d8c8..f6fa0e7e4d 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -593,12 +593,6 @@ static void vcpu_move_nosched(struct vcpu *v, unsigned int new_cpu)
     sched_move_irqs(v);
 }
 
-/*
- * Initiating migration
- * 
- * In order to migrate, we need the vcpu in question to have stopped
- * running and be taken off the runqueues; we also need to hold the lock 
- */
 void vcpu_migrate_start(struct vcpu *v)
 {
     set_bit(_VPF_migrating, &v->pause_flags);
-- 
2.16.3

