
Re: [Xen-devel] [PATCH 01/19] xen: sched: leave CPUs doing tasklet work alone.



On 18/06/16 00:11, Dario Faggioli wrote:
In both Credit1 and Credit2, stop considering a pCPU idle
if the reason the idle vCPU is being selected is to do
tasklet work.

Not doing so means that the tickling and load balancing
logic, seeing the pCPU as idle, considers it a candidate
for picking up vCPUs. But the pCPU won't actually pick
up or schedule any vCPU, which would then remain in the
runqueue. That is bad, especially if there are other,
truly idle pCPUs that could execute it.
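
For illustration, here is a minimal standalone sketch (not the
actual Xen code; the helper name and boolean parameters are made
up for the example) of the rule being enforced: a pCPU only
advertises itself in the idlers mask when it is picking the idle
vCPU *and* has no tasklet work to run.

    /* Illustrative sketch only: should this pCPU be in the idlers mask? */
    static void update_idlers_mask(cpumask_t *idlers, unsigned int cpu,
                                   bool next_is_idle_vcpu,
                                   bool tasklet_work_scheduled)
    {
        /*
         * Running the idle vCPU just to process tasklet work is not
         * really being idle: such a pCPU won't pick up any vCPU, so it
         * must not be offered as a candidate by tickling/load balancing.
         */
        if ( next_is_idle_vcpu && !tasklet_work_scheduled )
            cpumask_set_cpu(cpu, idlers);
        else
            cpumask_clear_cpu(cpu, idlers);
    }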

The only drawback is that we can't assume that a pCPU is
always marked as idle when being removed from an
instance of the Credit2 scheduler (csched2_deinit_pdata).
In fact, if we are in stop-machine (i.e., during suspend
or shutdown), the pCPUs are running the stopmachine_tasklet
and hence are actually marked as busy. On the other hand,
when removing a pCPU from a Credit2 pool, it will indeed
be idle. The only thing we can do, therefore, is to
remove the BUG_ON() check.

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Reviewed-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
---
Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
Cc: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
---
  xen/common/sched_credit.c  |   12 ++++++------
  xen/common/sched_credit2.c |   14 ++++++++++----
  2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index a38a63d..a6645a2 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -1819,24 +1819,24 @@ csched_schedule(
      else
          snext = csched_load_balance(prv, cpu, snext, &ret.migrated);

+ out:
      /*
       * Update idlers mask if necessary. When we're idling, other CPUs
       * will tickle us when they get extra work.
       */
-    if ( snext->pri == CSCHED_PRI_IDLE )
+    if ( tasklet_work_scheduled || snext->pri != CSCHED_PRI_IDLE )
      {
-        if ( !cpumask_test_cpu(cpu, prv->idlers) )
-            cpumask_set_cpu(cpu, prv->idlers);
+        if ( cpumask_test_cpu(cpu, prv->idlers) )
+            cpumask_clear_cpu(cpu, prv->idlers);
      }
-    else if ( cpumask_test_cpu(cpu, prv->idlers) )
+    else if ( !cpumask_test_cpu(cpu, prv->idlers) )
      {
-        cpumask_clear_cpu(cpu, prv->idlers);
+        cpumask_set_cpu(cpu, prv->idlers);
      }

      if ( !is_idle_vcpu(snext->vcpu) )
          snext->start_time += now;

-out:
      /*
       * Return task to run next...
       */
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 1933ff1..cf8455c 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -1910,8 +1910,16 @@ csched2_schedule(
      }
      else
      {
-        /* Update the idle mask if necessary */
-        if ( !cpumask_test_cpu(cpu, &rqd->idle) )
+        /*
+         * Update the idle mask if necessary. Note that, if we're scheduling
+         * idle in order to carry on some tasklet work, we want to play busy!
+         */
+        if ( tasklet_work_scheduled )
+        {
+            if ( cpumask_test_cpu(cpu, &rqd->idle) )
+                cpumask_clear_cpu(cpu, &rqd->idle);
+        }
+        else if ( !cpumask_test_cpu(cpu, &rqd->idle) )
              cpumask_set_cpu(cpu, &rqd->idle);
          /* Make sure avgload gets updated periodically even
           * if there's no activity */
@@ -2291,8 +2299,6 @@ csched2_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
      /* No need to save IRQs here, they're already disabled */
      spin_lock(&rqd->lock);

-    BUG_ON(!cpumask_test_cpu(cpu, &rqd->idle));
-
      printk("Removing cpu %d from runqueue %d\n", cpu, rqi);

      cpumask_clear_cpu(cpu, &rqd->idle);




 

