
Re: [Xen-devel] [PATCH 2/3] xen/sched: remove cpu from pool0 before removing it



On 13.08.19 19:11, Dario Faggioli wrote:
On Fri, 2019-08-02 at 15:07 +0200, Juergen Gross wrote:
Today a cpu which is removed from the system is taken directly from
Pool0 to the offline state. This will conflict with the new idle
scheduler, so remove it from Pool0 first. Additionally accept removing
a free cpu instead of requiring it to be in Pool0.

For the resume failed case we need to call the scheduler code for that
situation after the cpupool handling, so move the scheduler code into
a function and call it from cpupool_cpu_remove_forced() and remove the
CPU_RESUME_FAILED case from cpu_schedule_callback().
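
A rough sketch of that flow; the helper name sched_resume_failed() and
the exact signatures are assumptions for illustration, not names from
the patch:

    /* Formerly the CPU_RESUME_FAILED case of cpu_schedule_callback(). */
    static void sched_resume_failed(unsigned int cpu)
    {
        /* scheduler cleanup for a cpu that failed to come back on resume */
    }

    static void cpupool_cpu_remove_forced(unsigned int cpu)
    {
        /* ... cpupool handling first ... */
        sched_resume_failed(cpu);
    }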

Note that we are now calling schedule_cpu_switch() in stop_machine
context, so we need to switch from spin_lock_irq() to spin_lock_irqsave().
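
To illustrate the locking change: spin_lock_irq() requires interrupts
to be enabled on entry, which stop_machine context cannot guarantee.
A minimal sketch, using the cpupool_lock from cpupool.c:

    /* Before: only valid when interrupts are enabled on entry. */
    spin_lock_irq(&cpupool_lock);
    /* ... update cpupool state ... */
    spin_unlock_irq(&cpupool_lock);

    /* After: safe in stop_machine context, where interrupts may already
     * be disabled; the previous interrupt state is saved and restored. */
    unsigned long flags;

    spin_lock_irqsave(&cpupool_lock, flags);
    /* ... update cpupool state ... */
    spin_unlock_irqrestore(&cpupool_lock, flags);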

Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---

--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -282,22 +282,14 @@ static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
      return 0;
  }
-static long cpupool_unassign_cpu_helper(void *info)
+static int cpupool_unassign_cpu_epilogue(struct cpupool *c)

In schedule.c, for a similar situation, we have used '_start' and
'_finish' as suffixes. What do you think about using those here too?

It's certainly a minor thing, I know, but I (personally) like them
better (especially compared to 'epilogue'), and I think it gives us some
consistency (yes, sure, different files... but scheduling and cpupools
are quite tightly related).
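
For illustration, with the suggested suffixes the split would read along
these lines (a sketch; the exact signatures are assumptions):

    /* Phase 1: prepare the unassignment, in hypercall context. */
    static int cpupool_unassign_cpu_start(struct cpupool *c, unsigned int cpu);

    /* Phase 2: complete it, from the continue_hypercall_on_cpu() callback. */
    static int cpupool_unassign_cpu_finish(struct cpupool *c);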

Okay, will rename.


Juergen
