
Re: [Xen-devel] [PATCH v2 08/48] xen/sched: switch vcpu_schedule_lock to unit_schedule_lock



On 04.09.19 16:02, Jan Beulich wrote:
On 09.08.2019 16:57, Juergen Gross wrote:
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -250,7 +250,8 @@ static inline void vcpu_runstate_change(
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 {
-    spinlock_t *lock = likely(v == current) ? NULL : vcpu_schedule_lock_irq(v);
+    spinlock_t *lock = likely(v == current)
+                       ? NULL : unit_schedule_lock_irq(v->sched_unit);
     s_time_t delta;
 
     memcpy(runstate, &v->runstate, sizeof(*runstate));
@@ -259,7 +260,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
         runstate->time[runstate->state] += delta;
 
     if ( unlikely(lock != NULL) )
-        vcpu_schedule_unlock_irq(lock, v);
+        unit_schedule_unlock_irq(lock, v->sched_unit);
 }

Taking this as an example: the coarser granularity of the lock
means that no two vCPUs within a unit can obtain their runstate
in parallel. While this may be acceptable for core scheduling,
I'm afraid it's too restrictive with sockets or nodes as units.
Therefore I think this lock either needs to be split (I'm not
sure that's feasible) or become an r/w lock.
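
For illustration, a minimal sketch of what the r/w variant of this
path could look like (untested; the unit_schedule_read_lock_irq() /
unit_schedule_read_unlock_irq() helpers are hypothetical names, not
an existing Xen API):

/*
 * Sketch only: with an rwlock_t per unit, pure readers such as
 * vcpu_runstate_get() could take the lock in read mode, so sibling
 * vCPUs of one unit would no longer serialise against each other
 * when merely sampling runstate.
 */
void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
{
    rwlock_t *lock = likely(v == current)
                     ? NULL : unit_schedule_read_lock_irq(v->sched_unit);

    memcpy(runstate, &v->runstate, sizeof(*runstate));
    /* ... accumulate time spent in the current state, as today ... */

    if ( unlikely(lock != NULL) )
        unit_schedule_read_unlock_irq(lock, v->sched_unit);
}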

You are aware that even today with credit2 all CPUs of a socket share
the same lock (unless modified via a boot parameter)?
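
Roughly, that existing arrangement looks like this (a simplified
illustration, not the exact Xen structures): each pCPU only holds a
pointer to its scheduler lock, and credit2 points all pCPUs of a
runqueue (by default one per socket) at the same spinlock:

/* Simplified illustration, not the actual Xen data structures. */
struct pcpu_sched_data {
    spinlock_t *schedule_lock;   /* may alias a lock shared by many pCPUs */
};

struct socket_runqueue {
    spinlock_t lock;             /* one lock for every pCPU of the socket */
    /* ... run queue state ... */
};

/* Make a pCPU use the socket-wide run queue lock. */
static void use_shared_lock(struct pcpu_sched_data *pd,
                            struct socket_runqueue *rq)
{
    pd->schedule_lock = &rq->lock;
}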


Juergen



 

