
Re: [Xen-devel] [PATCH 3/9] xen: sched: make locking for {insert, remove}_vcpu consistent



On Tue, 2015-09-29 at 18:31 +0100, Andrew Cooper wrote:
> On 29/09/15 17:55, Dario Faggioli wrote:
> > The insert_vcpu() scheduler hook is called with an
> > inconsistent locking strategy. In fact, it is sometimes
> > invoked while holding the runqueue lock and sometimes
> > when that is not the case.
> > 
> > In other words, some call sites seem to imply that
> > locking should be handled in the callers, in schedule.c
> > --e.g., in schedule_cpu_switch(), which acquires the
> > runqueue lock before calling the hook; others imply
> > that the specific schedulers should be responsible for
> > locking themselves --e.g., in sched_move_domain(), which
> > does not acquire any lock before calling the hook.
> > 
> > The right thing to do seems to be to always defer
> > locking to the specific schedulers, as they are the ones
> > that know what, how and when it is best to lock (as in:
> > runqueue locks, vs. private scheduler locks, vs. both,
> > etc.)
> > 
> > This patch, therefore:
> >  - removes any locking around insert_vcpu() from
> >    generic code (schedule.c);
> >  - adds the _proper_ locking in the hook implementations,
> >    depending on the scheduler (for instance, credit2
> >    does that already; credit1 and RTDS need to grab
> >    the runqueue lock while manipulating runqueues).
> > 
> > In the case of credit1, remove_vcpu() needs some
> > fixing too, i.e.:
> >  - it manipulates runqueues, so the runqueue lock must
> >    be acquired;
> >  - *_lock_irq() is enough, there is no need to do
> >    _irqsave()
> 
> Nothing in any of the generic scheduling code should need interrupts
> disabled at all.
> 
That's a really, really, really interesting point.

I think I see what you mean. However, currently, pretty much **all**
scheduling-related locks are acquired via the _irq or _irqsave
primitives, and that is true for schedule.c as well as for all the
sched_*.c files.
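
For reference, the three flavours at play look more or less like this
(illustrative snippet only, with a made-up prv pointer; this is not
actual Xen code):

    unsigned long flags;

    /* _irqsave: save the current interrupt state, disable IRQs,
     * then take the lock. Safe to use from any context. */
    spin_lock_irqsave(&prv->lock, flags);
    spin_unlock_irqrestore(&prv->lock, flags);

    /* _irq: disable IRQs unconditionally; the caller must know
     * that they were enabled on entry. */
    spin_lock_irq(&prv->lock);
    spin_unlock_irq(&prv->lock);

    /* plain: leave IRQs alone; only safe if the lock is never
     * taken from interrupt context. */
    spin_lock(&prv->lock);
    spin_unlock(&prv->lock);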

> One of the problem areas identified by Jenny during the ticketlock
> performance work was that the SCHEDULE_SOFTIRQ was a large consumer
> of time with interrupts disabled.
>
Right, and I am very much up for investigating whether this can be
improved. However, that seems to me to be the topic for a different
series.

> Is the use of _lock_irq() here to cover another issue expecting
> interrupts to be disabled, or could it be replaced with a plain
> spin_lock()?
> 
As said above, it may well be that a plain spin_lock() would be fine,
here as well as elsewhere. It is being done like this in this patch
for consistency, as that is what happens **everywhere** else in the
scheduling code. In fact, I haven't tried, but converting only one
(or a subset) of the locks to the non-_irq* variants may well make
check_lock() complain.
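
To clarify what I mean by that: if I recall correctly, the IRQ-safety
check in xen/common/spinlock.c works roughly like this (a simplified
sketch from memory, not the exact code):

    /* Each lock remembers whether it was first taken with IRQs
     * disabled (irq_safe = 1) or enabled (irq_safe = 0), and we
     * BUG() if a later acquisition does not match, since mixing
     * the two on the same lock is a deadlock hazard. */
    static void check_lock(struct lock_debug *debug)
    {
        int irq_safe = !local_irq_is_enabled();

        if ( unlikely(debug->irq_safe != irq_safe) )
        {
            /* -1 means "not seen yet"; any other mismatch is fatal. */
            int seen = cmpxchg(&debug->irq_safe, -1, irq_safe);

            BUG_ON(seen == !irq_safe);
        }
    }

In other words, converting a single lock to the plain (non-_irq*)
variants, while it is still being taken with IRQs disabled somewhere
else, would trip that BUG_ON().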

So, can we just let this patch follow suit, and then overhaul and
change/fix (if it proves feasible) all the locking at once, in a
dedicated series? That seems the best approach to me...

> Also, a style nit...
>

> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> > index a1945ac..557efaa 100644

> > @@ -935,15 +946,18 @@ csched_vcpu_remove(const struct scheduler *ops, struct vcpu *vc)
> >          vcpu_unpause(svc->vcpu);
> >      }
> >  
> > +    lock = vcpu_schedule_lock_irq(vc);
> > +
> >      if ( __vcpu_on_runq(svc) )
> >          __runq_remove(svc);
> >  
> > -    spin_lock_irqsave(&(prv->lock), flags);
> > +    spin_lock(&(prv->lock));
> 
> Please drop the superfluous brackets as you are already changing the
> line.
> 
This, I can do for sure. :-)
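
For the record, that means the new line in the hunk above becomes:

    spin_lock(&prv->lock);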

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

