
Re: [Xen-devel] [PATCH 09/16] xen: sched: close potential races when switching scheduler to CPUs



On 18/03/16 19:05, Dario Faggioli wrote:
> by using the sched_switch hook that we have introduced in
> the various schedulers.
> 
> The key is to let the actual switch of scheduler and the
> remapping of the scheduler lock for the CPU (if necessary)
> happen together (in the same critical section) protected
> (at least) by the old scheduler lock for the CPU.

Thanks for trying to sort this out -- I've been looking at this since
yesterday afternoon and it certainly makes my head hurt. :-)

It looks like you want to do the locking inside the sched_switch()
callback, rather than outside of it, so that you can get the locking
order right (global private before per-cpu scheduler lock).  Otherwise
you could just have schedule_cpu_switch() grab and release the lock, and
let the sched_switch() callback set the lock as needed (knowing that the
correct lock is already held and will be released).

But the ordering between prv->lock and the scheduler lock only needs to
be between the prv lock and scheduler lock *of a specific instance* of
the credit2 scheduler -- i.e., between prv->lock and prv->rqd[].lock.
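
Just to have it in front of us, the relevant credit2 structures look
roughly like this (a simplified sketch only, most fields elided):

    /* Simplified sketch of the credit2 data involved (not the full
     * definitions from sched_credit2.c). */
    struct csched2_runqueue_data {
        spinlock_t lock;    /* per-runqueue lock; pcpus' schedule_lock
                             * pointers end up pointing here */
        /* ... runqueue state ... */
    };

    struct csched2_private {
        spinlock_t lock;    /* the "global" private lock of this
                             * scheduler instance */
        struct csched2_runqueue_data rqd[NR_CPUS];  /* the runqueues */
        /* ... */
    };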

And, critically, if we're calling sched_switch, then we already know
that the current pcpu lock is *not* one of the prv->rqd[].locks, because
we check that at the top of schedule_cpu_switch().

So I think there should be no problem with:
1. Grabbing the pcpu schedule lock in schedule_cpu_switch()
2. Grabbing prv->lock in csched2_switch_sched()
3. Setting the per_cpu schedule lock as the very last thing in
csched2_switch_sched()
4. Releasing the (old) pcpu schedule lock in schedule_cpu_switch().

What do you think?

That would allow us to read ppriv_old and vpriv_old with the
schedule_lock held.
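
In pseudo-C, roughly what I have in mind (just a sketch, not the actual
patch; pcpu_schedule_lock_irq() and per_cpu(schedule_data, cpu) are the
existing helpers, while ppriv/vpriv/rqi are only stand-ins here, and the
hook name and signature are assumed from the series):

    /* In schedule_cpu_switch(): */
    old_lock = pcpu_schedule_lock_irq(cpu);       /* 1. take the old pcpu lock */

    /* ppriv_old and vpriv_old can now be read with the lock held */

    new_ops->switch_sched(new_ops, cpu, ppriv, vpriv); /* e.g. csched2_switch_sched() */

    pcpu_schedule_unlock_irq(old_lock, cpu);      /* 4. drop the old pcpu lock */

    /* In csched2_switch_sched(): */
    spin_lock(&prv->lock);                        /* 2. take prv->lock */

    /* ... assign the cpu to a runqueue (rqi), update the private data ... */

    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;
                                                  /* 3. repoint the pcpu lock last */
    spin_unlock(&prv->lock);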

Unfortunately I can't, off the top of my head, think of a good assertion
to put in at #2 to assert that the per-pcpu lock is *not* one of the
runqueue locks in prv, because we don't yet know which runqueue this cpu
will be assigned to.  But we could check when we actually do the lock
assignment, to make sure that it's not already equal.  That way we'll
either deadlock or ASSERT (which is not as good as always ASSERTing, but
is better than either deadlocking or silently carrying on as if nothing
were wrong).
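
I.e., something along these lines right where we repoint the lock (again
just a sketch, with 'rqi' standing for whatever runqueue the cpu gets
assigned to):

    /* Sanity check at the point where the lock pointer actually changes:
     * the pcpu must not already be using this runqueue's lock. */
    ASSERT(per_cpu(schedule_data, cpu).schedule_lock != &prv->rqd[rqi].lock);
    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;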

As an aside -- it seems to me that as soon as we change the scheduler
lock pointer, there's a risk that something else may come along and try
to grab it / access the data.  Does that mean we really ought to use
memory barriers, to make sure that the new lock pointer is written only
after all changes to the scheduler data have been appropriately made?
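
I.e., something like this in the switch_sched callbacks (sketch only;
smp_wmb() being our write barrier):

    /* ... all updates to the scheduler data for this cpu done above ... */

    smp_wmb();   /* make the data writes visible before publishing the lock */
    per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;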

> This also means that, in Credit2 and RTDS, we can get rid
> of the code that was doing the scheduler lock remapping
> in csched2_free_pdata() and rt_free_pdata(), and of their
> triggering ASSERT-s.

Right -- so to put it a different way, *all* schedulers must now set the
locking scheme they wish to use, even if they want to use the default
per-cpu locks.  I think that means we have to do that for arinc653 too,
right?
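
Presumably for a scheduler like arinc653, which just keeps the default
per-cpu locking, the hook would be nearly trivial -- something like the
sketch below (the hook name and signature are assumptions on my part,
not existing code; _lock is the default lock embedded in struct
schedule_data):

    /* Sketch of a minimal switch_sched hook for a scheduler that keeps
     * the default per-cpu lock (hypothetical, not the actual arinc653
     * code). */
    static void
    a653sched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                           void *pdata, void *vdata)
    {
        struct schedule_data *sd = &per_cpu(schedule_data, cpu);

        /* ... any per-cpu / idle vcpu private data handling ... */

        /* Point the pcpu back at the default embedded lock. */
        sd->schedule_lock = &sd->_lock;
    }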

At first I thought we could look at having schedule_cpu_switch() always
reset the lock before calling the switch_sched() callback; but if my
comment about memory barriers is accurate, then that won't work either.
In any case, there are only 4 schedulers, so it's not that hard to just
have them all set the locking scheme they want.

 -George

