
Re: [Xen-devel] [PATCH] sched: fix race between sched_move_domain() and vcpu_wake()

On 10/10/13 18:29, David Vrabel wrote:
From: David Vrabel <david.vrabel@xxxxxxxxxx>

sched_move_domain() changes v->processor for all the domain's VCPUs.
If another domain, softirq etc. triggers a simultaneous call to
vcpu_wake() (e.g., by setting an event channel as pending), then
vcpu_wake() may lock one schedule lock and try to unlock another.

vcpu_schedule_lock() attempts to handle this, but only does so for the
window between reading the schedule_lock from the per-CPU data and the
spin_lock() call.  This does not help with sched_move_domain()
changing v->processor between the calls to vcpu_schedule_lock() and
vcpu_schedule_unlock().
Fix the race by taking the schedule_lock for v->processor in
sched_move_domain().
Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Just taking the lock for the old processor seemed sufficient to me as
anything seeing the new value would lock and unlock using the same new
value.  But do we need to take the schedule_lock for the new processor
as well (in the right order of course)?

So going through the code and trying to reconstruct all the state in my head...

If you look at vcpu_migrate(), it grabs both locks. But it looks like the main purpose for that is so that we can call the migrate SCHED_OP(), which for credit2 needs to do some mucking about with runqueues, and thus needs both locks. In the case of move_domain, this is unnecessary, since it is removed from the old scheduler and then added to the new one.

In a sense, Andrew, you're right: if you change v->processor, then you no longer hold v's schedule lock (unless you do what vcpu_migrate() does, and grab the lock of the processor you're moving to as well). In this case it doesn't matter, because you're just about to release the lock anyway. But it may mislead people in the future trying to figure out what the right thing to do is. We should at the very least add a comment saying that changing v->processor without holding the new lock effectively unlocks v, so no further changes to the processor state should be made. (Or we can do as Keir says and do the double-locking, but that's a bit of a pain, as you can see from vcpu_migrate().)

But I think this patch is still not quite right: both v->processor and per_cpu(schedule_data, ...).schedule_lock may change under your feet; so you always need to take the lock in a loop, checking to make sure that you *still* have the right lock after you have actually grabbed it.

The gears on this code are rusty, however, so please do double-check my thinking here...


Xen-devel mailing list
