
Re: [Xen-devel] [PATCH] RCU: reimplement RCU barrier to avoid deadlock



On 28/01/2020 09:32, Julien Grall wrote:
> On 27/01/2020 18:56, Igor Druzhinin wrote:
>> The existing RCU barrier implementation is prone to a deadlock because
>> IRQs are re-enabled inside stopmachine context. If, due to a race, IRQs
>> are re-enabled on some of the CPUs and softirqs are allowed to be
>> processed in stopmachine (i.e. what currently happens in rcu_barrier()),
>> a timer interrupt is able to trigger the TSC synchronization rendezvous.
>> At that point, sending the TSC synchronization IPI stalls waiting for the
>> other CPUs to synchronize, while they in turn are waiting in the
>> stopmachine busy loop with IRQs disabled.
>>
>> To avoid the scenario above, reimplement rcu_barrier() in a way where
>> IRQs are never disabled at any point. The proposed implementation is just
>> a simplified and specialized version of stopmachine. The semantics of the
>> call are preserved.
> stop_machine_run() is used in a few places within Xen. Why is this a problem 
> for rcu_barrier() and not for the other callers?

It's true that some of them do re-enable interrupts (__cpu_disable).
The reason they are not prone to the described issue is that currently
there is likely no interrupt handler that might lock up the system.
Nevertheless, there are softirq handlers that do (the TSC sync rendezvous),
and rcu_barrier() has to call process_pending_softirqs() inside stopmachine
context due to the nature of its implementation.
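
To make the ordering concrete, below is a small user-space model of the
lockup (an illustration only, not Xen code; the CPU count and timeout are
made up). One thread plays the CPU whose timer softirq started the TSC
rendezvous; the others sit in a stopmachine-style busy loop and, with
"IRQs disabled", never join the rendezvous:

    /* Illustration only: user-space model of the deadlock, not Xen code. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NR_CPUS 4

    static atomic_int rendezvous_count;  /* CPUs that joined the rendezvous */
    static atomic_int stop_demo;         /* only used to let the demo exit */

    /* A CPU stuck in the stopmachine busy loop with IRQs disabled: it never
     * looks at the rendezvous at all. */
    static void *stopmachine_cpu(void *arg)
    {
        (void)arg;
        while ( !atomic_load(&stop_demo) )
            ;                            /* cpu_relax() equivalent */
        return NULL;
    }

    /* The CPU that started the TSC-sync-style rendezvous: it joins and then
     * waits for everyone else, who will never arrive. */
    static void *rendezvous_initiator(void *arg)
    {
        (void)arg;
        atomic_fetch_add(&rendezvous_count, 1);
        for ( int i = 0; i < 5; i++ )    /* bounded only so the demo ends */
        {
            if ( atomic_load(&rendezvous_count) == NR_CPUS )
                return NULL;
            sleep(1);
        }
        printf("rendezvous stuck at %d/%d CPUs -> this is the deadlock\n",
               atomic_load(&rendezvous_count), NR_CPUS);
        atomic_store(&stop_demo, 1);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NR_CPUS];

        pthread_create(&t[0], NULL, rendezvous_initiator, NULL);
        for ( int i = 1; i < NR_CPUS; i++ )
            pthread_create(&t[i], NULL, stopmachine_cpu, NULL);
        for ( int i = 0; i < NR_CPUS; i++ )
            pthread_join(t[i], NULL);

        return 0;
    }

Built with "cc -pthread", the initiator reports the rendezvous as stuck,
which corresponds to the stall described in the patch description.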

>> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
>> ---
>> This change has been stress tested by exercising operations that invoke
>> rcu_barrier() and has not shown any issues.
>> ---
>>   xen/common/rcupdate.c | 36 ++++++++++++++++++++++++++----------
>>   1 file changed, 26 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
>> index cb712c8..95a1f85 100644
>> --- a/xen/common/rcupdate.c
>> +++ b/xen/common/rcupdate.c
>> @@ -145,6 +145,9 @@ struct rcu_barrier_data {
>>       atomic_t *cpu_count;
>>   };
>>
>> +static DEFINE_PER_CPU(struct tasklet, rcu_barrier_tasklet);
>> +static atomic_t rcu_barrier_cpu_count, rcu_barrier_cpu_done;
>> +
>>   static void rcu_barrier_callback(struct rcu_head *head)
>>   {
>>       struct rcu_barrier_data *data = container_of(
>> @@ -152,12 +155,9 @@ static void rcu_barrier_callback(struct rcu_head *head)
>>       atomic_inc(data->cpu_count);
>>   }
>>
>> -static int rcu_barrier_action(void *_cpu_count)
>> +static void rcu_barrier_action(void *unused)
>>   {
>> -    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
>> -
>> -    ASSERT(!local_irq_is_enabled());
>> -    local_irq_enable();
>> +    struct rcu_barrier_data data = { .cpu_count = &rcu_barrier_cpu_count };
>>
>>       /*
>>        * When callback is executed, all previously-queued RCU work on this CPU
>> @@ -172,15 +172,30 @@ static int rcu_barrier_action(void *_cpu_count)
>>           cpu_relax();
>>       }
>>
>> -    local_irq_disable();
>> -
>> -    return 0;
>> +    atomic_inc(&rcu_barrier_cpu_done);
>>   }
>>
>>   int rcu_barrier(void)
>>   {
> 
> stop_machine_run() requires the interrupts to be enabled when called. Is this 
> requirement still the same here? If so, can we document it and add an ASSERT?

Sure, will add.
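
For example, something along these lines at the top of rcu_barrier()
(just a sketch; the exact condition and comment wording still to be decided):

    /* rcu_barrier() must be called with IRQs enabled, outside IRQ context. */
    ASSERT(!in_irq() && local_irq_is_enabled());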

>> -    atomic_t cpu_count = ATOMIC_INIT(0);
>> -    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
>> +    unsigned int i;
>> +
>> +    if ( !get_cpu_maps() )
>> +        return -EBUSY;
> 
> I realize this is also present in the current implementation. However, nobody 
> seems to check the return of the barrier. What would happen if you continue 
> without synchronizing the RCU?

Probably a crash, as from what I saw the existing callers rely on it
completing the synchronization. I either need to change the semantics of
the call or fix up the callers that might be affected; I'd prefer to do
the latter in a follow-up patch.
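
For illustration, a caller that cannot tolerate the barrier failing could,
hypothetically, retry until the cpu maps become available again (assuming
the -EBUSY return is kept):

    /* Hypothetical caller-side handling; not part of this patch. */
    while ( rcu_barrier() == -EBUSY )
        process_pending_softirqs();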

Igor
