
Re: [Xen-devel] [PATCH 2/4] xen/x86: Drop erroneous barriers



>>> On 05.12.16 at 14:43, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 05/12/16 12:28, Jan Beulich wrote:
>>>>> On 05.12.16 at 12:25, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 05/12/16 11:18, Jan Beulich wrote:
>>>>>>> On 05.12.16 at 11:05, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> --- a/xen/arch/x86/smpboot.c
>>>>> +++ b/xen/arch/x86/smpboot.c
>>>>> @@ -346,7 +346,6 @@ void start_secondary(void *unused)
>>>>>      spin_debug_enable();
>>>>>      set_cpu_sibling_map(cpu);
>>>>>      notify_cpu_starting(cpu);
>>>>> -    wmb();
>>>>>  
>>>>>      /*
>>>>>       * We need to hold vector_lock so there the set of online cpus
>>>> Hmm, this one is indeed redundant with the lock_vector_lock()
>>>> following right below, but if that lock wasn't there, I think it
>>>> would be needed to order set_cpu_sibling_map() and the
>>>> setting of the bit in the online map. So I think it would better
>>>> stay (but be relaxed to smp_wmb()).
>>> Why?  It doesn't relate to an smp_rmb() on any other CPU, and is
>>> therefore wrong to use.
>> I think it does, just not with one that's spelled out as smp_rmb().
>> Instead __cpu_up() spins on !cpu_online(), using cpu_relax() as
>> a de-facto equivalent of smp_rmb().
> 
> __cpu_up() is ordered with the cpumask_set_cpu(cpu, &cpu_online_map)
> between the two context hunks.

Exactly - so here we need the write side of that pairing, unless
(as suggested elsewhere) we mean to rely on the barrier implied
by the LOCKed instruction which cpumask_set_cpu() resolves to.
Such "amortization", however, had better be recorded in a
comment, so that a future change is less likely to eliminate a
required barrier altogether.
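
For illustration only, here is a minimal userspace C11 sketch of the
write/read pairing under discussion - this is not Xen code, and
sibling_data / online merely stand in for the sibling map and the
cpu_online_map bit; the release fence plays the role of the smp_wmb()
in start_secondary(), the acquire fence the role of the smp_rmb()
implied on the __cpu_up() side:

/* Build with: cc -std=c11 -pthread pairing.c */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int sibling_data;        /* stands in for the sibling map */
static atomic_bool online;      /* stands in for the cpu_online_map bit */

static int writer(void *arg)    /* models start_secondary() */
{
    (void)arg;
    sibling_data = 42;          /* set_cpu_sibling_map(cpu) */
    /* publish: everything above must be visible before "online" */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&online, true, memory_order_relaxed);
    return 0;
}

static int reader(void *arg)    /* models __cpu_up() */
{
    (void)arg;
    while ( !atomic_load_explicit(&online, memory_order_relaxed) )
        ;                       /* spinning on !cpu_online(cpu) */
    /* the matching read-side barrier */
    atomic_thread_fence(memory_order_acquire);
    printf("sibling_data = %d\n", sibling_data); /* guaranteed to be 42 */
    return 0;
}

int main(void)
{
    thrd_t w, r;

    thrd_create(&r, reader, NULL);
    thrd_create(&w, writer, NULL);
    thrd_join(w, NULL);
    thrd_join(r, NULL);
    return 0;
}

On x86 the LOCKed bit-set which cpumask_set_cpu() resolves to already
provides the write-side ordering, which is why the explicit barrier is
arguably redundant - and that reliance is what the suggested comment
would record.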

Jan

