
Re: [PATCH v5 4/8] x86/mm/tlb: Flush remote and local TLBs concurrently



> On Feb 16, 2021, at 4:10 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> 
> On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
>> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>>       * doing a speculative memory access.
>>       */
>>      if (info->freed_tables) {
>> -            smp_call_function_many(cpumask, flush_tlb_func,
>> -                           (void *)info, 1);
>> +            on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +                                  cpumask);
>>      } else {
>>              /*
>>               * Although we could have used on_each_cpu_cond_mask(),
>> @@ -844,14 +849,15 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
>>                      if (tlb_is_not_lazy(cpu))
>>                              __cpumask_set_cpu(cpu, cond_cpumask);
>>              }
>> -            smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
>> +            on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
>> +                                  cpumask);
>>      }
>> }
> 
> Surely on_each_cpu_mask() is more appropriate? There the compiler can do
> the NULL propagation because it's on the same TU.
> 
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -821,8 +821,7 @@ STATIC_NOPV void native_flush_tlb_multi(
>        * doing a speculative memory access.
>        */
>       if (info->freed_tables) {
> -             on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -                                   cpumask);
> +             on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
>       } else {
>               /*
>                * Although we could have used on_each_cpu_cond_mask(),
> @@ -849,8 +848,7 @@ STATIC_NOPV void native_flush_tlb_multi(
>                       if (tlb_is_not_lazy(cpu))
>                               __cpumask_set_cpu(cpu, cond_cpumask);
>               }
> -             on_each_cpu_cond_mask(NULL, flush_tlb_func, (void *)info, true,
> -                                   cpumask);
> +             on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
>       }
> }

Indeed, and there is actually an additional bug: the second
on_each_cpu_cond_mask() call passes cpumask instead of cond_cpumask.




 

