
Re: [Xen-devel] [PATCH 03/60] xen/sched: let sched_switch_sched() return new lock address



>>> On 12.06.19 at 10:19, <jgross@xxxxxxxx> wrote:
> On 12.06.19 10:05, Andrew Cooper wrote:
>> On 28/05/2019 11:32, Juergen Gross wrote:
>>> @@ -1870,8 +1871,19 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
>>>       old_lock = pcpu_schedule_lock_irq(cpu);
>>>   
>>>       vpriv_old = idle->sched_priv;
>>> -    ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
>>> -    sched_switch_sched(new_ops, cpu, ppriv, vpriv);
>>> +    ppriv_old = sd->sched_priv;
>>> +    new_lock = sched_switch_sched(new_ops, cpu, ppriv, vpriv);
>>> +
>>> +    per_cpu(scheduler, cpu) = new_ops;
>>> +    sd->sched_priv = ppriv;
>>> +
>>> +    /*
>>> +     * (Re?)route the lock to the per pCPU lock as /last/ thing. In fact,
>>> +     * if it is free (and it can be) we want that anyone that manages
>>> +     * taking it, finds all the initializations we've done above in place.
>>> +     */
>>> +    smp_mb();
>> 
>> I realise you're just moving existing code, but this barrier sticks out
>> like a sore thumb.
>> 
>> A full memory barrier is a massive overhead for what should be
>> smp_wmb().  The matching barrier is actually hidden in the implicit
>> semantics of managing to lock sd->schedule_lock (which is trial and error
>> anyway), but the only thing that matters here is that all other written
>> data is in place first.
>> 
>> Beyond that, local causality will cause all reads to be in order (not
>> that they are important) due to logic dependencies.  Any that miss out on
>> this are an optimisation-waiting-to-happen, as the compiler could elide
>> them fully.
> 
> Not that it would really matter for performance (switching cpus between
> cpupools is a _very_ rare operation), I'm fine with transforming the barrier
> into smp_wmb().

This again is a change easy enough to make while committing. I'm
recording the above in case I end up being the committer.

Jan
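
A minimal sketch of the publish-then-reroute pattern discussed above
(the struct and function names below are invented for illustration and
are not the actual schedule_cpu_switch() code; only smp_wmb()/smp_mb()
and the idea of a reroutable per-pCPU lock pointer come from the
thread):

    /*
     * Illustrative sketch: the writer installs all new per-CPU
     * scheduler state, orders those stores with smp_wmb(), and only
     * then reroutes the lock pointer.  A CPU that later takes the
     * lock through the rerouted pointer pairs with this via the
     * acquire semantics of the lock operation, so it observes the
     * fully initialised state.
     */
    struct pcpu_sched_sketch {
        spinlock_t *schedule_lock;   /* pointer that gets rerouted */
        void       *sched_priv;      /* scheduler-private data     */
    };

    static void switch_sched_publish(struct pcpu_sched_sketch *sd,
                                     void *new_priv, spinlock_t *new_lock)
    {
        /* 1. Put the new scheduler-private data in place. */
        sd->sched_priv = new_priv;

        /*
         * 2. Order the store above before the store below.  A full
         *    smp_mb() also works but is stronger than needed: only
         *    store/store ordering matters here, hence smp_wmb().
         */
        smp_wmb();

        /* 3. Reroute the lock as the /last/ thing. */
        sd->schedule_lock = new_lock;
    }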





