
Re: [Xen-devel] [PATCH v2] x86/hpet: Improve handling of timer_deadline



>>> On 15.08.17 at 15:13, <andrew.cooper3@xxxxxxxxxx> wrote:
> timer_deadline is only ever updated via this_cpu() in timer_softirq_action(),
> so is not going to change behind the back of the currently running cpu.
> 
> Update hpet_broadcast_{enter,exit}() to cache the value in a local variable to
> avoid the repeated RELOC_HIDE() penalty.
> 
> handle_hpet_broadcast() reads the timer_deadlines of remote cpus, but there is
> no need to force the read for cpus which are not present in the mask.  One
> requirement is that we only sample the value once (which happens as a side
> effect of RELOC_HIDE()), but is made more explicit with ACCESS_ONCE().
> 
> Bloat-o-meter shows a modest improvement:
> 
>   add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-144 (-144)
>   function                                     old     new   delta
>   hpet_broadcast_exit                          335     313     -22
>   hpet_broadcast_enter                         327     278     -49
>   handle_hpet_broadcast                        572     499     -73
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
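
(As an aside for readers less familiar with Xen's per-CPU accessors, below is a
minimal, self-contained C sketch of the two patterns the commit message
describes: reading the current CPU's deadline once into a local, and sampling
each remote deadline exactly once while skipping CPUs that are not in the
mask. The array, the bitmask and all of the names are illustrative stand-ins,
not Xen's per-CPU machinery or the real HPET code.)

/*
 * Minimal, self-contained sketch of the two patterns described above.
 * The array, the bitmask handling and all of the names are illustrative
 * stand-ins, not Xen's per-CPU machinery or the real HPET code.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

/* Forced single read, in the spirit of Xen's ACCESS_ONCE(). */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Stand-in for the per-CPU timer_deadline variable. */
static uint64_t timer_deadline[NR_CPUS];

/*
 * Pattern 1: the current CPU's deadline is only written by that CPU, so
 * read it once into a local and reuse the local instead of going through
 * the per-CPU accessor (and its hidden address computation) repeatedly.
 */
static void broadcast_enter(unsigned int cpu, uint64_t *next_event)
{
    uint64_t deadline = timer_deadline[cpu];  /* single read */

    if ( deadline && deadline < *next_event )
        *next_event = deadline;               /* "reprogram" the channel */
}

/*
 * Pattern 2: when scanning remote CPUs, skip those not waiting for the
 * broadcast and sample each remaining deadline exactly once.
 */
static uint64_t next_broadcast_event(uint32_t waiting_mask, uint64_t now)
{
    uint64_t next = UINT64_MAX;
    unsigned int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
    {
        uint64_t deadline;

        if ( !(waiting_mask & (1u << cpu)) )
            continue;                         /* not in the mask: no read */

        deadline = ACCESS_ONCE(timer_deadline[cpu]);
        if ( deadline > now && deadline < next )
            next = deadline;
    }
    return next;
}

int main(void)
{
    uint64_t next_event = 500;

    timer_deadline[1] = 300;
    timer_deadline[3] = 200;

    broadcast_enter(1, &next_event);
    printf("channel reprogrammed to %" PRIu64 "\n", next_event);
    printf("next broadcast at %" PRIu64 "\n",
           next_broadcast_event(0x0a /* cpus 1 and 3 */, 100));
    return 0;
}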

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with one nit:

> @@ -714,9 +714,12 @@ void hpet_broadcast_enter(void)
>      cpumask_set_cpu(cpu, ch->cpumask);
>  
>      spin_lock(&ch->lock);
> -    /* reprogram if current cpu expire time is nearer */
> -    if ( per_cpu(timer_deadline, cpu) < ch->next_event )
> -        reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
> +    /*
> +     * reprogram if current cpu expire time is nearer.  deadline is never
> +     * written by a remote cpu, so the value read earlier is still valid.
> +     */

Comments should start with an upper case letter.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

