
Re: [Xen-devel] Re: [PATCH] x86/hpet: add a lock when cpu clears cpumask in hpet_broadcast_exit()



>>> On 17.04.18 at 07:44, <DavidWang@xxxxxxxxxxx> wrote:
> From: Jan Beulich <JBeulich@xxxxxxxx>
> Sent: 16 April 2018, 19:48
>>>> On 16.04.18 at 10:00, <Davidwang@xxxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hpet.c
>> +++ b/xen/arch/x86/hpet.c
>> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>>      if ( !reprogram_timer(deadline) )
>>          raise_softirq(TIMER_SOFTIRQ);
>>  
>> +    spin_lock_irq(&ch->lock);
>>      cpumask_clear_cpu(cpu, ch->cpumask);
>> +    spin_unlock_irq(&ch->lock);
> 
> Rather than this, how about eliminating the cpumask_empty() call
> in favor of just the cpumask_first() one in hpet_detach_channel()
> (with a local variable storing the intermediate result)? Or if acquiring
> the locking can't be avoided here, you would perhaps better not
> drop it before calling hpet_detach_channel() (which has only this
> single call site and hence would be straightforward to adjust).
> 
> [DavidWang]: Experiments confirmed that storing the intermediate result in
> a local variable solves the problem. On the one hand, a local variable is
> more efficient than adding a lock; on the other hand, it is less obvious to
> the reader. In fact, hpet_detach_channel() already takes ch->lock.
> Could we instead move the lock acquisition (currently inside
> hpet_detach_channel()) back so that it covers the cpumask_clear_cpu() call,
> and drop it from hpet_detach_channel() itself?
> Looking forward to your suggestion.

I'd certainly prefer the local variable approach; I'm not sure why you think
it hurts readability. Quite the inverse, I would say: it eliminates the
problematic interdependency between the cpumask_empty() and cpumask_first()
calls. Only if there is a _technical_ reason speaking against this approach
would I view the revised locking as a viable alternative.

Jan
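
For illustration, a minimal sketch of the local-variable approach discussed
above. This is not the actual code from xen/arch/x86/hpet.c and not a
standalone program; the surrounding control flow, the HPET_EVT_USED_BIT flag
and the set_channel_irq_affinity() helper are assumptions about how
hpet_detach_channel() is structured. The point is that reading ch->cpumask
once removes the window in which another CPU's unlocked cpumask_clear_cpu()
in hpet_broadcast_exit() can make cpumask_first() disagree with an earlier
cpumask_empty() check.

    /* Sketch only: shape of hpet_detach_channel() is assumed, not quoted. */
    static void hpet_detach_channel(unsigned int cpu,
                                    struct hpet_event_channel *ch)
    {
        spin_lock_irq(&ch->lock);

        per_cpu(cpu_bc_channel, cpu) = NULL;

        if ( cpu != ch->cpu )
            spin_unlock_irq(&ch->lock);
        else
        {
            /* Read the mask once; an empty mask yields next >= nr_cpu_ids. */
            unsigned int next = cpumask_first(ch->cpumask);

            if ( next >= nr_cpu_ids )
            {
                /* No CPU left on this channel: release it. */
                ch->cpu = -1;
                clear_bit(HPET_EVT_USED_BIT, &ch->flags); /* assumed flag */
                spin_unlock_irq(&ch->lock);
            }
            else
            {
                /* Hand the channel over to one of the remaining CPUs. */
                ch->cpu = next;
                set_channel_irq_affinity(ch);             /* assumed helper */
                local_irq_enable();
            }
        }
    }

With this shape, the two reads of ch->cpumask can no longer disagree, so the
unlocked cpumask_clear_cpu() in hpet_broadcast_exit() is tolerable and the
extra spin_lock_irq()/spin_unlock_irq() pair from the original patch becomes
unnecessary.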


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

