
Re: [Xen-devel] [PATCH] x86/hpet: add a lock when a CPU clears the cpumask in hpet_broadcast_exit()



>>> On 16.04.18 at 10:00, <Davidwang@xxxxxxxxxxx> wrote:
> Because of hpet_get_channel(), CPUs may share an in-use channel.
> So a core shouldn't clear its bit in the channel's cpumask while
> another core is looking up the first CPU in that mask. If core zero
> and core one share a channel, the cpumask is 0x3. If core zero
> clears its bit between core one's cpumask_empty() and cpumask_first()
> calls, cpumask_first() returns nr_cpu_ids, which then triggers
> ASSERT(cpu < nr_cpu_ids).

I can see your point, but that's in hpet_detach_channel() afaics,
which your description doesn't mention at all. And the assertion
would - afaict - happen through hpet_detach_channel() ->
set_channel_irq_affinity() -> cpumask_of() (as of e8bf5addc9).
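
For clarity, the tail of hpet_detach_channel() looks roughly like the
sketch below (paraphrased from memory of xen/arch/x86/hpet.c, so
details may differ); the problem is the two separate reads of
ch->cpumask while hpet_broadcast_exit() clears its bit without
holding ch->lock:

    /* Sketch only - not a verbatim quote of hpet.c. */
    else if ( cpumask_empty(ch->cpumask) )   /* 1st read: not empty */
    {
        ch->cpu = -1;
        clear_bit(HPET_EVT_USED_BIT, &ch->flags);
        spin_unlock_irq(&ch->lock);
    }
    else
    {
        /* 2nd read: if the last set bit was cleared in between (by an
         * unlocked cpumask_clear_cpu() in hpet_broadcast_exit()),
         * this returns nr_cpu_ids ... */
        ch->cpu = cpumask_first(ch->cpumask);
        /* ... and cpumask_of(ch->cpu) inside set_channel_irq_affinity()
         * then triggers ASSERT(cpu < nr_cpu_ids). */
        set_channel_irq_affinity(ch);
        local_irq_enable();
    }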

Please realize that it helps review quite a bit if you write concise
descriptions for your changes.

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -740,7 +740,9 @@ void hpet_broadcast_exit(void)
>      if ( !reprogram_timer(deadline) )
>          raise_softirq(TIMER_SOFTIRQ);
>  
> +    spin_lock_irq(&ch->lock);
>      cpumask_clear_cpu(cpu, ch->cpumask);
> +    spin_unlock_irq(&ch->lock);

Rather than this, how about eliminating the cpumask_empty() call
in favor of just the cpumask_first() one in hpet_detach_channel()
(with a local variable storing the intermediate result)? Or, if
taking the lock here can't be avoided, it would perhaps be better
not to drop it before calling hpet_detach_channel() (which has only
this single call site and hence would be straightforward to adjust).
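
Concretely, the first suggestion would be something along these lines
(again only a sketch, assuming the surrounding code is as in the
excerpt above; "next" is a made-up local variable name):

    unsigned int next;
    ...
    else if ( (next = cpumask_first(ch->cpumask)) >= nr_cpu_ids )
    {
        /* Single read of the mask: a concurrent, unlocked
         * cpumask_clear_cpu() can no longer slip in between an
         * "empty?" check and a separate cpumask_first() call. */
        ch->cpu = -1;
        clear_bit(HPET_EVT_USED_BIT, &ch->flags);
        spin_unlock_irq(&ch->lock);
    }
    else
    {
        ch->cpu = next;
        set_channel_irq_affinity(ch);
        local_irq_enable();
    }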

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

