
[Xen-devel] Re: [PATCH] cpuidle: add comments for hpet cpumask_lock usage



Thanks! Jan

>>> On 18.06.10 at 05:50, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:
> cpuidle: add comments for hpet cpumask_lock usage
> 
> Signed-off-by: Wei Gang <gang.wei@xxxxxxxxx>
> 
> diff -r 764e41b09017 xen/arch/x86/hpet.c
> --- a/xen/arch/x86/hpet.c     Thu Jun 17 08:53:12 2010 +0100
> +++ b/xen/arch/x86/hpet.c     Fri Jun 18 11:41:28 2010 +0800
> @@ -34,6 +34,17 @@ struct hpet_event_channel
>      int           shift;
>      s_time_t      next_event;
>      cpumask_t     cpumask;
> +    /*
> +     * cpumask_lock prevents the HPET interrupt handler from accessing
> +     * another CPU's timer_deadline_start/end after that CPU has cleared
> +     * its bit in cpumask -- a cleared bit means the CPU has woken up, so
> +     * accessing its timer_deadline_xxx from other CPUs is no longer safe.
> +     * The lock does not protect cpumask itself, so set operations need
> +     * not take it.  Multiple CPUs may clear their bits simultaneously,
> +     * since cpu_clear() is atomic; hence hpet_broadcast_exit() takes the
> +     * read lock to clear its bit, while handle_hpet_broadcast() must take
> +     * the write lock to read cpumask and access timer_deadline_xxx.
> +     */
>      rwlock_t      cpumask_lock;
>      spinlock_t    lock;
>      void          (*event_handler)(struct hpet_event_channel *);
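
The locking protocol described in the comment can be sketched as a small standalone program: a waking CPU clears its cpumask bit under the read lock, while the HPET interrupt handler takes the write lock so that no CPU can wake up (and invalidate its timer_deadline_xxx) while the handler is reading it. This is only an illustrative sketch, not Xen code: pthread rwlocks and C11 atomics stand in for Xen's rwlock_t and cpumask_t, and the function bodies, CPU count and deadline values are invented; only the names hpet_broadcast_enter/exit() and handle_hpet_broadcast() mirror the ones mentioned in the comment.

/* Standalone sketch of the cpumask_lock protocol (illustration only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

static pthread_rwlock_t cpumask_lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_ulong cpumask;                 /* one bit per sleeping CPU  */
static uint64_t timer_deadline[NR_CPUS];     /* per-CPU wakeup deadlines  */

/* Sleeping CPU enters broadcast mode: publish its deadline, then set its
 * bit.  The set is atomic, so no lock is taken (matches "set operations
 * need not take it" in the comment). */
static void hpet_broadcast_enter(int cpu, uint64_t deadline)
{
    timer_deadline[cpu] = deadline;
    atomic_fetch_or(&cpumask, 1UL << cpu);
}

/* CPU wakes up: clear its bit under the *read* lock.  Multiple CPUs may
 * do this concurrently because the clear itself is atomic; the read lock
 * only orders the clear against the interrupt handler below. */
static void hpet_broadcast_exit(int cpu)
{
    pthread_rwlock_rdlock(&cpumask_lock);
    atomic_fetch_and(&cpumask, ~(1UL << cpu));
    pthread_rwlock_unlock(&cpumask_lock);
}

/* HPET interrupt handler: takes the *write* lock so that no CPU can clear
 * its bit (i.e. wake up) while its timer_deadline[] entry is being read.
 * A CPU whose bit is still set is guaranteed not to have woken up yet. */
static void handle_hpet_broadcast(uint64_t now)
{
    pthread_rwlock_wrlock(&cpumask_lock);
    unsigned long mask = atomic_load(&cpumask);
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if ((mask & (1UL << cpu)) && timer_deadline[cpu] <= now)
            printf("would wake cpu%d (deadline %llu)\n",
                   cpu, (unsigned long long)timer_deadline[cpu]);
    pthread_rwlock_unlock(&cpumask_lock);
}

int main(void)
{
    hpet_broadcast_enter(1, 100);
    hpet_broadcast_enter(2, 300);
    handle_hpet_broadcast(200);   /* reports cpu1 only */
    hpet_broadcast_exit(1);
    return 0;
}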



