xen-devel

[Xen-devel] Is hpet's cpumask_lock really needed?

To: "Gang Wei" <gang.wei@xxxxxxxxx>
Subject: [Xen-devel] Is hpet's cpumask_lock really needed?
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Thu, 24 Mar 2011 13:55:14 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Jimmy,

I know I had asked about this before, prompting you to add an
extensive comment. Nevertheless, looking over the (few) rwlock
users, I'm now puzzled by the need for a lock here.

According to the comment in struct hpet_event_channel, this is to
prevent accessing a CPU's timer_deadline after that CPU was cleared
from the cpumask. I wonder, however, whether the whole thing couldn't
be done without any lock altogether - hpet_broadcast_exit() could
just clear the bit, and handle_hpet_broadcast() could read
timer_deadline before looking at the mask a second time (the
cpumask bit was already found set by the surrounding loop, and by
the time "mask" and "next_event" actually get used they may have
become stale even with the current implementation).
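
To illustrate the ordering argument in isolation, here is a minimal
standalone sketch (using C11 atomics in place of the hypervisor's
cpumask/rmb() primitives; the names mask, deadline, cpu_wake, and
cpu_expired are made up for the example, not taken from the tree):

#include <stdatomic.h>
#include <stdbool.h>

#define NR_CPUS 8

static _Atomic unsigned long mask;      /* stand-in for ch->cpumask */
static _Atomic long deadline[NR_CPUS];  /* stand-in for per-CPU timer_deadline */

/* Waking CPU (cf. hpet_broadcast_exit()): just clear the bit atomically. */
static void cpu_wake(int cpu)
{
    atomic_fetch_and(&mask, ~(1UL << cpu));
}

/* Interrupt handler (cf. the loop in handle_hpet_broadcast()): snapshot
 * the deadline first, then re-check the mask bit, so a deadline is only
 * acted upon if the bit was still seen set after reading it.  The value
 * may be stale by the time it gets used, but as noted above that is
 * equally true when taking the lock. */
static bool cpu_expired(int cpu, long now)
{
    long d = atomic_load(&deadline[cpu]);       /* read deadline ...   */
    atomic_thread_fence(memory_order_acquire);  /* ... before the mask */
    if ( !(atomic_load(&mask) & (1UL << cpu)) )
        return false;
    return d <= now;
}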

Draft patch below.

Jan

--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -34,18 +34,6 @@ struct hpet_event_channel
     int           shift;
     s_time_t      next_event;
     cpumask_t     cpumask;
-    /*
-     * cpumask_lock is used to prevent hpet intr handler from accessing other
-     * cpu's timer_deadline after the other cpu's mask was cleared --
-     * mask cleared means cpu waken up, then accessing timer_deadline from
-     * other cpu is not safe.
-     * It is not used for protecting cpumask, so set ops needn't take it.
-     * Multiple cpus clear cpumask simultaneously is ok due to the atomic
-     * feature of cpu_clear, so hpet_broadcast_exit() can take read lock for 
-     * clearing cpumask, and handle_hpet_broadcast() have to take write lock 
-     * for read cpumask & access timer_deadline.
-     */
-    rwlock_t      cpumask_lock;
     spinlock_t    lock;
     void          (*event_handler)(struct hpet_event_channel *);
 
@@ -199,17 +187,18 @@ again:
     /* find all expired events */
     for_each_cpu_mask(cpu, ch->cpumask)
     {
-        write_lock_irq(&ch->cpumask_lock);
+        s_time_t deadline;
 
-        if ( cpu_isset(cpu, ch->cpumask) )
-        {
-            if ( per_cpu(timer_deadline, cpu) <= now )
-                cpu_set(cpu, mask);
-            else if ( per_cpu(timer_deadline, cpu) < next_event )
-                next_event = per_cpu(timer_deadline, cpu);
-        }
+        rmb();
+        deadline = per_cpu(timer_deadline, cpu);
+        rmb();
+        if ( !cpu_isset(cpu, ch->cpumask) )
+            continue;
 
-        write_unlock_irq(&ch->cpumask_lock);
+        if ( deadline <= now )
+            cpu_set(cpu, mask);
+        else if ( deadline < next_event )
+            next_event = deadline;
     }
 
     /* wakeup the cpus which have an expired event. */
@@ -602,7 +591,6 @@ void __init hpet_broadcast_init(void)
         hpet_events[i].shift = 32;
         hpet_events[i].next_event = STIME_MAX;
         spin_lock_init(&hpet_events[i].lock);
-        rwlock_init(&hpet_events[i].cpumask_lock);
         wmb();
         hpet_events[i].event_handler = handle_hpet_broadcast;
     }
@@ -729,9 +717,7 @@ void hpet_broadcast_exit(void)
     if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
         raise_softirq(TIMER_SOFTIRQ);
 
-    read_lock_irq(&ch->cpumask_lock);
     cpu_clear(cpu, ch->cpumask);
-    read_unlock_irq(&ch->cpumask_lock);
 
     if ( !(ch->flags & HPET_EVT_LEGACY) )
     {



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
