WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 21 Apr 2010 10:52:38 +0100
Cc:
Delivery-date: Wed, 21 Apr 2010 02:53:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <F26D193E20BBDC42A43B611D1BDEDE710270AE431D@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrgS919AC3RHXCET4295IJeXmmgRwAO/3swAAIUDKAAASh2WgADOhngACIUzXUAAMoQoAAB26RvAAAUX9AAANuwEg==
Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
User-agent: Microsoft-Entourage/12.24.0.100205
On 21/04/2010 10:36, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>> Okay, then CPU A executes hpet_broadcast_enter() and programs the HPET
>> channel for its timeout X. Meanwhile concurrently executing
>> handle_hpet_broadcast misses CPU A but finds some other CPU B with
>> timeout Y much later than X, and erroneously programs the HPET
>> channel with Y, causing CPU A to miss its deadline by an arbitrary
>> amount. 
> 
> It is also not possible. handle_hpet_broadcast reprograms the HPET only when
> next_event < ch->next_event. Here next_event is Y, and ch->next_event is X. :-)

Ah yes, I spotted that just after I replied! Okay, I'm almost convinced now,
but...

>> I dare say I can carry on finding races. :-)
> 
> Please go on... The two race conditions you mentioned were already considered
> before I resent the patch. :-D
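
(For reference, the guard Gang is pointing at is roughly of the following
shape. This is a simplified, self-contained sketch rather than the real
xen/arch/x86/hpet.c code: the struct layout, the stand-in
reprogram_hpet_evt_channel(), and the printf()s are illustrative only.)

/*
 * Sketch of the ordering argument: the broadcast handler only ever moves
 * the channel deadline *earlier*, so a deadline X programmed concurrently
 * by hpet_broadcast_enter() cannot be overwritten with a later deadline Y.
 */
#include <stdint.h>
#include <stdio.h>

typedef int64_t s_time_t;               /* placeholder for Xen's s_time_t */

struct hpet_event_channel {
    s_time_t next_event;                /* currently programmed deadline  */
};

/* Hypothetical stand-in for reprogramming the HPET comparator. */
static void reprogram_hpet_evt_channel(struct hpet_event_channel *ch,
                                       s_time_t expire)
{
    ch->next_event = expire;
    printf("HPET reprogrammed for %lld\n", (long long)expire);
}

/* The guard: only reprogram if the new deadline is earlier than the one
 * already programmed. */
static void broadcast_reprogram(struct hpet_event_channel *ch,
                                s_time_t next_event)
{
    if ( next_event < ch->next_event )
        reprogram_hpet_evt_channel(ch, next_event);
}

int main(void)
{
    struct hpet_event_channel ch = { .next_event = 100 /* CPU A's X */ };

    broadcast_reprogram(&ch, 500);      /* CPU B's later Y: refused      */
    broadcast_reprogram(&ch, 50);       /* an earlier deadline: accepted */
    printf("next_event is now %lld\n", (long long)ch.next_event);
    return 0;
}

Because the deadline can only move earlier, the broadcast handler cannot
push CPU A's deadline out to Y in the scenario I described above.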

Okay, one concern I still have is about possible races around
cpuidle_wakeup_mwait(). It makes use of a cpumask, cpuidle_mwait_flags, to
avoid sending an IPI to cpus in the mask. However, there is nothing to stop
the CPU from having cleared itself from that cpumask before cpuidle does the
write to softirq_pending. In that case, even assuming the CPU is now
non-idle and so the wakeup is spurious, a subsequent attempt to
raise_softirq(TIMER_SOFTIRQ) will incorrectly not send an IPI, because the
flag is already set in softirq_pending?
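
Spelled out as a self-contained sketch (the names mirror raise_softirq()
and cpuidle_wakeup_mwait(), but the bodies here are simplified stand-ins,
not the real softirq/cpuidle code):

#include <stdio.h>

#define TIMER_SOFTIRQ  0

static unsigned long softirq_pending;   /* CPU A's pending mask               */
static int cpu_in_mwait_mask = 1;       /* CPU A's bit in cpuidle_mwait_flags */

/* Simplified raise_softirq(): the IPI is only sent if the bit was clear. */
static void raise_softirq(unsigned int nr)
{
    unsigned long old = softirq_pending;

    softirq_pending |= 1UL << nr;
    if ( old & (1UL << nr) )
        printf("raise_softirq: bit already set, no IPI sent\n");
    else
        printf("raise_softirq: IPI sent\n");
}

int main(void)
{
    /* 1. CPU A leaves mwait_idle_with_hints() and clears its mask bit.    */
    cpu_in_mwait_mask = 0;
    printf("CPU A left mwait (mask bit now %d)\n", cpu_in_mwait_mask);

    /* 2. cpuidle_wakeup_mwait(), having sampled the mask just before the
     *    clear, still believes CPU A is in mwait: it sets the pending bit
     *    and relies on the monitored write instead of an IPI -- but CPU A
     *    is no longer monitoring, so nothing wakes it.                     */
    softirq_pending |= 1UL << TIMER_SOFTIRQ;

    /* 3. A later raise_softirq(TIMER_SOFTIRQ) finds the bit already set
     *    and skips the IPI too: the wakeup is lost.                        */
    raise_softirq(TIMER_SOFTIRQ);
    return 0;
}

The interesting path is step 3: the IPI that should have kicked CPU A is
never sent, which is exactly the lost wakeup described above.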

This race would be benign under the current locking strategy (without your
patch): the CPU that has left mwait_idle_with_hints() cannot get further
than hpet_broadcast_exit(), because it will spin on ch->lock, and there is a
guaranteed check of softirq_pending at some point after that. However, your
patch removes that synchronisation, because ch->lock is no longer held
across cpuidle_wakeup_mwait().
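
To illustrate why holding ch->lock across the wakeup closes that window,
here is a minimal two-thread sketch (pthreads standing in for the
hypervisor's spinlock; the names and structure are simplified, not the real
handle_hpet_broadcast()/hpet_broadcast_exit()):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define TIMER_SOFTIRQ 0

static pthread_mutex_t ch_lock = PTHREAD_MUTEX_INITIALIZER;  /* ch->lock */
static atomic_int lock_taken;
static unsigned long softirq_pending;                        /* CPU A's  */

/* Broadcast side: the wakeup write is done while ch->lock is held. */
static void *broadcast_side(void *arg)
{
    pthread_mutex_lock(&ch_lock);
    atomic_store(&lock_taken, 1);              /* CPU A may now start exiting */
    usleep(1000);                              /* widen the race window       */
    softirq_pending |= 1UL << TIMER_SOFTIRQ;   /* cpuidle_wakeup_mwait()      */
    pthread_mutex_unlock(&ch_lock);
    return arg;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, broadcast_side, NULL);
    while ( !atomic_load(&lock_taken) )
        ;                                      /* CPU A has just left mwait   */

    /* hpet_broadcast_exit(): spins on ch->lock, so it cannot proceed until
     * the wakeup write above has landed; the check below then sees it.     */
    pthread_mutex_lock(&ch_lock);
    pthread_mutex_unlock(&ch_lock);
    if ( softirq_pending & (1UL << TIMER_SOFTIRQ) )
        printf("pending bit seen after taking ch->lock: wakeup not lost\n");

    pthread_join(t, NULL);
    return 0;
}

With your patch the wakeup write happens outside ch->lock, so nothing
orders it against the exiting CPU's softirq_pending check.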

What do you think to that?

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
