xen-devel

[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 21 Apr 2010 10:25:47 +0100
Cc:
Delivery-date: Wed, 21 Apr 2010 02:26:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <F26D193E20BBDC42A43B611D1BDEDE710270AE42F1@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrgS919AC3RHXCET4295IJeXmmgRwAO/3swAAIUDKAAASh2WgADOhngACIUzXUAAMoQoAAB26Rv
Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
User-agent: Microsoft-Entourage/12.24.0.100205
On 21/04/2010 10:06, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>> It fixes the unsafe accesses to timer_deadline_{start,end}, but I
>> still think this optimisation is misguided and also unsafe. There is
>> nothing to stop new CPUs being added to ch->cpumask after you start
>> scanning ch->cpumask. For example, a new CPU may have a
>> timer_deadline_end greater than ch->next_event, so it does not
>> reprogram the HPET. But handle_hpet_broadcast is already mid-scan and
>> misses this new CPU, so it does not reprogram the HPET either. Hence
>> no timer fires for the new CPU and it misses its deadline.
> 
> This will not happen. ch->next_event has already been set to STIME_MAX before
> the scan of ch->cpumask starts, so the new CPU with the smallest
> timer_deadline_end will reprogram the HPET successfully.

Okay, then CPU A executes hpet_broadcast_enter() and programs the HPET
channel for its timeout X. Meanwhile, a concurrently executing
handle_hpet_broadcast misses CPU A but finds some other CPU B with a timeout Y
much later than X, and erroneously programs the HPET channel with Y, causing
CPU A to miss its deadline by an arbitrary amount.
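
To make the interleaving concrete, here is a minimal compile-and-run sketch
of that ordering. The names below (ch_next_event, deadline[], program_hpet)
are simplified stand-ins for illustration, not the actual hpet.c structures;
only the ordering of the writes to the channel's next_event matters.

/*
 * Sketch of the race: handle_hpet_broadcast() resets next_event and
 * scans ch->cpumask; CPU A joins and programs X in the middle of the
 * scan; the scan never saw A and overwrites the channel with Y > X.
 */
#include <stdio.h>
#include <stdint.h>

#define STIME_MAX INT64_MAX

static int64_t deadline[2];      /* per-CPU timer_deadline_end stand-in */
static int64_t ch_next_event;    /* stand-in for ch->next_event         */
static int64_t hpet_comparator;  /* what the HPET channel is armed for  */

static void program_hpet(int64_t t)
{
    ch_next_event = t;
    hpet_comparator = t;
}

int main(void)
{
    int64_t X = 1000, Y = 5000;  /* CPU A's deadline X is well before Y */

    deadline[1] = Y;             /* CPU B is already in ch->cpumask     */

    /* 1. handle_hpet_broadcast() starts: reset next_event and begin
     *    scanning; it has already passed CPU A's (still clear) bit.    */
    ch_next_event = STIME_MAX;

    /* 2. CPU A runs hpet_broadcast_enter(): joins the mask and, since
     *    X < ch->next_event, programs the channel for X.               */
    deadline[0] = X;
    if (deadline[0] < ch_next_event)
        program_hpet(deadline[0]);

    /* 3. The scan finishes without ever seeing CPU A; the earliest
     *    deadline it found is Y, so it programs the channel with Y.    */
    program_hpet(deadline[1]);

    printf("HPET armed for %lld, but CPU A needs %lld\n",
           (long long)hpet_comparator, (long long)X);
    return 0;
}

The last write clobbers the X that CPU A programmed, so CPU A's wakeup slips
out to Y.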

I dare say I can carry on finding races. :-)

> I think it is another story. Enlarging timer_slop is one way to align &
> reduce break events; it does help save power, but possibly brings larger
> latency. What I am trying to address here is how to reduce spin_lock
> overheads in the idle entry/exit path. The spin_lock overheads, along with
> other overheads, caused >25% cpu utilization in a system with 32 pcpus /
> 64 vcpus while all guests were idle.
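
For what it's worth, the shape of that optimisation looks roughly like the
sketch below. The types and names (struct channel, broadcast_enter_*) are
simplified stand-ins and assumptions on my part, not the actual patch: the
baseline takes the channel lock on every idle entry, while the "short"
variant only takes it when this CPU's deadline is earlier than what the
channel is already armed for.

#include <pthread.h>
#include <stdint.h>

struct channel {
    pthread_spinlock_t lock;
    int64_t next_event;              /* stand-in for ch->next_event */
};

/* Baseline: lock held around the whole idle-entry update. */
static void broadcast_enter_locked(struct channel *ch, int64_t deadline)
{
    pthread_spin_lock(&ch->lock);
    if (deadline < ch->next_event)
        ch->next_event = deadline;   /* + reprogram the HPET here */
    pthread_spin_unlock(&ch->lock);
}

/* Shortened holding time: skip the lock on the common path where the
 * channel is already armed earlier than this CPU's deadline. */
static void broadcast_enter_short(struct channel *ch, int64_t deadline)
{
    if (deadline >= ch->next_event)  /* racy, lock-free peek */
        return;
    pthread_spin_lock(&ch->lock);
    if (deadline < ch->next_event)
        ch->next_event = deadline;   /* + reprogram the HPET here */
    pthread_spin_unlock(&ch->lock);
}

int main(void)
{
    struct channel ch = { .next_event = INT64_MAX };
    pthread_spin_init(&ch.lock, PTHREAD_PROCESS_PRIVATE);
    broadcast_enter_locked(&ch, 2000);
    broadcast_enter_short(&ch, 1000);
    pthread_spin_destroy(&ch.lock);
    return 0;
}

Whether that unlocked peek can be made safe against a concurrently running
handle_hpet_broadcast() is exactly the question in the races above.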

So far it's looked to me like a correctness/performance tradeoff. :-D

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
