
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



>>> "Tian, Kevin" <kevin.tian@xxxxxxxxx> 05.02.10 10:00 >>>
>>From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx] 
>>Sent: February 5, 2010 16:49
>>
>>Yes, this patch works for us too. So a non-hacky version of it would be
>>appreciated.
>>
>>I also meanwhile tried out the idea of reducing the contention on
>>xtime_lock (attached for reference). Things appear to work fine, but
>>there is an obvious problem with it (so far with no obvious
>>explanation to me): the number of timer interrupts on CPUs not on
>>duty to run do_timer() and the like is increasing significantly,
>>with spikes of over 100,000 per second. I'm investigating this, but
>>of course any idea any of you might have about what could be causing
>>this would be very welcome.
>>
>
>Forgive my poor English. From your patch, only the CPU on duty invokes
>do_timer() to update the global timestamp. Why, in your test, do the
>CPUs 'not on duty' show such frequent timer activity? I may be reading
>it wrong. :-(

If you look at the patch, I added extra statistics for those timer
interrupts that occur when a CPU is "on duty" (recorded as IRQ0,
which is otherwise unused) and when it is not (recorded as MCEs,
since those hopefully(!!!) won't occur either, and in any case not
at a high rate).

From that I know that the rate of interrupts (not the rate of do_timer()
invocations) is much higher on the not-on-duty CPUs, but roughly the
same as without the patch on the on-duty one.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

