xen-devel

RE: [Xen-devel] cpuidle causing Dom0 soft lockups

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: RE: [Xen-devel] cpuidle causing Dom0 soft lockups
From: "Yu, Ke" <ke.yu@xxxxxxxxx>
Date: Tue, 16 Feb 2010 12:59:42 +0800
Accept-language: en-US
Cc: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 15 Feb 2010 21:00:22 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C79F35F3.A1BA%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <8B81FACE836F9248894A7844CC0BA8142B04198EDC@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C79F35F3.A1BA%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acqonk9dHD7ZMlF8SsywDNho7VHZlADtTTNwAIRfbukAFhpjkA==
Thread-topic: [Xen-devel] cpuidle causing Dom0 soft lockups

Thanks for the refinement.

For the ASSERT, the reason is that the vCPU there is runnable, and a runnable
vCPU should always be non-urgent. Consider a vCPU changing from
RUNSTATE_blocked/RUNSTATE_offline to RUNSTATE_runnable via vcpu_wake:
vcpu_wake calls vcpu_runstate_change, which in turn calls
vcpu_urgent_count_update, so v->is_urgent (and the per-CPU urgent count) is
updated accordingly. vcpu_wake runs under the scheduler lock, so the update is
atomic with respect to the runstate change.
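
To make that concrete, here is a minimal sketch of the update path. The exact
field and helper layout below is illustrative only, not necessarily what the
final patch uses, and only the event-channel-polling case is shown:

/* Sketch only -- not the actual patch.  "Urgent" means the vCPU went
 * to sleep with interrupts disabled or while polling an event channel,
 * so its pCPU should avoid deep C states.  The interrupts-disabled
 * test is omitted below for brevity; only the poll case is shown. */
static inline void vcpu_urgent_count_update(struct vcpu *v)
{
    if ( is_idle_vcpu(v) )
        return;

    if ( unlikely(v->is_urgent) )
    {
        /* No longer blocked and polling => no longer urgent. */
        if ( !(v->pause_flags & VPF_blocked) ||
             !test_bit(v->vcpu_id, v->domain->poll_mask) )
        {
            v->is_urgent = 0;
            atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
        }
    }
    else
    {
        /* Blocked while polling an event channel => urgent. */
        if ( unlikely(v->pause_flags & VPF_blocked) &&
             unlikely(test_bit(v->vcpu_id, v->domain->poll_mask)) )
        {
            v->is_urgent = 1;
            atomic_inc(&per_cpu(schedule_data, v->processor).urgent_count);
        }
    }
}

vcpu_runstate_change() calls this helper while vcpu_wake()/vcpu_sleep_nosync()
hold the per-CPU scheduler lock, so is_urgent and urgent_count always change
together with the runstate. By the time the credit scheduler sees a
RUNSTATE_runnable vCPU, is_urgent has therefore already been cleared, and the
ASSERT holds.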

Best Regards
Ke

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>Sent: Tuesday, February 16, 2010 1:34 AM
>To: Yu, Ke; Jan Beulich
>Cc: Tian, Kevin; xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-devel] cpuidle causing Dom0 soft lockups
>
>Attached is a better version of your patch (I think). I haven't applied it
>because I don't see why the ASSERT() in sched_credit.c is correct. How do
>you know for sure that !v->is_urgent there (and therefore avoid urgent_count
>manipulation)?
>
> -- Keir
>
>On 13/02/2010 02:28, "Yu, Ke" <ke.yu@xxxxxxxxx> wrote:
>
>> Hi Jan,
>>
>> The attached is the updated patch per your suggestion. Generally, this patch
>> uses a per-CPU urgent vCPU count to indicate whether a CPU should enter a
>> deep C state. It introduces a per-vCPU urgent flag and updates the urgent
>> vCPU count whenever the vCPU state changes. Could you please take a look? Thanks
>>
>> Regards
>> Ke
>>
>>> -----Original Message-----
>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
>>> Sent: Monday, February 08, 2010 5:08 PM
>>> To: Yu, Ke
>>> Cc: Keir Fraser; Tian, Kevin; xen-devel@xxxxxxxxxxxxxxxxxxx
>>> Subject: RE: [Xen-devel] cpuidle causing Dom0 soft lockups
>>>
>>>>>> "Yu, Ke" <ke.yu@xxxxxxxxx> 07.02.10 16:36 >>>
>>>> The attached is the updated patch. It has two changes:
>>>> - change the logic from local irq disabled *and* poll event to local irq
>>>> disabled *or* poll event
>>>
>>> Thanks.
>>>
>>>> - use a per-CPU vCPU list to iterate over the vCPUs, which is more
>>>> scalable. The original scheduler does not provide such a list, so this
>>>> patch implements it in the scheduler code.
>>>
>>> I'm still not really happy with that solution. I'd rather say that e.g.
>>> vcpu_sleep_nosync() should set a flag in the vcpu structure indicating
>>> whether that one is "urgent", and the scheduler should just maintain
>>> a counter of "urgent" vCPU-s per pCPU. Setting the flag when a vCPU
>>> is put to sleep guarantees that it won't be mis-treated if it got woken
>>> by the time acpi_processor_idle() looks at it (or at least the window
>>> would be minimal - not sure if it can be eliminated completely). Plus
>>> not having to traverse a list is certainly better for scalability, not
>>> least since you're traversing a list that (necessarily) includes sleeping
>>> vCPU-s (i.e. the ones that shouldn't affect the performance/responsiveness
>>> of the system).
>>>
>>> But in the end it would certainly depend much more on Keir's view on
>>> it than on mine...
>>>
>>> Jan
>>
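
To summarise the consumer side of Jan's suggestion above: with the per-pCPU
counter in place, the idle driver only needs an O(1) check before entering a
deep C state. A rough sketch follows; the helper name and its placement in
acpi_processor_idle() are illustrative, not the final code:

/* Sketch: what the idle path checks instead of walking a vCPU list. */
static inline int sched_has_urgent_vcpu(void)
{
    return atomic_read(&this_cpu(schedule_data).urgent_count);
}

/* In acpi_processor_idle(), before committing to a state deeper than C1:
 *
 *     if ( sched_has_urgent_vcpu() )
 *         fall back to C1 to keep wakeup latency low;
 *
 * a single counter read, rather than traversing a per-CPU vCPU list
 * that would also include sleeping, non-urgent vCPUs. */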


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel