
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



Thanks for the refinement.

For the ASSERT, the reason is that a runnable vCPU should be non-urgent. Consider 
a vCPU changing from RUNSTATE_blocked/RUNSTATE_offline to RUNSTATE_runnable via 
vcpu_wake: vcpu_wake calls vcpu_runstate_change, which in turn calls 
vcpu_urgent_count_update, so v->is_urgent is updated accordingly. Since vcpu_wake 
runs under the scheduler lock, the flag update is atomic with respect to the scheduler.
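
To make that concrete, the helper is basically of the following shape. This is
only a sketch of the idea, not the exact patch: vcpu_urgent_count_update and
v->is_urgent are the names referred to above, while the urgent_count field in
schedule_data and the sched_has_urgent_vcpu() accessor are just illustrative.

/* Sketch: the runstate-change path flips v->is_urgent and keeps a
 * per-pCPU counter in sync.  Callers (vcpu_wake, vcpu_sleep_nosync,
 * ...) hold the scheduler lock, so flag and counter stay consistent. */
static inline void vcpu_urgent_count_update(struct vcpu *v)
{
    if ( is_idle_vcpu(v) )
        return;

    if ( unlikely(v->is_urgent) )
    {
        /* No longer blocked in an event-channel poll => not urgent. */
        if ( !(v->pause_flags & VPF_blocked) ||
             !test_bit(v->vcpu_id, v->domain->poll_mask) )
        {
            v->is_urgent = 0;
            atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
        }
    }
    else
    {
        /* Blocked while polling an event channel => urgent. */
        if ( unlikely(v->pause_flags & VPF_blocked) &&
             unlikely(test_bit(v->vcpu_id, v->domain->poll_mask)) )
        {
            v->is_urgent = 1;
            atomic_inc(&per_cpu(schedule_data, v->processor).urgent_count);
        }
    }
}

/* The cpuidle side then only needs a cheap per-pCPU check before
 * picking a deep C state: */
int sched_has_urgent_vcpu(void)
{
    return atomic_read(&per_cpu(schedule_data, smp_processor_id()).urgent_count);
}

In other words, by the time the vCPU is marked runnable, vcpu_urgent_count_update
has already dropped is_urgent together with the count, which is why the credit
scheduler can expect !v->is_urgent there, and acpi_processor_idle() only needs to
test the per-CPU count instead of walking a vCPU list.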

Best Regards
Ke

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>Sent: Tuesday, February 16, 2010 1:34 AM
>To: Yu, Ke; Jan Beulich
>Cc: Tian, Kevin; xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-devel] cpuidle causing Dom0 soft lockups
>
>Attached is a better version of your patch (I think). I haven't applied it
>because I don't see why the ASSERT() in sched_credit.c is correct. How do
>you know for sure that !v->is_urgent there (and therefore avoid urgent_count
>manipulation)?
>
> -- Keir
>
>On 13/02/2010 02:28, "Yu, Ke" <ke.yu@xxxxxxxxx> wrote:
>
>> Hi Jan,
>>
>> The attached is the updated patch per your suggestion. In general, this patch
>> uses a per-CPU urgent vCPU count to decide whether a CPU should enter a deep C
>> state. It introduces a per-vCPU urgent flag and updates the urgent vCPU count
>> whenever the vCPU state changes. Could you please take a look? Thanks.
>>
>> Regards
>> Ke
>>
>>> -----Original Message-----
>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
>>> Sent: Monday, February 08, 2010 5:08 PM
>>> To: Yu, Ke
>>> Cc: Keir Fraser; Tian, Kevin; xen-devel@xxxxxxxxxxxxxxxxxxx
>>> Subject: RE: [Xen-devel] cpuidle causing Dom0 soft lockups
>>>
>>>>>> "Yu, Ke" <ke.yu@xxxxxxxxx> 07.02.10 16:36 >>>
>>>> The attached is the updated patch; it has two changes:
>>>> - change the logic from local irq disabled *and* poll event to local irq
>>>>   disabled *or* poll event
>>>
>>> Thanks.
>>>
>>>> - Use a per-CPU vCPU list to iterate over the vCPUs, which is more
>>>>   scalable. The original scheduler does not provide such a list, so this
>>>>   patch implements it in the scheduler code.
>>>
>>> I'm still not really happy with that solution. I'd rather say that e.g.
>>> vcpu_sleep_nosync() should set a flag in the vcpu structure indicating
>>> whether that one is "urgent", and the scheduler should just maintain
>>> a counter of "urgent" vCPU-s per pCPU. Setting the flag when a vCPU
>>> is put to sleep guarantees that it won't be mis-treated if it got woken
>>> by the time acpi_processor_idle() looks at it (or at least the window
>>> would be minimal - not sure if it can be eliminated completely). Plus
>>> not having to traverse a list is certainly better for scalability, not the
>>> least since you're traversing a list (necessarily) including sleeping
>>> vCPU-s (i.e. the ones that shouldn't affect the performance/
>>> responsiveness of the system).
>>>
>>> But in the end it would certainly depend much more on Keir's view on
>>> it than on mine...
>>>
>>> Jan
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

