
Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed



>>> On 16.07.18 at 13:47, <jgross@xxxxxxxx> wrote:
> On 16/07/18 11:17, Jan Beulich wrote:
>>>>> On 13.07.18 at 11:02, <jgross@xxxxxxxx> wrote:
>>> On 11/07/18 14:04, Jan Beulich wrote:
>>>> While I've run into the issue with further patches in place which no
>>>> longer guarantee the per-CPU area to start out as all zeros, the
>>>> CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
>>>> the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
>>>> of schedule_cpu_switch() will trigger the "c != old_pool" assertion
>>>> there.
>>>>
>>>> Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
>>>> should not happen before cpu_disable_scheduler()). Clearing it in
>>>> CPU_DEAD and CPU_DOWN_FAILED would be an option, but would require the
>>>> same piece of code twice. Since the field's value shouldn't matter while the
>>>> CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
>>>> only for other than the suspend/resume case (which gets specially
>>>> handled in cpupool_cpu_remove()).
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>> ---
>>>> TBD: I think it would be better to call schedule_cpu_switch(cpu, NULL)
>>>>      from cpupool_cpu_remove(), but besides that likely being too early
>>>>      (as per above), that function has further prereqs to be met. It
>>>>      also doesn't look as if cpupool_unassign_cpu_helper() could be used
>>>>      there.
>>>>
>>>> --- a/xen/common/cpupool.c
>>>> +++ b/xen/common/cpupool.c
>>>> @@ -778,6 +778,8 @@ static int cpu_callback(
>>>>      {
>>>>      case CPU_DOWN_FAILED:
>>>>      case CPU_ONLINE:
>>>> +        if ( system_state <= SYS_STATE_active )
>>>> +            per_cpu(cpupool, cpu) = NULL;
>>>>          rc = cpupool_cpu_add(cpu);
>>>
>>> Wouldn't it make more sense to clear the field in cpupool_cpu_add(),
>>> which already tests system_state?
>> 
>> Hmm, this may be a matter of taste: I consider the change done here
>> a prereq to calling the function in the first place. As said in the
>> description, I actually think this should come earlier, and it's just that
>> I can't see how to cleanly do so.

You didn't comment on this one at all, yet it matters for how a v2
is supposed to look.
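
For reference, the two placements under discussion (sketches only,
with everything unrelated elided):

    /* Variant 1 (this patch): clear the field in the notifier
     * callback, right before calling cpupool_cpu_add(). */
    case CPU_DOWN_FAILED:
    case CPU_ONLINE:
        if ( system_state <= SYS_STATE_active )
            per_cpu(cpupool, cpu) = NULL;
        rc = cpupool_cpu_add(cpu);

    /* Variant 2 (your suggestion): clear it inside cpupool_cpu_add()
     * itself, next to the system_state check already done there. */
    static int cpupool_cpu_add(unsigned int cpu)
    {
        if ( system_state <= SYS_STATE_active )
            per_cpu(cpupool, cpu) = NULL;
        /* ... rest as before ... */
    }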

>>> Modifying the condition in cpupool_cpu_add() to
>>>
>>>   if ( system_state <= SYS_STATE_active )
>>>
>>> at the same time would have the benefit of catching problems in case
>>> suspending CPUs fails during SYS_STATE_suspend (in that case I'd
>>> expect the first ASSERT in schedule_cpu_switch() to trigger).
>> 
>> You mean the if() there, not the else? If so - how would the "else"
>> body then ever be reached? IOW if anything I could only see the
>> "else" to become "else if ( system_state <= SYS_STATE_active )".
> 
> Bad wording on my side.
> 
> I should have written "the condition in cpupool_cpu_add() should match
> if ( system_state <= SYS_STATE_active )."
> 
> So: "if ( system_state > SYS_STATE_active )", as the test is for the
> other case.

I'd recommend against this, as someone adding a new SYS_STATE_*
past suspend/resume would quite likely miss this one. The strong
ordering of states imo should only be relied on for active and lower
states. But yes, I could see the if() there becoming a suspend ||
resume check to address the problem you describe.
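
Concretely, something along these lines (a sketch only, with the
existing suspend/resume bookkeeping and the normal hot-plug path
elided):

    static int cpupool_cpu_add(unsigned int cpu)
    {
        int ret = 0;

        if ( system_state == SYS_STATE_suspend ||
             system_state == SYS_STATE_resume )
        {
            /* Put the CPU back into the pool it was removed from. */
        }
        else
        {
            /* Plain hot-plug (or DOWN_FAILED outside of suspend):
             * the CPU becomes a free one / goes back to cpupool0. */
        }

        return ret;
    }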

Coming back to your DOWN_FAILED consideration: Why do you think
this can't happen during suspend? disable_nonboot_cpus() uses plain
cpu_down() after all.
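
To spell out the path I have in mind (simplified call chain):

    disable_nonboot_cpus()       /* system_state == SYS_STATE_suspend */
        -> cpu_down(cpu)         /* fails for one of the CPUs */
            -> CPU_DOWN_FAILED notification
                -> cpu_callback() in cpupool.c
                    -> cpupool_cpu_add(cpu)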

Jan


