Re: [Xen-devel] [PATCH 1/8] cpupools: fix state when downing a CPU failed


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Mon, 16 Jul 2018 16:21:32 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>
  • Delivery-date: Mon, 16 Jul 2018 14:21:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 16/07/18 15:01, Jan Beulich wrote:
>>>> On 16.07.18 at 14:47, <jgross@xxxxxxxx> wrote:
>> On 16/07/18 14:19, Jan Beulich wrote:
>>>>>> On 16.07.18 at 13:47, <jgross@xxxxxxxx> wrote:
>>>> On 16/07/18 11:17, Jan Beulich wrote:
>>>>>>>> On 13.07.18 at 11:02, <jgross@xxxxxxxx> wrote:
>>>>>> On 11/07/18 14:04, Jan Beulich wrote:
>>>>>>> While I've run into the issue with further patches in place which no
>>>>>>> longer guarantee the per-CPU area to start out as all zeros, the
>>>>>>> CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
>>>>>>> the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
>>>>>>> of schedule_cpu_switch() will trigger the "c != old_pool" assertion
>>>>>>> there.
>>>>>>>
>>>>>>> Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
>>>>>>> should not happen before cpu_disable_scheduler()). Clearing it in
>>>>>>> CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
>>>>>>> piece of code twice. Since the field's value shouldn't matter while the
>>>>>>> CPU is offline, simply clear it in CPU_ONLINE and CPU_DOWN_FAILED, but
>>>>>>> only for other than the suspend/resume case (which gets specially
>>>>>>> handled in cpupool_cpu_remove()).
>>>>>>>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>>> ---
>>>>>>> TBD: I think this would better call schedule_cpu_switch(cpu, NULL) from
>>>>>>>      cpupool_cpu_remove(), but besides that - as per above - likely
>>>>>>>      being too early, that function has further prereqs to be met. It
>>>>>>>      also doesn't look as if cpupool_unassign_cpu_helper() could be used
>>>>>>>      there.
>>>>>>>
>>>>>>> --- a/xen/common/cpupool.c
>>>>>>> +++ b/xen/common/cpupool.c
>>>>>>> @@ -778,6 +778,8 @@ static int cpu_callback(
>>>>>>>      {
>>>>>>>      case CPU_DOWN_FAILED:
>>>>>>>      case CPU_ONLINE:
>>>>>>> +        if ( system_state <= SYS_STATE_active )
>>>>>>> +            per_cpu(cpupool, cpu) = NULL;
>>>>>>>          rc = cpupool_cpu_add(cpu);
>>>>>>
>>>>>> Wouldn't it make more sense to clear the field in cpupool_cpu_add()
>>>>>> which already is testing system_state?
>>>>>
>>>>> Hmm, this may be a matter of taste: I consider the change done here
>>>>> a prereq to calling the function in the first place. As said in the
>>>>> description, I actually think this should come earlier, and it's just that
>>>>> I can't see how to cleanly do so.
>>>
>>> You didn't comment on this one at all, yet it matters for what a v2
>>> is supposed to look like.
>>
>> My comment was meant to address this question, too. cpupool_cpu_add()
>> explicitly handles the special case of resuming, where the old
>> assignment of the cpu to a cpupool is kept. So I believe setting
>>   per_cpu(cpupool, cpu) = NULL
>> only in the else clause of cpupool_cpu_add() is better.
> 
> Well, okay then. You're the maintainer.
> 
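For reference, a minimal sketch of the placement being settled on here,
assuming the rough shape of cpupool_cpu_add() in xen/common/cpupool.c at
the time; locking is elided, and restore_old_cpupool_assignment() is a
hypothetical stand-in for the real resume handling:

    /* Heavily simplified sketch -- not the actual patch. */
    static int cpupool_cpu_add(unsigned int cpu)
    {
        int ret;

        if ( system_state == SYS_STATE_resume )
        {
            /*
             * Suspend/resume: the CPU goes back into the cpupool it was
             * in before, so the old per-CPU pointer is deliberately kept.
             */
            ret = restore_old_cpupool_assignment(cpu);
        }
        else
        {
            /*
             * Fresh online or DOWN_FAILED: the old value is stale, so
             * clear it here before the CPU is (indirectly) handed to
             * schedule_cpu_switch(), whose "c != old_pool" assertion
             * would otherwise trip.
             */
            per_cpu(cpupool, cpu) = NULL;
            ret = cpupool_assign_cpu_locked(cpupool0, cpu);
        }

        return ret;
    }
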
>>>>>> Modifying the condition in cpupool_cpu_add() to
>>>>>>
>>>>>>   if ( system_state <= SYS_STATE_active )
>>>>>>
>>>>>> at the same time would have the benefit to catch problems in case
>>>>>> suspending cpus is failing during SYS_STATE_suspend (I'd expect
>>>>>> triggering the first ASSERT in schedule_cpu_switch() in this case).
>>>>>
>>>>> You mean the if() there, not the else? If so, how would the "else"
>>>>> body ever be reached? IOW, if anything, I could only see the
>>>>> "else" becoming "else if ( system_state <= SYS_STATE_active )".
>>>>
>>>> Bad wording on my side.
>>>>
>>>> I should have written "the condition in cpupool_cpu_add() should match
>>>> if ( system_state <= SYS_STATE_active )."
>>>>
>>>> So: "if ( system_state > SYS_STATE_active )", as the test is for the
>>>> other case.
>>>
>>> I'd recommend against this, as someone adding a new SYS_STATE_*
>>> past suspend/resume would quite likely miss this one. The strong
>>> ordering of states imo should only be relied on for active and lower
>>> states. But yes, I could see the if() there becoming suspend || resume
>>> to address the problem you describe.
>>
>> Yes, this would seem to be a better choice here.
>>
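A standalone sketch of the point about state comparisons. The enum
follows the system_state declaration in xen/include/xen/kernel.h of that
era; the exact set of states, and the two helper names, are illustrative
only:

    #include <stdbool.h>

    enum system_state {
        SYS_STATE_early_boot,
        SYS_STATE_boot,
        SYS_STATE_smp_boot,
        SYS_STATE_active,
        SYS_STATE_suspend,
        SYS_STATE_resume,
        /* A state appended here later would silently satisfy the
         * ordered test below, which is the objection. */
    };

    /* Risky form: matches any state past "active", present or future. */
    static bool keeps_old_assignment_ordered(enum system_state s)
    {
        return s > SYS_STATE_active;
    }

    /* Agreed form: name the two states explicitly; ordering comparisons
     * stay reserved for SYS_STATE_active and earlier. */
    static bool keeps_old_assignment_explicit(enum system_state s)
    {
        return s == SYS_STATE_suspend || s == SYS_STATE_resume;
    }
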
>>> Coming back to your DOWN_FAILED consideration: Why do you think
>>> this can't happen during suspend? disable_nonboot_cpus()
>>> uses plain cpu_down(), after all.
>>
>> Right.
>>
>> DOWN_FAILED is used only once, namely in cpu_down() after the
>> CPU_DOWN_PREPARE step returned an error. And CPU_DOWN_PREPARE is only
>> used by the cpufreq driver, where it never returns an error, and by
>> cpupools themselves, which don't matter here, as only some other
>> component failing at the CPU_DOWN_PREPARE step would lead to
>> cpupool's DOWN_FAILED handling being invoked.
> 
> What about the stop_machine_run() failure case?

Oh. No idea how I missed that.

So maybe changing the condition in cpupool_cpu_add() should be split out
into a patch of its own, so that it can be backported?
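
For reference, the control flow Jan is pointing at, as a heavily
simplified sketch of cpu_down() in xen/common/cpu.c; locking and the
NOTIFY_* return-code translation are elided, and notify() is a
hypothetical shorthand for notifier_call_chain() on cpu_chain:

    /* Heavily simplified sketch of cpu_down() -- not the actual code. */
    int cpu_down(unsigned int cpu)
    {
        int err = notify(CPU_DOWN_PREPARE, cpu);

        if ( err )
            goto fail;

        /* This can fail as well... */
        err = stop_machine_run(take_cpu_down, NULL, cpu);
        if ( err < 0 )
            goto fail;          /* ...and then also ends up below. */

        notify(CPU_DEAD, cpu);
        return 0;

     fail:
        /*
         * Both failure paths arrive here, so CPU_DOWN_FAILED can indeed
         * be seen during suspend: disable_nonboot_cpus() uses plain
         * cpu_down(), and stop_machine_run() failing there is enough.
         */
        notify(CPU_DOWN_FAILED, cpu);
        return err;
    }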


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
