
Re: [Xen-devel] [PATCH RFC 01/49] xen/sched: call cpu_disable_scheduler() via cpu notifier


  • To: Julien Grall <julien.grall@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Mon, 1 Apr 2019 18:00:03 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 01 Apr 2019 16:00:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 01/04/2019 17:15, Julien Grall wrote:
> Hi,
> 
> On 4/1/19 3:23 PM, Juergen Gross wrote:
>> On 01/04/2019 16:01, Julien Grall wrote:
>>> Hi,
>>>
>>> On 4/1/19 2:33 PM, Juergen Gross wrote:
>>>> On 01/04/2019 15:21, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> On 4/1/19 11:37 AM, Juergen Gross wrote:
>>>>>> On 01/04/2019 12:29, Julien Grall wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 4/1/19 10:40 AM, Juergen Gross wrote:
>>>>>>>> On 01/04/2019 11:21, Julien Grall wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> On 3/29/19 3:08 PM, Juergen Gross wrote:
>>>>>>>>>> cpu_disable_scheduler() is being called from __cpu_disable()
>>>>>>>>>> today. There is no need to execute it on the cpu just being
>>>>>>>>>> disabled, so use the CPU_DEAD case of the cpu notifier chain.
>>>>>>>>>> Moving the call out of stop_machine() context is fine, as we
>>>>>>>>>> just need to hold the domain RCU lock and need the scheduler
>>>>>>>>>> percpu data to be still allocated.
>>>>>>>>>>
>>>>>>>>>> Add another hook for CPU_DOWN_PREPARE to bail out early in case
>>>>>>>>>> cpu_disable_scheduler() would fail. This will avoid crashes in
>>>>>>>>>> rare cases of cpu hotplug or suspend.
>>>>>>>>>>
>>>>>>>>>> While at it, remove a superfluous smp_mb() in the ARM
>>>>>>>>>> __cpu_disable() incarnation.
>>>>>>>>>
>>>>>>>>> It is not obvious why the smp_mb() is superfluous. Can you
>>>>>>>>> please provide more details on why it is not necessary?
>>>>>>>>
>>>>>>>> cpumask_clear_cpu() should already have the needed semantics, no?
>>>>>>>> It is based on clear_bit() which is defined to be atomic.
>>>>>>>
>>>>>>> Atomicity does not mean the store/load cannot be re-ordered by
>>>>>>> the CPU. You would need a barrier to prevent re-ordering.
>>>>>>>
>>>>>>> cpumask_clear_cpu() and clear_bit() do not contain any barrier,
>>>>>>> so stores/loads can be re-ordered.
>>>>>>
>>>>>> Uh, couldn't this lead to problems, e.g. in vcpu_block()? The comment
>>>>>> there suggests the sequence of setting the blocked bit and doing the
>>>>>> test is important for avoiding a race...
>>>>>
>>>>> Hmmm... looking at the other usages (such as in do_poll), on
>>>>> non-x86 platforms there is a smp_mb() between set_bit(...) and
>>>>> checking the event, with a similar comment above.
>>>>>
>>>>> I don't know the scheduler code well enough to say why the barrier
>>>>> is needed. But for consistency, it seems to me the smp_mb() would
>>>>> be required in vcpu_block() as well.
>>>>>
>>>>> Also, it is quite interesting that the barrier is not present on
>>>>> x86. If I understand correctly the comment on top of
>>>>> set_bit/clear_bit, the operations could as well be re-ordered. So
>>>>> we seem to be relying on the underlying implementation of
>>>>> set_bit/clear_bit.
>>>>
>>>> On x86 reads and writes can't be reordered with locked operations (SDM
>>>> Vol 3 8.2.2). So the barrier is really not needed AFAIU.
>>>>
>>>> include/asm-x86/bitops.h:
>>>>
>>>>    * clear_bit() is atomic and may not be reordered.
>>>
>>> I interpreted the "may not" as: you should not rely on the
>>> re-ordering not happening.
>>>
>>> In places where re-ordering must not happen (e.g. test_and_set_bit)
>>> we use the wording "cannot".
>>
>> The SDM is very clear here:
>>
>> "Reads or writes cannot be reordered with I/O instructions, locked
>>   instructions, or serializing instructions."
> 
> This is what the x86 specification says, not the intended semantics of
> the helper. Helpers may have more relaxed semantics to accommodate
> other architectures.
> 
> I believe this is the case here: the semantics are more relaxed than
> the implementation, so an architecture with a more relaxed memory
> ordering does not have to impose a barrier.
> 
>>
>>>>> Wouldn't it make sense to try to unify the semantics? Maybe by
>>>>> introducing a new helper?
>>>>
>>>> Or by adding the barrier on ARM for the atomic operations?
>>>
>>> On what basis? Why should we impact every user just to fix a bug in
>>> the scheduler?
>>
>> I'm assuming there are more places like this either in common code or
>> code copied verbatim from arch/x86 to arch/arm with that problem.
> 
> Adding it in the *_set helpers is just the poor man's fix. If we do
> that, it is going to stick around for a long time and impact
> performance.
> 
> Instead we should fix the scheduler code (and hopefully only that)
> where the ordering is actually necessary.

I believe that should be a patch on its own. Are you doing that?

>> So I take it you'd rather let me add that smp_mb() in __cpu_disable()
>> again.
> 
> Removing/adding barriers should be accompanied by a proper
> justification in the commit message. Additionally, new barriers should
> have a comment explaining what they are for.
> 
> In this case, I don't know what the correct answer is. It feels to me
> we should keep it until we have a better understanding of this code. But

Okay.

> then it raises the question of whether a barrier would also be
> necessary after calling cpu_disable_scheduler().

That one is quite easy: all paths of cpu_disable_scheduler() are doing
an unlock operation at the end, so the barrier is already there.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

