
Re: [Xen-devel] [PATCH v2 5/6] x86/smp: use a dedicated scratch cpumask in send_IPI_mask


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 18 Feb 2020 13:29:56 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Wei Liu <wl@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
  • Delivery-date: Tue, 18 Feb 2020 13:30:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18/02/2020 11:46, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 11:35:37AM +0000, Andrew Cooper wrote:
>>
>> On 18/02/2020 11:22, Roger Pau Monné wrote:
>>> On Tue, Feb 18, 2020 at 11:21:12AM +0000, Andrew Cooper wrote:
>>>> On 18/02/2020 11:10, Roger Pau Monné wrote:
>>>>> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>>>>>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>>>>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>>>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>>>>>  {
>>>>>>>      bool cpus_locked = false;
>>>>>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>>>>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>>>>>> +    unsigned long flags;
>>>>>>> +
>>>>>>> +    if ( in_mc() || in_nmi() )
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
>>>>>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>>>>>> +         * shorthand.
>>>>>>> +         */
>>>>>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>>>>>> +        return;
>>>>>> The set of things you can safely do in an NMI/MCE handler is small, and
>>>>>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>>>>>> is safe to send an IPI because there is no risk of clobbering ICR2
>>>>>> behind your outer context's back).
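>>>>>>
>>>>>> As a minimal sketch of the hazard (simplified, with hypothetical
>>>>>> accessor names rather than Xen's actual helpers):
>>>>>>
>>>>>>     #include <stdint.h>
>>>>>>
>>>>>>     /* Hypothetical stand-ins for the real MMIO/MSR write helpers. */
>>>>>>     extern void apic_mem_write(unsigned int reg, uint32_t val);
>>>>>>     extern void apic_msr_write(uint32_t msr, uint64_t val);
>>>>>>
>>>>>>     #define APIC_ICR       0x300    /* command register */
>>>>>>     #define APIC_ICR2      0x310    /* destination field */
>>>>>>     #define APIC_DM_FIXED  0x00000u
>>>>>>     #define MSR_X2APIC_ICR 0x830
>>>>>>
>>>>>>     /* xAPIC: two separate MMIO writes, destination first. */
>>>>>>     static void xapic_send_ipi(uint32_t dest, uint8_t vector)
>>>>>>     {
>>>>>>         apic_mem_write(APIC_ICR2, dest << 24);
>>>>>>         /*
>>>>>>          * An NMI/#MC taken here which itself sends an IPI rewrites
>>>>>>          * ICR2 behind our back, so the interrupted send targets the
>>>>>>          * wrong CPU(s) when it resumes.
>>>>>>          */
>>>>>>         apic_mem_write(APIC_ICR, APIC_DM_FIXED | vector);
>>>>>>     }
>>>>>>
>>>>>>     /*
>>>>>>      * x2apic: one MSR write carries both destination and command,
>>>>>>      * so there is no window for an inner context to clobber.
>>>>>>      */
>>>>>>     static void x2apic_send_ipi(uint32_t dest, uint8_t vector)
>>>>>>     {
>>>>>>         apic_msr_write(MSR_X2APIC_ICR,
>>>>>>                        ((uint64_t)dest << 32) | APIC_DM_FIXED | vector);
>>>>>>     }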
>>>>>>
>>>>>> However, if we escalate from NMI/MCE context into crash context, then
>>>>>> anything goes.  In reality, we only ever send NMIs from the crash path,
>>>>>> and that is not permitted to use a shorthand, making this code dead.
>>>>> This was requested by Jan, as a safety measure
>>>> That may be, but it doesn't mean it is correct.  If execution ever
>>>> enters this function in NMI/MCE context, there is a real,
>>>> state-corrupting bug, higher up the call stack.
>>> Ack, then I guess we should just BUG() here if ever called from #NMI
>>> or #MC context?
>> Well.  There is a reason I suggested removing it, and not using BUG().
>>
>> If NMI/MCE context escalates to crash context, we do need to send NMIs. 
>> It won't be this function specifically, but it will be part of the
>> general IPI infrastructure.
>>
>> We definitely don't want to get into the game of trying to clobber each
>> of the state variables, so the only thing throwing BUG()'s around in
>> this area will do is make the crash path more fragile.
> I see, panicking in such a context will just clobber the previous crash
> that happened in NMI/MC context.
>
> So you would rather keep the current version, falling back to the
> non-shorthand IPI sending routine instead of panicking?
>
> What about:
>
> if ( in_mc() || in_nmi() )
> {
>     /*
>      * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
>      * because we have no way to avoid reentry, so do not use the APIC
>      * shorthand. The only IPI that should be sent from such context
>      * is an NMI to shut down the system in case of a crash.
>      */
>     if ( vector == APIC_DM_NMI )
>         alternative_vcall(genapic.send_IPI_mask, mask, vector);
>     else
>         BUG();
>
>     return;
> }

How do you intend to test it?

It might be correct now[*], but it doesn't protect against someone
modifying the code, violating the constraint, and the violation going
unnoticed, because the above codepath will only be entered in exceptional
circumstances.  Sod's law says that code inside that block is first going
to be tested in a customer environment.

ASSERT()s would be less bad, but any technical countermeasures, however
well-intentioned, get in the way of the crash path functioning when it
matters most.
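
For reference, the least-bad variant would be something like this untested
sketch (reusing the in_mc()/in_nmi() predicates from the patch): the check
bites in debug builds only, so a release build's crash path is unaffected.

    void send_IPI_mask(const cpumask_t *mask, int vector)
    {
        /*
         * Sending an IPI from #MC/#NMI context indicates a bug further up
         * the call stack.  Catch it in debug builds, but never die here in
         * a release build, as this path is used while crashing.
         */
        ASSERT(!in_mc() && !in_nmi());

        /* ... existing shorthand / scratch cpumask logic unchanged ... */
    }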

~Andrew

[*] There is a long-outstanding bug in machine_restart(), which blindly
enables interrupts and IPIs CPU 0.  You can get here in the middle of a
crash, and this BUG() will trigger in at least one case I've seen before.

Fixing this isn't a 5-minute job, and it hasn't bubbled sufficiently up
my TODO list yet.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

