[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v3 2/2] xen: merge temporary vcpu pinning scenarios


  • To: Juergen Gross <jgross@xxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 26 Jul 2019 12:50:38 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, WeiLiu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 26 Jul 2019 11:50:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26/07/2019 12:42, Juergen Gross wrote:
> On 26.07.19 13:35, Jan Beulich wrote:
>> On 26.07.2019 11:46, Andrew Cooper wrote:
>>> On 24/07/2019 12:26, Juergen Gross wrote:
>>>> @@ -182,30 +178,24 @@ static void __prepare_to_wait(struct
>>>> waitqueue_vcpu *wqv)
>>>>    static void __finish_wait(struct waitqueue_vcpu *wqv)
>>>>    {
>>>>        wqv->esp = NULL;
>>>> -    (void)vcpu_set_hard_affinity(current, &wqv->saved_affinity);
>>>> +    vcpu_temporary_affinity(current, NR_CPUS, VCPU_AFFINITY_WAIT);
>>>>    }
>>>>       void check_wakeup_from_wait(void)
>>>>    {
>>>> -    struct waitqueue_vcpu *wqv = current->waitqueue_vcpu;
>>>> +    struct vcpu *curr = current;
>>>> +    struct waitqueue_vcpu *wqv = curr->waitqueue_vcpu;
>>>>           ASSERT(list_empty(&wqv->list));
>>>>           if ( likely(wqv->esp == NULL) )
>>>>            return;
>>>>    -    /* Check if we woke up on the wrong CPU. */
>>>> -    if ( unlikely(smp_processor_id() != wqv->wakeup_cpu) )
>>>> +    /* Check if we are still pinned. */
>>>> +    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
>>>>        {
>>>> -        /* Re-set VCPU affinity and re-enter the scheduler. */
>>>> -        struct vcpu *curr = current;
>>>> -        cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
>>>> -        if ( vcpu_set_hard_affinity(curr,
>>>> cpumask_of(wqv->wakeup_cpu)) )
>>>> -        {
>>>> -            gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
>>>> -            domain_crash(current->domain);
>>>> -        }
>>>> -        wait(); /* takes us back into the scheduler */
>>>> +        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
>>>> +        domain_crash(curr->domain);
>>>>        }
>>>
>>> I'm sorry to retract my R-by after the fact, but I've only just noticed
>>> (while rebasing some of my pending work over this) that it is buggy.
>>>
>>> The reason wait() was called is because it is not safe to leave that
>>> if() clause.
>>>
>>> With this change in place, we'll arrange for the VM to be crashed, then
>>> longjump back into the stack frame of the waiting vCPU, on the wrong
>>> CPU.  Any caller with smp_processor_id() or thread-local variables
>>> cached by pointer on the stack will then cause memory corruption.
>>>
>>> It's not immediately obvious how to fix this, but bear in mind that as
>>> soon as the vm-event interface is done, I plan to delete this whole
>>> waitqueue infrastructure anyway.
>>
>> In which case - should we revert the commit until this is resolved?
>
> In my opinion it is not that urgent. I don't think any of our OSStests
> will ever be able to trigger this issue, as AFAIK no test is using the
> wait_event() interface nor do they test suspend/resume. And both need
> to be true (at the same time!) plus a cpu needs to fail coming up when
> resuming again.

Yeah - I don't think reverting it is necessary, but I will flag
"resolving this somehow" as a 4.12 blocker.

The HVI scale tests trigger this path.  Guess how I discovered that:
Introspection + Livepatching = boom.

I am leaning on the side of panic().  I agree that if the APIs are used
correctly, it can't occur.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
