
Re: [Xen-devel] [PATCH v3 2/2] xen: merge temporary vcpu pinning scenarios


  • To: Juergen Gross <jgross@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 26 Jul 2019 11:13:54 +0100
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 26 Jul 2019 10:14:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26/07/2019 10:53, Juergen Gross wrote:
> On 26.07.19 11:46, Andrew Cooper wrote:
>> On 24/07/2019 12:26, Juergen Gross wrote:
>>> diff --git a/xen/common/wait.c b/xen/common/wait.c
>>> index 4f830a14e8..3fc5f68611 100644
>>> --- a/xen/common/wait.c
>>> +++ b/xen/common/wait.c
>>> @@ -34,8 +34,6 @@ struct waitqueue_vcpu {
>>>        */
>>>       void *esp;
>>>       char *stack;
>>> -    cpumask_t saved_affinity;
>>> -    unsigned int wakeup_cpu;
>>>   #endif
>>>   };
>>>   @@ -131,12 +129,10 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
>>>       ASSERT(wqv->esp == 0);
>>>         /* Save current VCPU affinity; force wakeup on *this* CPU only. */
>>> -    wqv->wakeup_cpu = smp_processor_id();
>>> -    cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
>>> -    if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
>>> +    if ( vcpu_temporary_affinity(curr, smp_processor_id(), VCPU_AFFINITY_WAIT) )
>>>       {
>>>           gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
>>> -        domain_crash(current->domain);
>>> +        domain_crash(curr->domain);
>>>             for ( ; ; )
>>>               do_softirq();
>>> @@ -170,7 +166,7 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
>>>       if ( unlikely(wqv->esp == 0) )
>>>       {
>>>           gdprintk(XENLOG_ERR, "Stack too large in %s\n", __func__);
>>> -        domain_crash(current->domain);
>>> +        domain_crash(curr->domain);
>>>             for ( ; ; )
>>>               do_softirq();
>>> @@ -182,30 +178,24 @@ static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
>>>   static void __finish_wait(struct waitqueue_vcpu *wqv)
>>>   {
>>>       wqv->esp = NULL;
>>> -    (void)vcpu_set_hard_affinity(current, &wqv->saved_affinity);
>>> +    vcpu_temporary_affinity(current, NR_CPUS, VCPU_AFFINITY_WAIT);
>>>   }
>>>     void check_wakeup_from_wait(void)
>>>   {
>>> -    struct waitqueue_vcpu *wqv = current->waitqueue_vcpu;
>>> +    struct vcpu *curr = current;
>>> +    struct waitqueue_vcpu *wqv = curr->waitqueue_vcpu;
>>>         ASSERT(list_empty(&wqv->list));
>>>         if ( likely(wqv->esp == NULL) )
>>>           return;
>>>   -    /* Check if we woke up on the wrong CPU. */
>>> -    if ( unlikely(smp_processor_id() != wqv->wakeup_cpu) )
>>> +    /* Check if we are still pinned. */
>>> +    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
>>>       {
>>> -        /* Re-set VCPU affinity and re-enter the scheduler. */
>>> -        struct vcpu *curr = current;
>>> -        cpumask_copy(&wqv->saved_affinity, curr->cpu_hard_affinity);
>>> -        if ( vcpu_set_hard_affinity(curr, cpumask_of(wqv->wakeup_cpu)) )
>>> -        {
>>> -            gdprintk(XENLOG_ERR, "Unable to set vcpu affinity\n");
>>> -            domain_crash(current->domain);
>>> -        }
>>> -        wait(); /* takes us back into the scheduler */
>>> +        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
>>> +        domain_crash(curr->domain);
>>>       }
>>
>> I'm sorry to retract my R-by after the fact, but I've only just noticed
>> (while rebasing some of my pending work over this) that it is buggy.
>>
>> The reason wait() was called is that it is not safe to leave that
>> if() clause.
>>
>> With this change in place, we'll arrange for the VM to be crashed, then
>> longjmp back into the stack of the waiting vCPU, on the wrong CPU.  Any
>> caller which has cached smp_processor_id() or a pointer to thread-local
>> variables on the stack will then cause memory corruption.
>>
>> It's not immediately obvious how to fix this, but bear in mind that as
>> soon as the vm-event interface is done, I plan to delete this whole
>> waitqueue infrastructure anyway.
>
> Shouldn't just calling wait() after domain_crash() be fine then?
>
> That's what would have happened in the original error case, too.

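(For concreteness, I read that suggestion as roughly the following - a
hypothetical sketch against the new check_wakeup_from_wait() above, not
something that has been tested:)

    /* Check if we are still pinned. */
    if ( unlikely(!(curr->affinity_broken & VCPU_AFFINITY_WAIT)) )
    {
        gdprintk(XENLOG_ERR, "vcpu affinity lost\n");
        domain_crash(curr->domain);
        wait();  /* suggested addition: go back through the scheduler
                    rather than unwinding here */
    }
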
No - I don't think so.  That was to try to get back into a position
where the scheduler rescheduled this vcpu on the correct cpu, so it
could safely longjmp back into context.

With the domain crash here, nothing will happen[1] until we do
successfully longjmp() back into context, because we've got a stack
frame which needs unwinding before it is safe to start cleaning the
domain up.
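
To make the hazard quoted above concrete, here is a hypothetical
waitqueue caller (struct foo, foo_state and example_waiter are made-up
names for illustration, not code in the tree):

    /* Hypothetical example - the names below are made up. */
    struct foo { unsigned long counter; };
    static DEFINE_PER_CPU(struct foo, foo_state);

    static void example_waiter(struct waitqueue_head *wq, bool *done)
    {
        /* Pointer to this CPU's data, cached on the stack. */
        struct foo *f = &this_cpu(foo_state);

        wait_event(*wq, *done);  /* may longjmp back on a different CPU */

        /*
         * If we resumed on another CPU, 'f' still points at the old
         * CPU's foo_state, so this write stomps on state that CPU owns.
         */
        f->counter++;
    }

Crashing the domain doesn't stop that final write, because we still have
to longjmp back into the frame before the domain can be cleaned up.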

~Andrew

[1] If something other than nothing happens, then we've got a
refcounting issue...

