
Re: [Xen-devel] Altp2m use with PML can deadlock Xen


  • To: Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 10 May 2019 16:21:05 +0100
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 10 May 2019 15:21:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 10/05/2019 16:09, Tamas K Lengyel wrote:
> On Fri, May 10, 2019 at 8:59 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> wrote:
>> On 10/05/2019 15:53, Razvan Cojocaru wrote:
>>> On 5/10/19 5:42 PM, Tamas K Lengyel wrote:
>>>> On Thu, May 9, 2019 at 10:19 AM Andrew Cooper
>>>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> On 09/05/2019 14:38, Tamas K Lengyel wrote:
>>>>>> Hi all,
>>>>>> I'm investigating an issue with altp2m that can easily be reproduced
>>>>>> and leads to a hypervisor deadlock when PML is available in hardware.
>>>>>> I haven't been able to trace down where the actual deadlock occurs.
>>>>>>
>>>>>> The problem seems to stem from hvm/vmx/vmcs.c:vmx_vcpu_flush_pml_buffer,
>>>>>> which calls p2m_change_type_one on all gfns that were recorded in the PML
>>>>>> buffer. The problem occurs when the PML-buffer-full vmexit happens
>>>>>> while the active p2m is an altp2m. Switching p2m_change_type_one to
>>>>>> work with the altp2m instead of the hostp2m, however, results in EPT
>>>>>> misconfiguration crashes.
>>>>>>
>>>>>> Adding to the issue is that it seems to only occur when the altp2m has
>>>>>> remapped GFNs. Since PML records entries based on GFN, this leads me to
>>>>>> question whether it is safe at all to use PML when altp2m is used with
>>>>>> GFN remapping. However, AFAICT the GFNs in the PML buffer are not the
>>>>>> remapped GFNs, and my understanding is that it should be safe as long
>>>>>> as the GFNs being tracked by PML are never the remapped GFNs.
>>>>>>
>>>>>> Booting Xen with ept=pml=0 resolves the issue.
>>>>>>
>>>>>> If anyone has any insight into what might be happening, please let
>>>>>> me know.
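For context, the flush path Tamas refers to looks roughly like the sketch below. This is a simplified reconstruction rather than a verbatim copy of xen/arch/x86/hvm/vmx/vmcs.c (the PML index handling and VMCS accesses are omitted); the relevant point is that p2m_change_type_one() acts on the host p2m, whichever altp2m view the vcpu was running on when the buffer filled.

/* Simplified sketch of vmx_vcpu_flush_pml_buffer() -- not verbatim Xen code. */
static void flush_pml_buffer_sketch(struct vcpu *v)
{
    struct domain *d = v->domain;
    uint64_t *pml_buf = __map_domain_page(v->arch.hvm.vmx.pml_pg);
    unsigned long idx;

    for ( idx = 0; idx < NR_PML_ENTRIES; idx++ )
    {
        /* Each PML entry holds the guest-physical address of a dirtied page. */
        unsigned long gfn = pml_buf[idx] >> PAGE_SHIFT;

        /*
         * Operates on p2m_get_hostp2m(d), even if the vcpu that filled the
         * buffer was running on an altp2m view at the time.
         */
        p2m_change_type_one(d, gfn, p2m_ram_logdirty, p2m_ram_rw);
        paging_mark_gfn_dirty(d, gfn);
    }

    unmap_domain_page(pml_buf);
}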
>>>>>
>>>>> I could have sworn that George spotted a problem here and fixed it.  I
>>>>> shouldn't be surprised if we have more.
>>>>>
>>>>> The problem that PML introduced (and this is mostly my fault, as I
>>>>> suggested the buggy solution) is that the vmexit handler from one vcpu
>>>>> pauses others to drain the PML queue into the dirty bitmap.  Overall I
>>>>> wasn't happy with the design and I've got some ideas to improve it, but
>>>>> within the scope of how altp2m was engineered, I proposed
>>>>> domain_pause_except_self().
>>>>>
>>>>> As it turns out, that is vulnerable to deadlocks when you get two vcpus
>>>>> trying to pause each other and waiting for each other to become
>>>>> de-scheduled.
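A minimal illustration of the deadlock being described, assuming an unserialised pause-everyone-else loop; the helper name and body below are a sketch, not the actual implementation:

/* Sketch only: pause every other vcpu of the domain, with no serialisation. */
static void pause_others_sketch(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
        if ( v != current )
            vcpu_pause(v);   /* spins until v has been de-scheduled */
}

/*
 * If two vcpus of the same domain enter this path at the same time, each
 * raises the other's pause count and then spins waiting for the other to
 * stop running.  Neither can de-schedule while it is spinning, so neither
 * call to vcpu_pause() ever returns.
 */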
>>>> Makes sense.
>>>>
>>>>> I see this has been reused by the altp2m code, but it *should* be safe
>>>>> from deadlocks now that it takes the hypercall_deadlock_mutex.
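The serialised shape being referred to looks roughly like the following; this is sketched from memory of domain_pause_except_self() and may not match the tree exactly:

int domain_pause_except_self(struct domain *d)
{
    struct vcpu *v, *curr = current;

    if ( curr->domain == d )
    {
        /* Only one vcpu of the domain may run the pause loop at a time. */
        if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
            return -ERESTART;   /* back off; retry via hypercall continuation */

        for_each_vcpu ( d, v )
            if ( v != curr )
                vcpu_pause(v);

        spin_unlock(&d->hypercall_deadlock_mutex);
    }
    else
        domain_pause(d);

    return 0;
}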
>>>> Is that already in staging or your x86-next branch? I would like to
>>>> verify whether the problem is still present with that change. I
>>>> tested with the Xen 4.12 release and that definitely still deadlocks.
>>> I don't know if Andrew is talking about this patch (probably not, but
>>> it looks at least related):
>>>
>>> http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=24d5282527f4647907b3572820b5335c15cd0356;hp=29d28b29190ba09d53ae7e475108def84e16e363
>>>
>> I was referring to 29d28b2919 which is also in 4.12 as it turns out.
>> That said, 24d5282527 might in practice be the cause of the deadlock, so
>> I'd first experiment with taking that fix out.
>>
>> I know for certain that it won't be tested with PML enabled, because the
>> use of PML is incompatible with write-protecting guest pagetables.
>>
> Sounds like the safe bet is to just have PML disabled when
> introspection is used. I would say it would be even better if the use
> of PML could be controlled on a per-guest basis instead of the current
> global on/off switch. That way it could be disabled only for the
> introspected domains.
>
> I'll do some more experimentation when I get some free time, but two
> observations that speak against the vCPUs trying to pause each other
> being the culprit are:
> - the deadlock doesn't happen with xen-access' altp2m use; it only
> happens when there are remapped gfns in the altp2m views
> - I've added a domain_pause/unpause to the PML flusher before it
> enters the flush loop (see the sketch below), but I still got a deadlock
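For illustration only, the experiment described above might look like the following if applied to the domain-wide flusher; the message does not say exactly where the pause was added, so the placement here is an assumption:

/* Purely illustrative: pausing the domain around the PML drain. */
void vmx_domain_flush_pml_buffers(struct domain *d)
{
    struct vcpu *v;

    domain_pause(d);          /* experimental addition described above */

    for_each_vcpu ( d, v )
        vmx_vcpu_flush_pml_buffer(v);

    domain_unpause(d);
}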

Do you have a minimal repro of the deadlock you could share?

Even if it is a combined PML+altp2m problem, we should fix the issue,
because there are VMI use cases which don't care about write-protecting
guest pagetables, and we don't want to prevent those cases from using PML.

Thanks,

~Andrew
