
Re: [Xen-devel] [PATCH V5] x86/altp2m: Fix crash with INVALID_ALTP2M EPTP index



On 08/02/2018 09:32 AM, Tian, Kevin wrote:
>> From: Razvan Cojocaru [mailto:rcojocaru@xxxxxxxxxxxxxxx]
>> Sent: Wednesday, August 1, 2018 5:02 PM
>>
>> On 07/23/2018 01:29 PM, George Dunlap wrote:
>>> On 07/20/2018 07:02 PM, Razvan Cojocaru wrote:
>>>> On 07/20/2018 08:18 PM, George Dunlap wrote:
>>>>> Furthermore, imagine the following scenario:
>>>>>
>>>>> * dom0 enables altp2m on domain A
>>>>> * dom0 switches altp2m to view 1 on domain A
>>>>> * dom0 enables #VE on domain A
>>>>> * domain A has a vmexit
>>>>>   -> At this point, EPTP_INDEX is 0, so the vmexit code will drop a
>>>>> reference on altp2m index 1 and increase the reference count on
>>>>> altp2m index 0
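
To make the refcount mechanics above concrete, here is a toy, standalone
sketch of what the vmexit-time sync effectively does when the cached
EPTP_INDEX is stale. This is deliberately not Xen code, just a compilable
model of the scenario; all the names below are mine:

#include <stdio.h>

#define MAX_ALTP2M 10

static unsigned int active_count[MAX_ALTP2M]; /* per-view vCPU refcounts */
static unsigned int eptp_index;     /* cached index, never written -> 0  */
static unsigned int p2midx = 1;     /* view dom0 actually switched to    */

/* What the exit path effectively does with #VE enabled: trust the
 * cached EPTP_INDEX and move the vCPU's reference to that view. */
static void vmexit_sync(void)
{
    if ( eptp_index != p2midx )
    {
        active_count[p2midx]--;     /* drops the reference on view 1 ... */
        p2midx = eptp_index;
        active_count[p2midx]++;     /* ... and takes one on view 0       */
    }
}

int main(void)
{
    active_count[1] = 1;    /* the vCPU legitimately holds view 1 */
    vmexit_sync();          /* EPTP_INDEX was left at 0           */
    printf("view0=%u view1=%u\n", active_count[0], active_count[1]);
    /* prints view0=1 view1=0 -- the counts no longer match reality */
    return 0;
}

The point is only that the reference moves to whatever view the stale
index happens to name.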
>>>>>
>>>>> My patch fixes the above issue, but your patch doesn't (AFAICT).  What
>>>>> altp2m_vcpu_destroy() did wasn't fundamentally buggy; it just
>>>>> highlighted the issue by doing the equivalent of putting 0xDEADBEEF in
>>>>> EPTP_INDEX; and what your patch did was to reverse that, by making
>>>>> EPTP_INDEX accidentally correct again the next time you ran your test.
>>>>>
>>>>> (Let me know if I'm wrong about that!)
>>>>
>>>> I do prefer your patch, but unless I'm missing something my patch is
>>>> doing the same thing (albeit in a slightly more convoluted manner): it's
>>>> calling altp2m_vcpu_update_p2m(v) _inside_
>>>> altp2m_vcpu_update_vmfunc_ve(v). That's all it does, other than
>>>> removing the (now redundant) explicit altp2m_vcpu_update_p2m(v) call
>>>> from altp2m_vcpu_destroy():
>>>>
>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg01898.html
>>>>
>>>> So for every hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v) (i.e. the
>>>> vmx.c function) that gets called, I also call altp2m_vcpu_update_p2m(v),
>>>> which properly sets EPTP_INDEX (just as your patch does by
>>>> __vmwrite()ing it directly in vmx_vcpu_update_vmfunc_ve(), but in a
>>>> roundabout manner).
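
For reference, the V5 approach boils down to roughly this shape. It's a
simplified sketch rather than the literal V5 diff, and the exact wrapper
and guards in hvm.h may differ:

static inline void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
{
    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);

    /*
     * Re-sync EPTP_INDEX afterwards, via a second vmcs enter/exit
     * critical section -- this is the "roundabout" part.
     */
    altp2m_vcpu_update_p2m(v);
}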
>>>>
>>>> Did I misunderstand something?
>>>
>>> No, you didn't -- sorry, I must have been quite tired at that point. :-)
>>>
>>> What I was actually thinking of was that in your patch, the update
>>> happens in a different vmcs_enter/exit critical section, whereas in mine
>>> it's in the same section.
>>>
>>> Looking through the code, it seems that the vmcs_enter/exit acts as a
>>> lock, by pausing and unpausing the vcpu if it's not the one we're
>>> currently running on (as well as actually grabbing a lock to prevent
>>> concurrent modification).  altp2m_vcpu_destroy() calls
>>> altp2m_vcpu_update_vmfunc_ve() with the vcpu paused, but the
>>> HVMOP_altp2m_vcpu_enable_notify hypercall doesn't seem to, which (I
>>> think) means there could still be a point between
>>> vmx_vcpu_update_vmfunc_ve() and vmx_vcpu_update_eptp() where a
>>> vcpu could run and get the wrong EPTP_INDEX.
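
If I'm reading your patch right, the crux is that the index is written
while the VMCS is still held, something along these lines (a condensed
sketch -- the real vmx_vcpu_update_vmfunc_ve() of course does the rest of
the VMFUNC/#VE setup in between):

void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
{
    vmx_vmcs_enter(v);      /* pauses/locks the vCPU if it isn't current */

    /* ... existing VMFUNC / #VE enablement ... */

    /*
     * Write EPTP_INDEX inside the same enter/exit section, so the vCPU
     * can never run with #VE enabled but a stale index.
     */
    __vmwrite(EPTP_INDEX, vcpu_altp2m(v).p2midx);

    vmx_vmcs_exit(v);
}

That would also explain why the destroy path happened to work: there the
vcpu is already paused around both updates.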
>>>
>>> It's possible my analysis is wrong there (I'm not intimately familiar
>>> with the code), but I think my patch is better anyway for a couple of
>>> reasons:
>>>
>>> * The logic to keep EPTP_INDEX in sync is explicit, and all in the same
>>> file.
>>>
>>> * It doesn't do unnecessary updates to other bits of state
>>>
>>> * If we ever have reason to call vmx_vcpu_update_vmfunc_ve() directly,
>>> we won't re-introduce this bug.  (Or to put it a different way: We don't
>>> have to remember that we can't call it directly.)
>>>
>>> Now we just need to get the VMX maintainers to sign off on it. :-)  Jun
>>> / Kevin, any thoughts?
>>
>> Ping for the VMX maintainers?
>>
> 
> yes, it makes sense to me.

Thanks!

I've just re-sent the patch as "[PATCH V6] x86/altp2m: Make sure
EPTP_INDEX is up-to-date when enabling #VE" at George's suggestion, so
that it also goes through the usual review process on the list.


Thanks,
Razvan
