
Re: [Xen-devel] [PATCH v2 1/2] arm/mem_access: adjust check_and_get_page to not rely on current



On Tue, Dec 13, 2016 at 5:50 AM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hello Tamas,
>
> On 12/12/16 23:47, Tamas K Lengyel wrote:
>>
>> On Mon, Dec 12, 2016 at 2:28 PM, Julien Grall <julien.grall@xxxxxxx>
>> wrote:
>>>
>>> On 12/12/2016 19:41, Tamas K Lengyel wrote:
>>>>
>>>> On Mon, Dec 12, 2016 at 12:11 PM, Julien Grall <julien.grall@xxxxxxx>
>>>> wrote:
>>>>>
>>>>> On 12/12/16 18:42, Tamas K Lengyel wrote:
>>>>>>
>>>>>> On Mon, Dec 12, 2016 at 4:46 AM, Julien Grall <julien.grall@xxxxxxx>
>>>>>> wrote:
>>>>
>> I see. So IMHO this is not a problem with mem_access in general, but a
>> problem with a specific application of mem_access on ARM (i.e.
>> restricting read access to guest pagetables). It's a pity that ARM
>> doesn't report the IPA automatically during a stage-2 violation.
>>>
>>>
>>>
>>> I don't understand what you are asking for here. If you are not able to
>>> access the stage-1 page tables, how would you be able to find the IPA?
>>
>>
>> I'm not asking for anything; I'm saying it's a pity that ARM
>> CPUs are limited in this regard compared to x86.
>
>
> Take a look at the ARM ARM before complaining. The IPA will be provided (see
> HPFAR) on a stage-2 data/prefetch abort fault.
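>
> Roughly, on a stage-2 abort the handler rebuilds the faulting IPA from
> HPFAR_EL2 plus the page offset taken from FAR_EL2. A minimal sketch of
> the idea (close to, but not exactly, what Xen does in its abort path):
>
>     static paddr_t get_fault_ipa(void)
>     {
>         /* HPFAR_EL2.FIPA (bits [39:4]) holds bits [47:12] of the IPA. */
>         register_t hpfar = READ_SYSREG(HPFAR_EL2);
>         paddr_t ipa = (paddr_t)(hpfar & HPFAR_MASK) << (12 - 4);
>
>         /* The page offset comes from the faulting VA in FAR_EL2. */
>         ipa |= READ_SYSREG(FAR_EL2) & ~PAGE_MASK;
>
>         return ipa;
>     }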
>
>>
>>>
>>> It works on x86 because, IIRC, you do a software page-table walk.
>>> Although I don't think you have any particular read/write access checking
>>> on x86.
>>
>>
>> I don't recall any software page-table walking being involved on
>> x86. Why would that be needed? On x86 we get the guest physical
>> address recorded by the CPU automatically. AFAIK in case the pagetable
>> was inaccessible for the translation of a VA, we would get an event
>> with the pagetable's PA and the type of event that generated it (i.e.
>> reading during translation).
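>>
>> (For instance, on an Intel EPT violation the guest physical address is
>> delivered in the VMCS, so the handler can read it directly with
>> something like
>>
>>     __vmread(GUEST_PHYSICAL_ADDRESS, &gpa);
>>
>> without any software walk. A sketch only, but that is the mechanism I
>> mean.)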
>
>
> You are talking about a different thing. The function
> p2m_mem_access_check_and_get_page is only used by the copy_*_guest helpers,
> which copy hypercall buffers.
>
> If you look at the x86 code (for simplicity let's focus on HVM), the
> function __hvm_copy will call paging_gva_to_gfn, which does the page-table
> walk in software (see arch/x86/mm/hap/guest_walk.c). There is no hardware
> instruction like on ARM...
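>
> A heavily simplified sketch of that path (not the exact code):
>
>     /* xen/arch/x86/hvm/hvm.c, heavily simplified */
>     static enum hvm_copy_result __hvm_copy(
>         void *buf, paddr_t addr, int size, unsigned int flags, uint32_t pfec)
>     {
>         struct vcpu *curr = current;
>         unsigned long gfn;
>
>         /* Stage-1 is walked in software, no hardware assist. */
>         gfn = paging_gva_to_gfn(curr, addr, &pfec);
>         if ( gfn == gfn_x(INVALID_GFN) )
>             return HVMCOPY_bad_gva_to_gfn;
>
>         /* ... map the gfn, check the p2m type, do the copy ... */
>     }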
>
> Although it looks like there is a hardware instruction to do address
> translation (see nvmx_hap_walk_L1_p2m), it is only for nested
> virtualization. And even in this case, it will return the IPA (i.e. the
> guest PA) only if the stage-1 page tables are accessible.
>
>>
>>>
>>>> A way to work around this would require mem_access restrictions to be
>>>> completely removed, which cannot be done unless all other vCPUs of the
>>>> domain are paused to avoid a race condition. With altp2m I could also
>>>> envision creating a temporary p2m for the vCPU at hand with the
>>>> restriction removed, so that it doesn't affect other vCPUs. However,
>>>> without a use case specifically requiring this to be implemented I
>>>> would not deem it critical.
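>>>>
>>>> In pseudocode the altp2m idea would be something like this (all
>>>> function names are hypothetical; no such ARM API exists today):
>>>>
>>>>     /* All names below are hypothetical. */
>>>>     view = altp2m_create_view(d);                /* clone of host p2m */
>>>>     altp2m_set_access(d, view, gfn, p2m_access_rwx); /* lift restriction */
>>>>     altp2m_switch_vcpu(v, view);                 /* only this vCPU */
>>>>     /* ... perform the translation / copy ... */
>>>>     altp2m_switch_vcpu(v, default_view);
>>>>     altp2m_destroy_view(d, view);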
>>>
>>>
>>>
>>> I suggested a use case in the previous e-mail... You are not affected
>>> today because Linux creates hypercall buffers on the stack and heap, so
>>> the memory will always have been accessed beforehand. I could foresee a
>>> guest using const hypercall buffers.
>>>
>>>> For now a comment in the header describing
>>>> this limitation would suffice from my perspective.
>>>
>>>
>>>
>>> So you are going to defer everything until someone actually hits it? It
>>> might be time for you to focus a bit more on other use cases...
>>>
>>
>> Yes, as long as this is not a critical issue that breaks mem_access
>> and can be worked around, I'll postpone spending time on it. If someone
>> finds the time in the meantime to submit a patch fixing it, I would be
>> happy to review and test it.
>
>
> I will be happy to keep the mem_access code in p2m.c until I see a strong
> reason to move it into a separate file.
>

Does that mean you want to take over maintainership of mem_access on
ARM? Otherwise I don't think that is an acceptable reason to keep it
in p2m.c.

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

