
Re: [Xen-devel] [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.



On Tue, Aug 2, 2016 at 10:05 AM, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:
> On 02/08/16 16:48, Tamas K Lengyel wrote:
>> On Tue, Aug 2, 2016 at 5:17 AM, George Dunlap <george.dunlap@xxxxxxxxxx> 
>> wrote:
>>> On 02/08/16 08:38, Julien Grall wrote:
>>>> Hello Tamas,
>>>>
>>>> On 01/08/2016 21:41, Tamas K Lengyel wrote:
>>>>> On Mon, Aug 1, 2016 at 1:55 PM, Julien Grall <julien.grall@xxxxxxx>
>>>>> wrote:
>>>>>>> we did discuss whether altp2m on ARM should be exposed to guests or
>>>>>>> not, but we did not agree on whether restricting it on ARM is
>>>>>>> absolutely necessary. Altp2m was designed, even on x86, to be
>>>>>>> accessible from within the guest on all systems irrespective of
>>>>>>> actual hardware support for it. Thus, this design fits ARM as well,
>>>>>>> where there is no dedicated hardware support; from the altp2m
>>>>>>> perspective there is no difference.
>>>>>>
>>>>>>
>>>>>> Really? I looked at the design document [1], which is Intel-focused.
>>>>>> The same goes for the code (see p2m_flush_altp2m in
>>>>>> arch/x86/mm/p2m.c).
>>>>>
>>>>> That design cover letter specifically mentions that "Both VMFUNC and #VE
>>>>> are designed such that a VMM can emulate them on legacy CPUs". While
>>>>> they certainly had only Intel hardware in mind, the software route can
>>>>> be taken on ARM as well. As our primary use-case is purely
>>>>> external use of altp2m, we have not implemented the bits that enable
>>>>> the injection of mem_access faults into the guest (the equivalent of
>>>>> #VE). Whether altp2m switching from within the guest makes sense
>>>>> without that is beyond the scope of this series, but as it could
>>>>> technically be implemented in the future, I don't see a reason to
>>>>> disable that possibility right away.
>>>>
>>>> The question here is how a guest could take advantage of access to
>>>> altp2m on ARM today. Whilst on x86 a guest can be notified about a
>>>> mem_access change, this is not yet the case on ARM.
>>>>
>>>> So, from my understanding, exposing this feature to a guest is like
>>>> exposing a no-op with side effects. We should avoid exposing features
>>>> to the guest until there is a real use and the guest can do something
>>>> useful with them.
>>>
>>> Having guest altp2m support without the equivalent of a #VE does seem
>>> pretty useless.  Would you disagree with this assessment, Tamas?
>>>
>>> Every interface we expose to the guest increases the attack surface,
>>> so it seems like until there is a use case for guest altp2m, we should
>>> probably disable it.
>>>
>>
>> Hi George,
>> I disagree. On x86, VMFUNC EPTP switching is not bound to #VE in
>> any way. The two certainly benefit from being used together, but
>> there is no enforced interdependence between them. It is certainly
>> possible to derive a use case for just having the altp2m switch
>> operations available to the guest. For example, I could imagine the
>> gfn remapping being used to protect kernel memory areas against
>> information disclosure by only switching to the accessible mapping
>> when certain conditions are met.
>
> That's true -- I suppose gfn remapping is something that would be useful
> even without #VE.
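
To make the remapping idea a bit more concrete, below is a rough sketch
of what it looks like when driven externally through libxc, using the
xc_altp2m_* calls as they appear in 4.7-era xenctrl.h (gfn values made
up, error handling trimmed, so treat it as illustrative only rather than
a definitive implementation). A guest-side user would issue the
corresponding HVMOP_altp2m hypercalls itself instead of going through
the toolstack:

    /* Illustrative sketch: remap a sensitive gfn in a restricted view. */
    #include <stdlib.h>
    #include <xenctrl.h>

    int main(int argc, char **argv)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        domid_t domid;
        uint16_t view_id;

        if ( !xch || argc < 2 )
            return 1;
        domid = atoi(argv[1]);

        /* Enable altp2m for the domain and create an extra view. */
        xc_altp2m_set_domain_state(xch, domid, 1);
        xc_altp2m_create_view(xch, domid, XENMEM_access_rwx, &view_id);

        /*
         * In the new view, remap the "sensitive" gfn 0x1000 to a decoy
         * page at 0x2000 (both values purely illustrative).  Switching
         * views then controls which contents the guest actually sees.
         */
        xc_altp2m_change_gfn(xch, domid, view_id, 0x1000, 0x2000);
        xc_altp2m_switch_to_view(xch, domid, view_id);

        xc_interface_close(xch);
        return 0;
    }
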
>
>> As our use case is purely external, implementing the emulated #VE has
>> been deemed out of scope at this time, but it could certainly be
>> implemented for ARM as well. Now that I'm thinking about it, it might
>> actually not be necessary to implement #VE the way x86 does, by
>> injecting an interrupt; we might just be able to allow the domain to
>> enable the existing mem_access ring directly.
>
> That would be a possibility, but before that could be considered a
> feature we'd need someone to go through and make sure that this
> self-mem_access functionality worked properly.  (And I take it at the
> moment that's not work you're volunteering to do.)

Right, not at this time; it's a bit beyond our scope for now.
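
For reference, what such a guest-enabled ring would be standing in for
is, on the external side, essentially marking gfns in a view so that
violations land on the existing mem_access/vm_event ring. A minimal
sketch using the 4.7-era libxc names (ring setup and event consumption
omitted, gfn purely illustrative):

    #include <xenctrl.h>

    /* Make a gfn read/execute-only in the given altp2m view so that
     * any write to it raises a mem_access event on the monitor ring. */
    static int protect_gfn(xc_interface *xch, domid_t domid,
                           uint16_t view_id, xen_pfn_t gfn)
    {
        return xc_altp2m_set_mem_access(xch, domid, view_id, gfn,
                                        XENMEM_access_rx);
    }
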

Thanks,
Tamas
