
Re: [Xen-devel] [PATCH v2 00/25] arm/altp2m: Introducing altp2m to ARM.



On Wed, Aug 3, 2016 at 8:08 AM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hello Sergej,
>
> Please try to reply to all when answering on the ML. Otherwise the answer
> may be delayed/lost.
>
> On 03/08/16 13:45, Sergej Proskurin wrote:
>>
>> The interesting part about #VE is that it allows handling certain
>> violations (currently limited to EPT violations -- future
>> implementations might also cover further violation types) inside the
>> guest, without the need to explicitly trap into the VMM. Thus, #VE
>> allows switching between different memory views in-guest. Because of
>> this, I also agree that event channels would suffice in our case,
>> since we do not have sufficient hardware support on ARM and would
>> need to trap into the VMM anyway.
>
>
> The cost of doing a hypercall on ARM is very small compared to x86 (~1/3 of
> the number of x86 cycles) because we don't have to save all the state every
> time. So I am not convinced by the argument that limiting the number of
> traps to the hypervisor justifies allowing a guest to play with altp2m on ARM.
>
> I will have to see a concrete example before going forward with the event
> channel.

It is out of scope for what we are trying to achieve with this series
at this point. The question at hand is really whether the altp2m
switch and gfn remapping ops should be exposed to the guest. Without
#VE - which we are not implementing - changing the mem_access settings
from within the guest doesn't make sense, so restricting access there
is reasonable.
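To make the semantics being discussed concrete, here is a toy model of
what an altp2m view switch does conceptually: several views cover the
same gfn space with possibly different access permissions, and switching
the active view changes which permissions apply without modifying any
mapping. This is only an illustrative sketch; the struct and function
names are made up and do not reflect Xen's actual internals.

```c
#include <assert.h>

/* Toy model of alternate p2m views: each view covers the same gfn
 * space but may carry different access permissions. Names are
 * illustrative only, not Xen's real data structures. */
#define MAX_VIEWS 4
#define MAX_GFNS  16

typedef enum { ACCESS_RWX, ACCESS_RW, ACCESS_NONE } access_t;

struct toy_domain {
    access_t view[MAX_VIEWS][MAX_GFNS]; /* per-view gfn permissions */
    unsigned int active_view;           /* view currently in effect */
};

/* Switching views only changes which permission set applies; the
 * per-view settings themselves are left untouched. */
static void altp2m_switch(struct toy_domain *d, unsigned int view)
{
    assert(view < MAX_VIEWS);
    d->active_view = view;
}

static access_t effective_access(const struct toy_domain *d,
                                 unsigned int gfn)
{
    assert(gfn < MAX_GFNS);
    return d->view[d->active_view][gfn];
}
```

The point of the model is that a switch is cheap and stateless with
respect to the views: on x86 with #VE/VMFUNC the guest can do it without
exiting, whereas on ARM (as Julien notes) every switch would be a trap
to the hypervisor anyway.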

As I outlined, the switch and gfn remapping can have legitimate
use-cases by themselves without any mem_access bits involved. However,
it is not our use-case, so we have no problem restricting access there
either. So the question is whether that is the right path to take
here; at this point I'm not sure whether there is agreement on it.
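For readers unfamiliar with the gfn remapping op: the idea is that an
alternate view can redirect a guest frame to a different machine frame
while the host view keeps the original translation. A minimal sketch of
that behaviour, again with invented names rather than Xen's real API:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VIEWS 4
#define MAX_GFNS  16

/* Toy model: each view holds its own gfn->mfn translation. Names
 * are illustrative; Xen's actual p2m code looks nothing like this. */
struct toy_p2m {
    uint64_t mfn[MAX_VIEWS][MAX_GFNS];
};

static void p2m_init_identity(struct toy_p2m *p)
{
    for (unsigned int v = 0; v < MAX_VIEWS; v++)
        for (unsigned int g = 0; g < MAX_GFNS; g++)
            p->mfn[v][g] = g; /* identity-map every view to start */
}

/* Analogue of a change-gfn op: only the named alternate view is
 * modified; all other views keep their existing translation. */
static void altp2m_change_gfn(struct toy_p2m *p, unsigned int view,
                              uint64_t gfn, uint64_t new_mfn)
{
    assert(view < MAX_VIEWS && gfn < MAX_GFNS);
    p->mfn[view][gfn] = new_mfn;
}
```

This is the kind of standalone use-case (e.g. presenting a patched copy
of a page in one view only) that can be useful even with no mem_access
bits involved.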

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
