
Re: [Xen-devel] [ARM] SMC (and HVC) handling in hypervisor



On Mon, Feb 13, 2017 at 9:29 AM, Volodymyr Babchuk
<vlad.babchuk@xxxxxxxxx> wrote:
> Tamas,
>
> On 13 February 2017 at 18:20, Tamas K Lengyel <tamas.k.lengyel@xxxxxxxxx> 
> wrote:
>> On Fri, Feb 10, 2017 at 5:14 PM, Volodymyr Babchuk
>> <vlad.babchuk@xxxxxxxxx> wrote:
>>> Hello,
>>>
>>> This e-mail is a sort of follow-up to two threads: [1] (my thread
>>> about TEE interaction) and [2] (Edgar's thread about handling SMC
>>> calls in platform_hvc). I want to discuss a broader topic here.
>>>
>>> Obviously, there is a growing number of SMC users, and the current
>>> state of SMC handling in Xen satisfies nobody. My team wants to
>>> handle SMCs in a secure way; Xilinx wants to forward some calls
>>> directly to the Secure Monitor while allowing others to be handled
>>> in userspace; and so on.
>>>
>>> My proposal is to gather all requirements for SMC (and HVC)
>>> handling in one place (e.g. in this mail thread). Once we have a
>>> clear picture of what we want, we will be able to develop a
>>> solution that satisfies us all. At least, I hope so :)
>>>
>>> I also want to point out that there is an ARM document called "SMC
>>> Calling Convention" [3]. According to it, any aarch64 hypervisor
>>> "must implement the Standard Secure and Hypervisor Service calls".
>>> At the moment Xen does not conform to this.
>>>
>>> So, let's get started with the requirements:
>>> 0. There is not much difference between SMC and HVC handling (at
>>> least according to the SMCCC).
>>> 1. The hypervisor should at least provide its own UUID and version
>>> when called via SMC/HVC.
>>> 2. The hypervisor should forward some calls from dom0 directly to
>>> the Secure Monitor (Xilinx use case).
>>> 3. The hypervisor should virtualize PSCI calls, CPU service calls,
>>> ARM architecture service calls, etc.
>>> 4. The hypervisor should handle TEE calls in a secure way (e.g. no
>>> untrusted handlers in dom0 userspace).
>>> 5. The hypervisor should support multiple TEEs (selectable at least
>>> at compile time).
>>> 6. The hypervisor should do this as fast as possible (DRM playback
>>> use case).
>>> 7. All domains (including dom0) should be handled in the same way.
>>> 8. Not all domains will have the right to issue certain SMCs.
>>> 9. The hypervisor will issue its own SMCs in some cases.
>>
>> 10. Domains on which the monitor privileged call feature is enabled
>> (which is by default disabled for all domains) should not be able to
>> issue SMCs such that they reach the firmware directly. Xen should not
>> bounce such calls to the firmware on behalf of the domain. Xen should
>> not alter the state of the domain automatically (e.g. by incrementing
>> the PC). These calls should be exclusively transferred to the monitor
>> subscriber for further processing. HVC calls need not be included in
>> the monitor forwarding as long as the HVC call can be governed by XSM.
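
As a concrete aside on the list above: requirement 1 and the SMCCC
mandate mostly boil down to answering the per-range query calls. Here
is a minimal sketch of what that could look like in a hypervisor's
SMC/HVC trap path. The SMCCC constants below come from the spec; the
trap-frame struct, function name and UID words are made up for
illustration, and this is not actual Xen code.

#include <stdint.h>

/* SMCCC function-ID layout (SMC Calling Convention). */
#define SMCCC_FAST_CALL      (1u << 31)     /* bit 31: fast vs. yielding call */
#define SMCCC_OWNER_SHIFT    24             /* bits 29:24: owning entity      */
#define SMCCC_OWNER_MASK     0x3fu
#define SMCCC_OWNER_STD_HYP  5u             /* Standard Hypervisor Service    */
#define SMCCC_FUNC_MASK      0xffffu        /* bits 15:0: function number     */
#define SMCCC_FN_CALL_COUNT  0xff00u
#define SMCCC_FN_CALL_UID    0xff01u
#define SMCCC_FN_REVISION    0xff03u
#define SMCCC_NOT_SUPPORTED  ((uint64_t)-1) /* returned in x0 for unknown IDs */

struct trap_regs { uint64_t x[8]; };        /* hypothetical guest x0..x7      */

/* Called from the SMC/HVC trap handler; note that the same code can
 * serve both conduits, which is what requirement 0 is getting at. */
static void handle_std_hyp_call(struct trap_regs *regs)
{
    uint32_t fid = (uint32_t)regs->x[0];

    if (!(fid & SMCCC_FAST_CALL) ||
        ((fid >> SMCCC_OWNER_SHIFT) & SMCCC_OWNER_MASK) != SMCCC_OWNER_STD_HYP) {
        regs->x[0] = SMCCC_NOT_SUPPORTED;
        return;
    }

    switch (fid & SMCCC_FUNC_MASK) {
    case SMCCC_FN_CALL_COUNT:
        regs->x[0] = 3;                     /* three calls implemented        */
        break;
    case SMCCC_FN_CALL_UID:
        /* Placeholder words; a real hypervisor returns its own UUID here. */
        regs->x[0] = 0x01020304;
        regs->x[1] = 0x05060708;
        regs->x[2] = 0x090a0b0c;
        regs->x[3] = 0x0d0e0f10;
        break;
    case SMCCC_FN_REVISION:
        regs->x[0] = 1;                     /* major                          */
        regs->x[1] = 0;                     /* minor                          */
        break;
    default:
        regs->x[0] = SMCCC_NOT_SUPPORTED;
        break;
    }
}
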
>
> Looks like you are describing how SMC handling is implemented at the
> moment. I agree that one can use the VM monitor feature to handle
> SMCs. But is there any use case for this? You could probably
> implement a userspace-based TEE in a privileged domain, but for me
> that ruins the whole idea of a TEE.

Yes, I have two separate use cases for this exact setup. The first is
an experimental security setup for ARM (described in
https://www.sec.in.tum.de/publications/publication/322); the second is
stealthy malware analysis, where untrusted code in a guest domain
should only be able to interact with Xen and not the firmware.

Also, I'm not sure why having this option in Xen would ruin any other
system needing SMCs, such as the TEE in your case. The two use cases
may not be compatible with each other, i.e. they cannot be used
simultaneously. But having the option for the user to decide which one
they want to use should have no detrimental effect.
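
For reference, the subscriber side of the monitor approach is fairly
small. Below is only a sketch of how a privileged-domain tool might
turn the privileged-call monitor on via libxc; the exact call names
and signatures are assumptions to be checked against the Xen tree you
are on, and the vm_event ring setup and event loop are left out.

#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }

    uint32_t domid = (uint32_t)atoi(argv[1]);

    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return 1;

    /* Once enabled, SMCs from the domain are no longer bounced to the
     * firmware; they are delivered to the monitor ring as
     * privileged-call events, and the subscriber decides how to
     * respond (emulate, advance the PC, or deny). */
    if (xc_monitor_privileged_call(xch, domid, true) < 0) {
        perror("xc_monitor_privileged_call");
        xc_interface_close(xch);
        return 1;
    }

    /* ... set up the vm_event ring and process events in a loop ... */

    xc_interface_close(xch);
    return 0;
}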

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

