
Re: [Xen-devel] [PATCH v2 08/13] optee: add support for RPC SHM buffers





On 09/12/2018 02:51 PM, Volodymyr Babchuk wrote:
Hi,

Hi,


On 12.09.18 13:59, Julien Grall wrote:
Hi Volodymyr,

On 09/11/2018 08:30 PM, Volodymyr Babchuk wrote:
On 11.09.18 14:53, Julien Grall wrote:
On 10/09/18 18:44, Volodymyr Babchuk wrote:
On 10.09.18 16:01, Julien Grall wrote:
On 03/09/18 17:54, Volodymyr Babchuk wrote:
OP-TEE usually uses the same idea with command buffers (see the
previous commit) to issue RPC requests. The problem is that initially
it has no buffer where it can write a request. So the first RPC
request it makes is special: it asks the NW to allocate a shared
buffer for the other RPC requests. Usually this buffer is allocated
only once per OP-TEE thread and it remains allocated all the time
until shutdown.

The mediator needs to pin these buffers to make sure that the domain
can't transfer them to someone else. They also should be mapped into
the Xen address space, because the mediator needs to check responses
from guests.
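To make "pin and map" concrete, here is a minimal sketch using generic Xen helpers. The function name is hypothetical and error handling is trimmed, so the actual patch may well differ:

    /*
     * Illustrative sketch only, not the actual patch: take a reference
     * on a guest page (the "pin") and map it into Xen's address space.
     */
    #include <xen/mm.h>
    #include <xen/domain_page.h>
    #include <asm/p2m.h>

    static void *pin_and_map_guest_page(struct domain *d, gfn_t gfn,
                                        struct page_info **pg_out)
    {
        p2m_type_t t;
        struct page_info *page = get_page_from_gfn(d, gfn_x(gfn), &t,
                                                   P2M_ALLOC);

        if ( !page )
            return NULL;

        if ( t != p2m_ram_rw )
        {
            put_page(page);             /* drop the reference we just took */
            return NULL;
        }

        *pg_out = page;                 /* keep the reference: this is the pin */
        return __map_domain_page(page); /* map into Xen's address space */
    }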

Can you explain why you always need to keep the shared buffer mapped in Xen? Why not using access_guest_memory_by_ipa every time you want to get information from the guest?
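For reference, a read through access_guest_memory_by_ipa() could look like the sketch below. The wrapper name is illustrative; access_guest_memory_by_ipa() itself is the existing Xen/Arm helper, and struct optee_msg_arg comes from the OP-TEE optee_msg.h header:

    /*
     * Sketch of the suggested approach: copy the RPC argument structure
     * out of guest memory by IPA on every access, with no persistent
     * mapping kept in Xen.
     */
    #include <xen/types.h>
    #include <asm/guest_access.h>

    static int read_rpc_arg(struct domain *d, paddr_t arg_ipa,
                            struct optee_msg_arg *arg)
    {
        return access_guest_memory_by_ipa(d, arg_ipa, arg, sizeof(*arg),
                                          false /* is_write: read from guest */);
    }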
Sorry, I just didn't know about this mechanism. But for performance reasons,
I'd like to keep these buffers always mapped. You see, RPC returns are
very frequent (one for every IRQ, actually). So I think it would be costly
to map/unmap such a buffer every time.

This is a bit misleading... This copy will *only* happen for an IRQ during an RPC. What are the chances for that? Fairly limited. If this is happening too often, then the map/unmap here will be your least concern.
Now, this copy will happen for every IRQ while the CPU is in S-EL1/S-EL0 mode. The chances are quite high, I must say. Look: OP-TEE (or a TA) is doing something, like encrypting some buffer, for example. An IRQ fires, OP-TEE immediately executes an RPC return (right from the interrupt handler) so the NW can handle the interrupt. Then the NW returns control back to OP-TEE, if it wants to.

I understand this... But the map/unmap should be negligible over the rest of the context.
I thought that map/unmap was quite a costly operation, but I may be wrong there.

At the moment, map/unmap is nearly a nop on Arm64 because all the RAM is mapped (I would avoid assuming that, though :)). The only cost is going through the p2m to translate the IPA into a PA.

For Arm32, each CPU has its own page-tables and the map/unmap (and TLB flush) will be done locally. I would still expect the impact to be minimal.

Note that today map_domain_page on Arm32 is quite simplistic. It would be possible to optimize it to lower the impact of map/unmap.
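For comparison, a transient map/unmap around a single access would look roughly like this (illustrative sketch with standard Xen helpers):

    /*
     * Illustrative sketch: transient mapping of a guest page around a
     * single read. On Arm64 map_domain_page() is nearly free today; on
     * Arm32 it builds a local mapping that unmap_domain_page() tears
     * down again.
     */
    #include <xen/mm.h>
    #include <xen/domain_page.h>

    static uint64_t read_word(struct page_info *page, unsigned int offset)
    {
        uint64_t *va = __map_domain_page(page);
        uint64_t val = va[offset / sizeof(uint64_t)];

        unmap_domain_page(va);
        return val;
    }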

[...]



It feels quite suspicious to free the memory in Xen before calling OP-TEE. I think this needs to be done afterwards.

No, it is OP-TEE that asked to free the buffer. This function is called when the NW returns from the RPC, so at this moment the NW has already freed the buffer.

But you forward that call to OP-TEE afterwards. So what would OP-TEE do with that?
Happily resume the interrupted work. Here is how RPC works (there is a code sketch after the list):

1. The NW client issues an STD call (or a yielding call in terms of SMCCC)
2. OP-TEE starts its work, but it needs to be interrupted for some
    reason: an IRQ arrived, it wants to block on a mutex, or it asks NW
    to do some work (like allocating memory or loading a TA). This is
    called an "RPC return".
3. OP-TEE suspends the thread and returns from the SMC call with code
    OPTEE_SMC_RPC_VAL(SOME_CMD) in a0, and some optional parameters in
    other registers
4. NW sees that this is an RPC, not a completed STD call, so it does
    SOME_CMD and issues another SMC with code
    OPTEE_SMC_CALL_RETURN_FROM_RPC in a0
5. OP-TEE wakes up the suspended thread and continues execution
6. pts 2-5 are repeated until OP-TEE finishes the work
7. It returns from the last SMC call with code OPTEE_SMC_RETURN_SUCCESS/
    OPTEE_SMC_RETURN_some_error in a0.
8. The optee driver sees that the call from pt. 1 has finally finished and
    returns control back to the client
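In pseudo-C, the NW side of this loop looks roughly like the sketch below. It is modelled loosely on the Linux optee driver; smccc_call() and handle_rpc() are stand-ins for the real SMCCC plumbing, while the OPTEE_SMC_* codes are the real ones from optee_smc.h:

    /*
     * Pseudo-C sketch of steps 1-8 above, simplified and not verbatim
     * from any driver. smccc_call() and handle_rpc() are stand-ins.
     */
    struct smccc_res { unsigned long a0, a1, a2, a3; };

    void smccc_call(unsigned long a0, unsigned long arg, struct smccc_res *res);
    void handle_rpc(struct smccc_res *res);

    static unsigned long do_std_call(unsigned long arg_phys)
    {
        struct smccc_res res;
        unsigned long a0 = OPTEE_SMC_CALL_WITH_ARG;   /* step 1 */

        for ( ;; )
        {
            smccc_call(a0, arg_phys, &res);

            if ( !OPTEE_SMC_RETURN_IS_RPC(res.a0) )   /* steps 7-8 */
                break;

            handle_rpc(&res);                         /* step 4: do SOME_CMD */
            a0 = OPTEE_SMC_CALL_RETURN_FROM_RPC;      /* step 5: resume OP-TEE */
        }

        return res.a0;  /* OPTEE_SMC_RETURN_SUCCESS or an error code */
    }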

Thank you for the explanation. As I mentioned in another thread, it would be good to have some kind of high-level explanation of all those interactions in the tree. If it already exists, then a pointer in the code would do.
The high level is covered at [1], and the low level is covered in the already mentioned header files.

Could you add those pointers at the top of the OP-TEE file?

But I don't know of any explanation at the level of detail I gave you above.

That's fine. Can you add that in the commit message?

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

