
Re: [Xen-devel] [PATCH v3 07/11] optee: add support for RPC SHM buffers



Hi,

On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:

Julien Grall writes:

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>

OP-TEE usually uses the same idea with command buffers (see the
previous commit) to issue RPC requests. The problem is that initially
it has no buffer in which it can write a request. So the first RPC
request it makes is special: it asks NW to allocate a shared
buffer for subsequent RPC requests. Usually this buffer is allocated
only once per OP-TEE thread and remains allocated until shutdown.

By shutdown you mean for the OS or the thread?
Shutdown of OP-TEE, actually. But the guest can ask OP-TEE to de-allocate these
buffers, and this is what the Linux driver does when it unloads.
So, basically, the Linux driver says "I want to disable RPC buffer caching"
and then OP-TEE issues a number of RPCs to free those buffers.


The mediator needs to pin such buffers to make sure that the domain can't
transfer them to someone else.

The life cycle of this buffer is controlled by OP-TEE: it asks the guest
to create the buffer and it asks the guest to free it.

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
---

   Changes from v2:
    - Added a check to ensure that the guest does not return two SHM buffers
      with the same cookie
    - Fixed coding style
    - Storing RPC parameters during the RPC return to make sure that the guest
      will not change them during call continuation

   xen/arch/arm/tee/optee.c | 140 ++++++++++++++++++++++++++++++++++++++-
   1 file changed, 138 insertions(+), 2 deletions(-)
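For reference, the per-buffer bookkeeping described above boils down to something like the structure sketched below. Only the fields actually referenced in the hunks further down (list, guest_page, cookie) are shown; the exact layout in the patch is an assumption.

    struct shm_rpc {
        struct list_head list;          /* Linked into ctx->shm_rpc_list. */
        struct page_info *guest_page;   /* Pinned guest page backing the buffer. */
        uint64_t cookie;                /* Cookie supplied by the guest to identify the buffer. */
    };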

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index dc90e2ed8e..771148e940 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -30,6 +30,12 @@
    * OP-TEE spawns a thread for every standard call.
    */
   #define MAX_STD_CALLS   16
+/*
+ * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
+ * for one SHM buffer per thread, so this also corresponds to OP-TEE
+ * option CFG_NUM_THREADS
+ */

Same as patch #6 regarding CFG_NUM_THREADS.
Right now OP-TEE will not allocate more than one buffer per OP-TEE
thread, and I can see no reason why that would change. So, basically, I could
remove MAX_RPC_SHMS altogether and use MAX_STD_CALLS instead. But then
it would not be so obvious why I compare the number of SHM buffers with the
number of std calls. Thus, I think it is good to have a separate
define and comment.

I am not against having the 2 defines; what I was pointing out is that the documentation on top is incorrect, as in patch #6.

If you happen to make MAX_STD_CALLS dynamic, then this should also be dynamic.
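For illustration, one way to keep a separate, documented define while making the coupling to the number of standard calls explicit would be along these lines; the comment wording and the exact form of the define are a sketch, not the patch's code:

    /*
     * Maximum number of pre-allocated SHM buffers. OP-TEE generally asks
     * for one RPC SHM buffer per OP-TEE thread, so the limit mirrors
     * MAX_STD_CALLS (and would have to become dynamic if that does).
     */
    #define MAX_RPC_SHMS    MAX_STD_CALLS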


[...]
@@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
       spin_unlock(&ctx->lock);
   }
   +static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
+                                                paddr_t gaddr,

As I said on v3, I would prefer if you used gfn_t here. This would
add more safety.
Sure, will do.

[...]

+    shm_rpc->guest_page = get_page_from_gfn(current->domain,
+                                            paddr_to_pfn(gaddr),
+                                            NULL,
+                                            P2M_ALLOC);

I think it would be wrong to share any page other than p2m_ram_rw with OP-TEE.

So it should be like this:

     shm_rpc->guest_page = get_page_from_gfn(current->domain,
                                             paddr_to_pfn(gaddr),
                                             &p2m,
                                             P2M_ALLOC);
     if ( !shm_rpc->guest_page || p2m != p2m_ram_rw)
         goto err;

?

Sounds good to me.
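Putting the two suggestions together (a gfn_t parameter plus the p2m type check), the pinning step could look roughly like the sketch below. The helper name and the error handling around it are assumptions, not the patch's code:

    static struct page_info *pin_guest_page(struct domain *d, gfn_t gfn)
    {
        p2m_type_t t;
        struct page_info *page;

        /* Take a reference on the page and retrieve its p2m type. */
        page = get_page_from_gfn(d, gfn_x(gfn), &t, P2M_ALLOC);
        if ( !page )
            return NULL;

        /* Only ordinary read-write RAM should be shared with OP-TEE. */
        if ( t != p2m_ram_rw )
        {
            put_page(page);
            return NULL;
        }

        return page;
    }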


[...]

+static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
+{
+    struct shm_rpc *shm_rpc;
+    bool found = false;
+
+    spin_lock(&ctx->lock);
+
+    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
+    {
+        if ( shm_rpc->cookie == cookie )
+        {
+            found = true;
+            list_del(&shm_rpc->list);
+            break;
+        }
+    }
+    spin_unlock(&ctx->lock);

I think you are missing an atomic_dec(&ctx->shm_rpc_count) here.
Good catch. Thank you.
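For clarity, with the decrement added the function would read roughly as below. Everything after the unlock is an assumption, since that part is not quoted above:

    static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
    {
        struct shm_rpc *shm_rpc;
        bool found = false;

        spin_lock(&ctx->lock);

        list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
        {
            if ( shm_rpc->cookie == cookie )
            {
                found = true;
                list_del(&shm_rpc->list);
                break;
            }
        }
        spin_unlock(&ctx->lock);

        if ( !found )
            return;

        /* Balance the increment done when the buffer was allocated. */
        atomic_dec(&ctx->shm_rpc_count);

        /* Drop the reference taken when the guest page was pinned. */
        if ( shm_rpc->guest_page )
            put_page(shm_rpc->guest_page);

        xfree(shm_rpc);
    }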

[...]
   +static void handle_rpc_func_alloc(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs)
+{
+    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+
+    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
+        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");

Should you not bail out in that case? Also, I would turn it into a gdprintk.
OP-TEE does its own checks, and that check will fail there as well. Then OP-TEE will
issue a request to free this SHM.

I think it is better if we stay on the safe side. I.e. if we know there will be an error (like here), then you need to return an error from Xen rather than calling OP-TEE. Otherwise, you may end up with a nasty problem.

Also, I think you want a comment (maybe in the commit message) explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to PAGE_SIZE.


But you have a point. I need to rework the error path there.
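One possible shape for the reworked path is sketched below: demote the message to gdprintk and report the allocation as failed to OP-TEE instead of forwarding a bogus buffer. Clearing the pointer to signal failure is an assumption about the final error path, not the patch's code:

    static void handle_rpc_func_alloc(struct optee_domain *ctx,
                                      struct cpu_user_regs *regs)
    {
        paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);

        if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
        {
            gdprintk(XENLOG_WARNING,
                     "Domain returned invalid RPC command buffer\n");
            /*
             * Do not hand the misaligned buffer to OP-TEE; make the
             * allocation look failed instead.
             */
            ptr = 0;
        }

        /* ... continue the call, passing ptr (possibly zero) back to OP-TEE ... */
    }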

[...]
       case OPTEE_SMC_RPC_FUNC_FREE:
-        /* TODO: Add handling */
+    {
+        uint64_t cookie = call->rpc_params[0] << 32 |
+                            (uint32_t)call->rpc_params[1];

The indentation looks weird here.
You are right. How should it look? Would this be okay?

         uint64_t cookie = call->rpc_params[0] << 32 |
                 (uint32_t)call->rpc_params[1];

call and (uint32_t) should be aligned:

uint64_t cookie = call->rcp... |
                  (uint32_t)call..
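Applied to the full expression quoted earlier, the aligned version would then be:

        uint64_t cookie = call->rpc_params[0] << 32 |
                          (uint32_t)call->rpc_params[1];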

Cheers,

--
Julien Grall
