
Re: [Xen-devel] [RFC] WIP: optee: add OP-TEE mediator



Hi Julien,

On Mon, Dec 04, 2017 at 02:30:32PM +0000, Julien Grall wrote:
> Hi,
> 
> I am going to answer both e-mails (Stefano and Volodymyr) at once.
> 
> On 01/12/17 22:58, Stefano Stabellini wrote:
> >On Mon, 27 Nov 2017, Volodymyr Babchuk wrote:
> >>This is follow-up to our conversation during community call.
> >>You asked me to send OP-TEE mediator as a patch, so we can
> >>discuss it in the mailing list. So, there it is. I squashed
> >>two patches into one and skipped patches that we already
> >>discussed.
> >>
> >>So, this is basically all that is needed to support OP-TEE in Xen.
> >>There are some TODOs left all over the code, but I don't
> >>expect that implementing them would significantly
> >>increase the codebase. Currently the mediator parses requests to
> >>perform address translation, and that is all that needs to be done
> >>to allow guests to work with OP-TEE.
> >>
> >>This became possible because I completely revisited virtualization
> >>support in OP-TEE. I have found a way to enforce complete isolation
> >>between different guest states. This removes many questions, like
> >>usage quotas, RPC routing, sudden guest death, data isolation, etc.
> 
> I disagree here. You still have to handle sudden guest death in Xen and
> release any memory allocated in the hypervisor for that guests.
I was speaking from the OP-TEE point of view.

On the mediator side, there is the callback optee_domain_destroy() with
the comment "TODO: Clean call contexts and SHMs associated with
domain". This callback will be called during domain destruction and it
will free any resources that the mediator has allocated on behalf of
the guest.
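
To make that more concrete, cleanup against the current global lists
could look roughly like this (optee_domain_cleanup() is just an
illustrative name; the locking and freeing details will need more care
in the real code):

static void optee_domain_cleanup(struct domain *d)
{
    struct std_call_ctx *ctx, *ctx_tmp;
    struct shm *shm, *shm_tmp;
    int i;

    /* Drop every call context created on behalf of the dying guest. */
    spin_lock(&call_ctx_list_lock);
    list_for_each_entry_safe( ctx, ctx_tmp, &call_ctx_list, list )
    {
        if ( ctx->domid != d->domain_id )
            continue;

        list_del(&ctx->list);

        if ( ctx->xen_arg )
            free_xenheap_page(ctx->xen_arg);
        if ( ctx->guest_arg )
            unmap_domain_page(ctx->guest_arg);
        for ( i = 0; i < MAX_NONCONTIG_ENTRIES; i++ )
            if ( ctx->non_contig[i] )
                free_xenheap_pages(ctx->non_contig[i],
                                   ctx->non_contig_order[i]);
        xfree(ctx);
    }
    spin_unlock(&call_ctx_list_lock);

    /* Same for the SHM descriptors registered by that guest. */
    spin_lock(&shm_list_lock);
    list_for_each_entry_safe( shm, shm_tmp, &shm_list, list )
    {
        if ( shm->domid != d->domain_id )
            continue;

        list_del(&shm->list);
        if ( shm->guest_arg )
            unmap_domain_page(shm->guest_arg);
        xfree(shm);
    }
    spin_unlock(&shm_list_lock);
}

optee_domain_destroy() would then call it right after notifying OP-TEE
with OPTEE_SMC_VM_DESTROYED.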

> >>
> >>I'm aware that I didn't address all comments from the previous
> >>discussion. Sorry for this. I'm currently busy with OP-TEE,
> >>and I think a proper mediator implementation will be possible
> >>only after I stabilize the OP-TEE part.
> >>
> >>So I'm not asking anybody to do a thorough review. I just want to
> >>share my status and discuss this code as a whole.
> >
> >Thank you for sharing! Actually, I think it is not too bad as a starting
> >point.
> >
> >I'll also try to summarize some key concepts we have been discussing
> >about OP-TEE support in Xen.
> >
> >
> >= Xen cannot protect the system from a broken/insecure OP-TEE =
> >
> >OP-TEE runs at a higher privilege level than Xen, thus, we can't really
> >expect Xen to protect the system from a broken OP-TEE. Also, Xen cannot
> >really protect OP-TEE from a malicious caller.
> >
> >What we can and should do is to protect Xen, the OP-TEE mediator in Xen
> >specifically, from malicious attackers.
> >
> >In other words, we are not responsible if a call, forwarded to OP-TEE by
> >Xen, ends up crashing OP-TEE, therefore taking down the system.
> >
> >However, we have to take special care to prevent callers from crashing
> >or taking over the mediator in Xen. We also have to make sure that a
> >caller won't be able to exhaust Xen resources or DOS Xen (allocate too
> >much memory, infinite loops in Xen, etc). This brings me to the next
> >topic.
> >
> >
> >= Error checking / DOS protection =
> >
> >We need powerful checks on arguments passed by the caller and evaluated
> >by the mediator.
> >
> >For example, we cannot expect the guest to actually pass arguments in
> >the format expected by translate_params. ctx->xen_arg could be
> >gibberish.
> >
> > From the resource allocation point of view, it looks like every
> >handle_std_call allocates a new context; every copy_std_request
> >allocates a new Xen page. It would be easy to exhaust Xen resources.
> >Maybe we need a max concurrent request limit or max page allocation per
> >domain or something of the kind.
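
For illustration, both limits could be sketched like this
(MAX_STD_CALLS, current_calls, validate_xen_arg and enter_std_call are
made-up names; the size check assumes the OPTEE_MSG_GET_ARG_SIZE()
helper from optee_msg.h):

#define MAX_STD_CALLS   16              /* arbitrary limit, sketch only */

static atomic_t current_calls = ATOMIC_INIT(0);

/* Reject argument buffers whose declared params don't fit the copied page. */
static bool validate_xen_arg(struct optee_msg_arg *xen_arg)
{
    return OPTEE_MSG_GET_ARG_SIZE(xen_arg->num_params) <=
           OPTEE_MSG_NONCONTIG_PAGE_SIZE;
}

/* Bound the number of std calls (and so contexts/pages) in flight. */
static bool enter_std_call(void)
{
    if ( atomic_add_return(1, &current_calls) > MAX_STD_CALLS )
    {
        atomic_dec(&current_calls);
        return false;
    }
    return true;
}

A per-domain counter (tied to the per-domain context discussed below)
would probably fit better than a global one.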
> >
> >
> >= Locks and Lists =
> >
> >The current lock and list are global. Do you think they should
> >actually be global or per-domain? I do realize that only dom0 is
> >allowed to make calls now, so the question for the moment is not too
> >useful.
> 
> Hmmm, the commit message says: "With this patch OP-TEE successfully passes
> own tests, while client is running in DomU.". As said above, we need to make
> sure that Xen will release memory allocated for SMC requests when a domain
> dies. So having a list/lock per domain would make more sense (easier to go
> through).
You are totally right. I can't say why I did it the way I did :-) I
think it was the easier approach during the initial implementation.
But I certainly plan to hold a separate context with its own
lists/locks for every domain. This will also ease cleanup, because all
of a domain's data is held in one place.
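
Roughly, the per-domain context could be sketched like this (struct
optee_domain_ctx and the field names are not final):

struct optee_domain_ctx {
    struct list_head call_ctx_list;   /* std_call_ctx'es of this domain */
    struct list_head shm_list;        /* SHM buffers of this domain */
    spinlock_t lock;                  /* protects both lists */
    atomic_t call_count;              /* for a per-domain call limit */
};

optee_domain_create() would allocate it and hang it off the domain, and
optee_domain_destroy() would only have to walk this one structure to
release everything.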

> >
> >
> >= Xen command forwarding =
> >
> >In the code below, it looks like Xen is forwarding everything to OP-TEE.
> >Are there some commands Xen should avoid forwarding? Should we have a
> >whitelist or a blacklist?
> >
> >
> >= Long running OP-TEE commands and interruptions =
> >
> >I have been told that some OP-TEE RPC commands might take long to
> >complete. Is that right? Like for example one of the
> >OPTEE_MSG_RPC_CMD_*?
> >
> >If so, we need to think what to do in those cases. Specifically, do we
> >need a technique to restart certain commands in Xen, so that when they
> >run for too long and get interrupted by something (such as an
> >interrupt) we know how to restart them? In fact, do we need to setup a
> >timer interrupt to make sure the command doesn't block Xen for too long,
> >consuming the next vcpu's slot as well?
> 
> I am not sure I understand your suggestion here. Where do you want that
> timer? In Xen? If so, I don't think this could work. It is OP-TEE's job
> to break down long-running operations.
> 
> This is very similar to when a guest is doing a hypercall. Even if you
> set up a timer, the interrupt will only be received once the hypercall
> is done (or Xen decides to preempt it).
> 
> >
> >
> >= Page pinning =
> >
> >Guest pages passed to OP-TEE need to be pinned (otherwise Xen doesn't
> >guarantee they'll be available). I think the right function to call for
> >that is get_page_from_gfn or get_page.
> 
> No direct use of get_page please. We already have plenty of helpers to
> deal with the translation GFN -> Page or even copying data. I don't want
> to see more open-coded versions because they make it difficult to
> interact with other features such as memaccess and PoD.
Okay. Could you please suggest what API should be used in my case?
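
For reference, a lookup that actually takes a reference could be
sketched with get_page_from_gfn() like this (pin_guest_page() is a
made-up name; the reference would be dropped with put_page() when the
call or SHM is torn down):

static struct page_info *pin_guest_page(struct domain *d, paddr_t gaddr)
{
    p2m_type_t t;
    struct page_info *page;

    /* Takes a reference on the page, so it cannot vanish under Xen. */
    page = get_page_from_gfn(d, paddr_to_pfn(gaddr), &t, P2M_ALLOC);
    if ( !page )
        return NULL;

    if ( t != p2m_ram_rw )
    {
        put_page(page);
        return NULL;
    }

    return page;
}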

> >>---
> >>
> >>Add OP-TEE mediator as an example how TEE mediator framework
> >>works.
> >>
> >>The OP-TEE mediator supports address translation for DomUs.
> >>It tracks execution of STD calls, correctly handles memory-related RPC
> >>requests, and tracks buffers allocated for RPCs.
> >>
> >>With this patch OP-TEE successfully passes own tests, while client is
> >>running in DomU. Currently it lacks some code for exceptional cases,
> >>because this patch was used mostly to debug virtualization in OP-TEE.
> >>Nevertheless, it provides all features needed for OP-TEE mediation.
> >>
> >>WARNING: This is a development patch, it does not cover all corner
> >>cases, so, please don't use it in production.
> >>
> >>It was tested on RCAR Salvator-M3 board.
> >>
> >>Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
> >>---
> >>  xen/arch/arm/tee/Kconfig     |   4 +
> >>  xen/arch/arm/tee/Makefile    |   1 +
> >>  xen/arch/arm/tee/optee.c     | 765 +++++++++++++++++++++++++++++++++++++++++++
> >>  xen/arch/arm/tee/optee_smc.h |  50 +++
> >>  4 files changed, 820 insertions(+)
> >>  create mode 100644 xen/arch/arm/tee/optee.c
> >>
> >>diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
> >>index e69de29..7c6b5c6 100644
> >>--- a/xen/arch/arm/tee/Kconfig
> >>+++ b/xen/arch/arm/tee/Kconfig
> >>@@ -0,0 +1,4 @@
> >>+config ARM_OPTEE
> >>+   bool "Enable OP-TEE mediator"
> >>+   default n
> >>+   depends on ARM_TEE
> >>diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
> >>index c54d479..9d93b42 100644
> >>--- a/xen/arch/arm/tee/Makefile
> >>+++ b/xen/arch/arm/tee/Makefile
> >>@@ -1 +1,2 @@
> >>  obj-y += tee.o
> >>+obj-$(CONFIG_ARM_OPTEE) += optee.o
> >>diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> >>new file mode 100644
> >>index 0000000..59c3600
> >>--- /dev/null
> >>+++ b/xen/arch/arm/tee/optee.c
> >>@@ -0,0 +1,765 @@
> >>+/*
> >>+ * xen/arch/arm/tee/optee.c
> >>+ *
> >>+ * OP-TEE mediator
> >>+ *
> >>+ * Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
> >>+ * Copyright (c) 2017 EPAM Systems.
> >>+ *
> >>+ * This program is free software; you can redistribute it and/or modify
> >>+ * it under the terms of the GNU General Public License version 2 as
> >>+ * published by the Free Software Foundation.
> >>+ *
> >>+ * This program is distributed in the hope that it will be useful,
> >>+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >>+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >>+ * GNU General Public License for more details.
> >>+ */
> >>+
> >>+#include <xen/domain_page.h>
> >>+#include <xen/types.h>
> >>+#include <xen/sched.h>
> >>+
> >>+#include <asm/p2m.h>
> >>+#include <asm/tee.h>
> >>+
> >>+#include "optee_msg.h"
> >>+#include "optee_smc.h"
> >>+
> >>+/*
> >>+ * Global TODO:
> >>+ *  1. Create per-domain context, where call and shm will be stored
> >>+ *  2. Pin pages shared between OP-TEE and guest
> >>+ */
> >>+/*
> >>+ * OP-TEE violates SMCCC when it defines own UID. So we need
> >>+ * to place bytes in correct order.
> >>+ */
> >>+#define OPTEE_UID  (xen_uuid_t){{                                          \
> >>+    (uint8_t)(OPTEE_MSG_UID_0 >>  0), (uint8_t)(OPTEE_MSG_UID_0 >>  8),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_0 >> 16), (uint8_t)(OPTEE_MSG_UID_0 >> 24),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_1 >>  0), (uint8_t)(OPTEE_MSG_UID_1 >>  8),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_1 >> 16), (uint8_t)(OPTEE_MSG_UID_1 >> 24),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_2 >>  0), (uint8_t)(OPTEE_MSG_UID_2 >>  8),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_2 >> 16), (uint8_t)(OPTEE_MSG_UID_2 >> 24),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_3 >>  0), (uint8_t)(OPTEE_MSG_UID_3 >>  8),    \
> >>+    (uint8_t)(OPTEE_MSG_UID_3 >> 16), (uint8_t)(OPTEE_MSG_UID_3 >> 24),    \
> >>+    }}
> >>+
> >>+#define MAX_NONCONTIG_ENTRIES   8
> >>+
> >>+/*
> >>+ * Call context. OP-TEE can issue multiple RPC returns during one call.
> >>+ * We need to preserve context during them.
> >>+ */
> >>+struct std_call_ctx {
> >>+    struct list_head list;
> >>+    struct optee_msg_arg *guest_arg;
> >>+    struct optee_msg_arg *xen_arg;
> >>+    void *non_contig[MAX_NONCONTIG_ENTRIES];
> >>+    int non_contig_order[MAX_NONCONTIG_ENTRIES];
> >>+    int optee_thread_id;
> >>+    int rpc_op;
> >>+    domid_t domid;
> >>+};
> >>+static LIST_HEAD(call_ctx_list);
> >>+static DEFINE_SPINLOCK(call_ctx_list_lock);
> >>+
> >>+/*
> >>+ * Command buffer shared between OP-TEE and guest.
> >>+ * Warning! In the proper implementation this SHM buffer *probably* should
> >>+ * be shadowed by Xen.
> >>+ * TODO: Reconsider this.
> >>+ */
> >>+struct shm {
> >>+    struct list_head list;
> >>+    struct optee_msg_arg *guest_arg;
> >>+    struct page *guest_page;
> >>+    mfn_t guest_mfn;
> >>+    uint64_t cookie;
> >>+    domid_t domid;
> >>+};
> >>+
> >>+static LIST_HEAD(shm_list);
> >>+static DEFINE_SPINLOCK(shm_list_lock);
> >>+
> >>+static int optee_init(void)
> >>+{
> >>+    printk("OP-TEE mediator init done\n");
> >>+    return 0;
> >>+}
> >>+
> >>+static void optee_domain_create(struct domain *d)
> >>+{
> >>+    register_t resp[4];
> >>+    call_smccc_smc(OPTEE_SMC_VM_CREATED,
> >>+                   d->domain_id + 1, 0, 0, 0, 0, 0, 0, resp);
> >>+    if ( resp[0] != OPTEE_SMC_RETURN_OK )
> >>+        gprintk(XENLOG_WARNING, "OP-TEE don't want to support domain: 
> >>%d\n",
> >>+                (uint32_t)resp[0]);
> >>+    /* TODO: Change function declaration to be able to return error */
> >>+}
> >>+
> >>+static void optee_domain_destroy(struct domain *d)
> >>+{
> >>+    register_t resp[4];
> >>+    call_smccc_smc(OPTEE_SMC_VM_DESTROYED,
> >>+                   d->domain_id + 1, 0, 0, 0, 0, 0, 0, resp);
> >>+    /* TODO: Clean call contexts and SHMs associated with domain */
> >>+}
> >>+
> >>+static bool forward_call(struct cpu_user_regs *regs)
> >>+{
> >>+    register_t resp[4];
> >>+
> >>+    /* TODO: Use separate registers set to prevent leakage to guest */
> >>+    call_smccc_smc(get_user_reg(regs, 0),
> >>+                   get_user_reg(regs, 1),
> >>+                   get_user_reg(regs, 2),
> >>+                   get_user_reg(regs, 3),
> >>+                   get_user_reg(regs, 4),
> >>+                   get_user_reg(regs, 5),
> >>+                   get_user_reg(regs, 6),
> >>+                   /* VM id 0 is reserved for hypervisor itself */
> >>+                   current->domain->domain_id + 1,
> >
> >This doesn't look like it would wrap around safely.
> 
> Well, an ordinary domain will always have a domain ID < DOMID_FIRST_RESERVED
> (0x7FF0). So there is no issue here. Although, we may want a BUILD_BUG_ON()
> to catch a change of DOMID_FIRST_RESERVED.
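
Something like this at the top of optee_domain_create() would probably
do (sketch only; the exact condition is open to debate):

/* Ensure "domain_id + 1" cannot wrap a domid_t for ordinary domains. */
BUILD_BUG_ON((domid_t)(DOMID_FIRST_RESERVED + 1) < DOMID_FIRST_RESERVED);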
> 
> >
> >
> >>+                   resp);
> >>+
> >>+    set_user_reg(regs, 0, resp[0]);
> >>+    set_user_reg(regs, 1, resp[1]);
> >>+    set_user_reg(regs, 2, resp[2]);
> >>+    set_user_reg(regs, 3, resp[3]);
> >>+
> >>+    return true;
> >>+}
> >>+
> >>+static struct std_call_ctx *allocate_std_call_ctx(void)
> >>+{
> >>+    struct std_call_ctx *ret;
> >>+
> >>+    ret = xzalloc(struct std_call_ctx);
> >>+    if ( !ret )
> >>+        return NULL;
> >>+
> >>+    ret->optee_thread_id = -1;
> 
> You set it to -1. But no-one is checking that value. So what is the purpose
> of setting to -1 and not 0?
> 
> >>+    ret->domid = -1;
> 
> Please use DOMID_INVALID rather than -1. You don't know whether the latter
> will be used in the future for a domain.
> 
> >>+
> >>+    spin_lock(&call_ctx_list_lock);
> >>+    list_add_tail(&ret->list, &call_ctx_list);
> >>+    spin_unlock(&call_ctx_list_lock);
> >>+
> >>+    return ret;
> >>+}
> >>+
> >>+static void free_std_call_ctx(struct std_call_ctx *ctx)
> >>+{
> >>+    spin_lock(&call_ctx_list_lock);
> >>+    list_del(&ctx->list);
> >>+    spin_unlock(&call_ctx_list_lock);
> >>+
> >>+    if (ctx->xen_arg)
> >>+        free_xenheap_page(ctx->xen_arg);
> >>+
> >>+    if (ctx->guest_arg)
> >>+        unmap_domain_page(ctx->guest_arg);
> >>+
> >>+    for (int i = 0; i < MAX_NONCONTIG_ENTRIES; i++) {
> >>+        if (ctx->non_contig[i])
> >>+            free_xenheap_pages(ctx->non_contig[i], ctx->non_contig_order[i]);
> >>+    }
> >>+
> >>+    xfree(ctx);
> >>+}
> >>+
> >>+static struct std_call_ctx *find_ctx(int thread_id, domid_t domid)
> >>+{
> >>+    struct std_call_ctx *ctx;
> >>+
> >>+    spin_lock(&call_ctx_list_lock);
> >>+    list_for_each_entry( ctx, &call_ctx_list, list )
> >>+    {
> >>+        if  (ctx->domid == domid && ctx->optee_thread_id == thread_id )
> >>+        {
> >>+                spin_unlock(&call_ctx_list_lock);
> >>+                return ctx;
> >>+        }
> >>+    }
> >>+    spin_unlock(&call_ctx_list_lock);
> >>+
> >>+    return NULL;
> >>+}
> >>+
> >>+#define PAGELIST_ENTRIES_PER_PAGE                       \
> >>+    ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
> >>+
> >>+static size_t get_pages_list_size(size_t num_entries)
> >>+{
> >>+    int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
> >>+
> >>+    return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
> >>+}
> >>+
> >>+static mfn_t lookup_guest_ram_addr(paddr_t gaddr)
> >>+{
> >>+    mfn_t mfn;
> >>+    gfn_t gfn;
> >>+    p2m_type_t t;
> >>+    gfn = gaddr_to_gfn(gaddr);
> >>+    mfn = p2m_lookup(current->domain, gfn, &t);
> >>+    if ( t != p2m_ram_rw || mfn_eq(mfn, INVALID_MFN) ) {
> >>+        gprintk(XENLOG_INFO, "Domain tries to use invalid gfn\n");
> >>+        return INVALID_MFN;
> >>+    }
> >>+    return mfn;
> >>+}
> >>+
> >>+static struct shm *allocate_and_map_shm(paddr_t gaddr, uint64_t cookie)
> >>+{
> >>+    struct shm *ret;
> >>+
> >>+    ret = xzalloc(struct shm);
> >>+    if ( !ret )
> >>+        return NULL;
> >>+
> >>+    ret->guest_mfn = lookup_guest_ram_addr(gaddr);
> >>+
> >>+    if ( mfn_eq(ret->guest_mfn, INVALID_MFN) )
> >>+    {
> >>+        xfree(ret);
> >>+        return NULL;
> >>+    }
> >>+
> >>+    ret->guest_arg = map_domain_page(ret->guest_mfn);
> 
> map_domain_page() can never fail, but you use it the wrong way. The purpose
> of this function is to map the memory for a very short lifetime, and only on
> the current pCPU (when all the RAM is not always mapped). Here, you seem
> to use it across SMC calls (e.g. for RPC).
> 
> Looking at the usage in the code, you only map it in order to copy the
> arguments to/from the guest.
> 
> map_domain_page() will not take a reference on the page to prevent the page
> from disappearing from the guest. So this bit is unsafe.
> 
> For the arguments, the best is to use guest copy helpers (see
> access_guest_memory_by_ipa). You might want to look at [1] as it improves
> the use of access_guest_memory_by_ipa.
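
For illustration, copying the command buffer with that helper could be
sketched like this (copy_std_request_sketch() is a made-up name;
writing results back would be the same call with is_write = true):

static bool copy_std_request_sketch(struct std_call_ctx *ctx,
                                    paddr_t cmd_gaddr)
{
    ctx->xen_arg = alloc_xenheap_page();
    if ( !ctx->xen_arg )
        return false;

    /* Read the whole command page from the guest IPA into Xen's copy. */
    if ( access_guest_memory_by_ipa(current->domain, cmd_gaddr,
                                    ctx->xen_arg,
                                    OPTEE_MSG_NONCONTIG_PAGE_SIZE,
                                    false /* read */) )
    {
        free_xenheap_page(ctx->xen_arg);
        ctx->xen_arg = NULL;
        return false;
    }

    return true;
}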
> 
> >>+    if ( !ret->guest_arg )
> >>+    {
> >>+        gprintk(XENLOG_INFO, "Could not map domain page\n");
> >>+        xfree(ret);
> >>+        return NULL;
> >>+    }
> >>+    ret->cookie = cookie;
> >>+    ret->domid = current->domain->domain_id;
> >>+
> >>+    spin_lock(&shm_list_lock);
> >>+    list_add_tail(&ret->list, &shm_list);
> >>+    spin_unlock(&shm_list_lock);
> >>+    return ret;
> >>+}
> >>+
> >>+static void free_shm(uint64_t cookie, domid_t domid)
> >>+{
> >>+    struct shm *shm, *found = NULL;
> >>+    spin_lock(&shm_list_lock);
> >>+
> >>+    list_for_each_entry( shm, &shm_list, list )
> >>+    {
> >>+        if  (shm->domid == domid && shm->cookie == cookie )
> >>+        {
> >>+            found = shm;
> >>+            list_del(&found->list);
> >>+            break;
> >>+        }
> >>+    }
> >>+    spin_unlock(&shm_list_lock);
> >>+
> >>+    if ( !found ) {
> >>+        return;
> >>+    }
> >>+
> >>+    if ( found->guest_arg )
> >>+        unmap_domain_page(found->guest_arg);
> >>+
> >>+    xfree(found);
> >>+}
> >>+
> >>+static struct shm *find_shm(uint64_t cookie, domid_t domid)
> >>+{
> >>+    struct shm *shm;
> >>+
> >>+    spin_lock(&shm_list_lock);
> >>+    list_for_each_entry( shm, &shm_list, list )
> >>+    {
> >>+        if ( shm->domid == domid && shm->cookie == cookie )
> >>+        {
> >>+                spin_unlock(&shm_list_lock);
> >>+                return shm;
> >>+        }
> >>+    }
> >>+    spin_unlock(&shm_list_lock);
> >>+
> >>+    return NULL;
> >>+}
> >>+
> >>+static bool translate_noncontig(struct std_call_ctx *ctx,
> >>+                                struct optee_msg_param *param,
> >>+                                int idx)
> >>+{
> >>+    /*
> >>+     * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h for details.
> >>+     *
> >>+     * WARNING: This is test code. It works only with xen page size == 4096
> 
> That's a call for a BUILD_BUG_ON().
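
E.g. something like this at the top of translate_noncontig() (sketch
only):

/* The code below assumes Xen pages and OP-TEE non-contig pages match. */
BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE);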
> 
> >>+     */
> >>+    uint64_t size;
> >>+    int page_offset;
> >>+    int num_pages;
> >>+    int order;
> >>+    int entries_on_page = 0;
> >>+    paddr_t gaddr;
> >>+    mfn_t guest_mfn;
> >>+    struct {
> >>+        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
> >>+        uint64_t next_page_data;
> >>+    } *pages_data_guest, *pages_data_xen, *pages_data_xen_start;
> >>+
> >>+    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
> >>+
> >>+    size = ROUNDUP(param->u.tmem.size + page_offset,
> >>+                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> >>+
> >>+    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> 
> What is the limit for num_pages? I can't see anything in the code that
> prevents an arbitrarily high number, which might exhaust Xen memory.
> 
> >>+
> >>+    order = get_order_from_bytes(get_pages_list_size(num_pages));
> >>+
> >>+    pages_data_xen_start = alloc_xenheap_pages(order, 0);
> >>+    if (!pages_data_xen_start)
> >>+        return false;
> >>+
> >>+    gaddr = param->u.tmem.buf_ptr & ~(OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
> >>+    guest_mfn = lookup_guest_ram_addr(gaddr);
> >>+    if ( mfn_eq(guest_mfn, INVALID_MFN) )
> >>+        goto err_free;
> >>+
> >>+    pages_data_guest = map_domain_page(guest_mfn);
> 
> Similarly here, you may want to use access_guest_memory_by_ipa. This will
> do all the safety checks for copying from guest memory.
> 
> Furthermore, I think this is going to simplify this code a lot.
> 
> 
> >>+    if (!pages_data_guest)
> >>+        goto err_free;
> >>+
> >>+    pages_data_xen = pages_data_xen_start;
> >>+    while ( num_pages ) {
> >>+        mfn_t entry_mfn = lookup_guest_ram_addr(
> >>+            pages_data_guest->pages_list[entries_on_page]);
> >>+
> >>+        if ( mfn_eq(entry_mfn, INVALID_MFN) )
> >>+            goto err_unmap;
> 
> You would need to get a reference on each page, and release it in err_unmap
> or when the command is done. get_page_from_gfn could do it for you.
> 
> >>+
> >>+        pages_data_xen->pages_list[entries_on_page] = mfn_to_maddr(entry_mfn);
> >>+        entries_on_page++;
> >>+
> >>+        if ( entries_on_page == PAGELIST_ENTRIES_PER_PAGE ) {
> >>+            pages_data_xen->next_page_data = virt_to_maddr(pages_data_xen + 1);
> >>+            pages_data_xen++;
> >>+            gaddr = pages_data_guest->next_page_data;
> >>+            unmap_domain_page(pages_data_guest);
> >>+            guest_mfn = lookup_guest_ram_addr(gaddr);
> >>+            if ( mfn_eq(guest_mfn, INVALID_MFN) )
> >>+                goto err_free;
> >>+
> >>+            pages_data_guest = map_domain_page(guest_mfn);
> >>+            if ( !pages_data_guest )
> >>+                goto err_free;
> >>+            /* Roll over to the next page */
> >>+            entries_on_page = 0;
> >>+        }
> >>+        num_pages--;
> >>+    }
> >>+
> >>+    unmap_domain_page(pages_data_guest);
> >>+
> >>+    param->u.tmem.buf_ptr = virt_to_maddr(pages_data_xen_start) | page_offset;
> 
> I am not sure I understand why you are applying the offset of the guest
> buffer to the Xen buffer.
> 
> >>+
> >>+    ctx->non_contig[idx] = pages_data_xen_start;
> >>+    ctx->non_contig_order[idx] = order;
> >>+
> >>+    unmap_domain_page(pages_data_guest);
> >>+    return true;
> >>+
> >>+err_unmap:
> >>+    unmap_domain_page(pages_data_guest);
> >>+err_free:
> >>+    free_xenheap_pages(pages_data_xen_start, order);
> >>+    return false;
> >>+}
> >>+
> >>+static bool translate_params(struct std_call_ctx *ctx)
> >>+{
> >>+    unsigned int i;
> >>+    uint32_t attr;
> >>+
> >>+    for ( i = 0; i < ctx->xen_arg->num_params; i++ ) {
> >>+        attr = ctx->xen_arg->params[i].attr;
> >>+
> >>+        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK ) {
> >>+        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
> >>+            if ( attr & OPTEE_MSG_ATTR_NONCONTIG ) {
> >>+                if ( !translate_noncontig(ctx, ctx->xen_arg->params + i, i) )
> >>+                    return false;
> >>+            }
> >>+            else {
> >>+                gprintk(XENLOG_WARNING, "Guest tries to use old tmem 
> >>arg\n");
> >>+                return false;
> >>+            }
> >>+            break;
> >>+        case OPTEE_MSG_ATTR_TYPE_NONE:
> >>+        case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
> >>+        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
> >>+            continue;
> >>+        }
> >>+    }
> >>+    return true;
> >>+}
> >>+
> >>+/*
> >>+ * Copy command buffer into xen memory to:
> >>+ * 1) Hide translated addresses from guest
> >>+ * 2) Make sure that guest wouldn't change data in command buffer during call
> >>+ */
> >>+static bool copy_std_request(struct cpu_user_regs *regs,
> >>+                             struct std_call_ctx *ctx)
> >>+{
> >>+    paddr_t cmd_gaddr, xen_addr;
> >>+    mfn_t cmd_mfn;
> >>+
> >>+    cmd_gaddr = (paddr_t)get_user_reg(regs, 1) << 32 |
> >>+        get_user_reg(regs, 2);
> >>+
> >>+    /*
> >>+     * Command buffer should start at page boundary.
> >>+     * This is OP-TEE ABI requirement.
> >>+     */
> >>+    if ( cmd_gaddr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
> >>+        return false;
> >>+
> >>+    cmd_mfn = lookup_guest_ram_addr(cmd_gaddr);
> >>+    if ( mfn_eq(cmd_mfn, INVALID_MFN) )
> >>+        return false;
> >>+
> >>+    ctx->guest_arg = map_domain_page(cmd_mfn);
> >>+    if ( !ctx->guest_arg )
> >>+        return false;
> >>+
> >>+    ctx->xen_arg = alloc_xenheap_page();
> >>+    if ( !ctx->xen_arg )
> >>+        return false;
> >>+
> >>+    memcpy(ctx->xen_arg, ctx->guest_arg, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
> 
> Have a look at the guest copy helpers.
> 
> Cheers,
> 
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01481.html
> 
> -- 
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

