
Re: [PATCH v7 3/6] xen/arm: ffa: Introduce VM to VM support


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Fri, 1 Aug 2025 07:10:27 +0000
  • Accept-language: en-GB, en-US
  • Cc: "jens.wiklander@xxxxxxxxxx" <jens.wiklander@xxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Fri, 01 Aug 2025 07:11:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v7 3/6] xen/arm: ffa: Introduce VM to VM support

Hi,

A gentle ping for someone to review this :-)

Thanks
Bertrand

> On 17 Jul 2025, at 14:11, Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
> 
> Create a CONFIG_FFA_VM_TO_VM parameter to activate FF-A communication
> between VMs.
> When activated, VMs in the system with FF-A support are listed in
> part_info_get.
> 
> When VM to VM is activated, Xen is tainted as Insecure and a message is
> displayed to the user during boot, as there is no filtering of VMs in
> FF-A: any VM can communicate with or see any other VM in the system.
> 
> WARNING: There is no filtering for now and all VMs are listed!!
> 
> This patch reorganizes the ffa_ctx structure to make clear which lock
> protects which fields.
> 
> This patch introduces a chain list of the ffa_ctx entries with a
> negotiated FF-A version, allowing the partinfo results for VMs to be
> built in parallel by using an rwlock which only ensures that
> addition/removal of entries is protected.
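> 
> As an illustration of this locking scheme, here is a minimal standalone
> sketch (not part of the patch) using a pthread rwlock; the names ctx_add()
> and ctx_count_others() are made up for the example and only mirror the
> intended read-mostly usage pattern:
> 
> #include <pthread.h>
> #include <stdint.h>
> #include <stdio.h>
> 
> struct ctx {
>     uint16_t id;
>     struct ctx *next;
> };
> 
> static struct ctx *ctx_head;
> static pthread_rwlock_t ctx_list_rwlock = PTHREAD_RWLOCK_INITIALIZER;
> 
> /* Rare path (version negotiation / teardown): takes the write lock. */
> static void ctx_add(struct ctx *c)
> {
>     pthread_rwlock_wrlock(&ctx_list_rwlock);
>     c->next = ctx_head;
>     ctx_head = c;
>     pthread_rwlock_unlock(&ctx_list_rwlock);
> }
> 
> /* Frequent path (partinfo_get): parallel readers under the read lock. */
> static unsigned int ctx_count_others(uint16_t self_id)
> {
>     unsigned int n = 0;
>     const struct ctx *c;
> 
>     pthread_rwlock_rdlock(&ctx_list_rwlock);
>     for ( c = ctx_head; c; c = c->next )
>         if ( c->id != self_id )
>             n++;
>     pthread_rwlock_unlock(&ctx_list_rwlock);
> 
>     return n;
> }
> 
> int main(void)
> {
>     static struct ctx a = { .id = 1 }, b = { .id = 2 };
> 
>     ctx_add(&a);
>     ctx_add(&b);
>     /* Prints 1: the caller's own entry is excluded, as in partinfo_get. */
>     printf("%u\n", ctx_count_others(1));
>     return 0;
> }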
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> ---
> Changes in v7:
> - protect the ffa_ctx list with an rwlock to allow several partinfo_get
>  calls in parallel while still protecting adding/removing entries.
> Changes in v6:
> - remove ACCESS_ONCE for guest_vers access and take the context lock
>  before modifying it
> - move guest_vers in context declaration to fields protected by the
>  context lock and add a comment to state that the lock is only needed when
>  modifying it
> Changes in v5:
> - remove invalid comment about 1.1 firmware support
> - rename variables from d and dom to curr_d and dest_d (Julien)
> - add a TODO in the code about potentially holding the CPU for a long
>  time (Julien)
> - use an atomic global variable to store the number of VMs instead of
>  recomputing the value each time (Julien)
> - add partinfo information in ffa_ctx (id, cpus and 64bit) and create a
>  chain list of ctx. Use this chain list to create the partinfo result
>  without holding a global lock to prevent concurrency issues.
> - Move some changes into a preparation patch modifying partinfo for SPs
>  to reduce this patch's size and make the review easier
> Changes in v4:
> - properly handle SPMC version 1.0 header size case in partinfo_get
> - switch to local counting variables instead of *pointer += 1 form
> - fix coding style issue with missing spaces in if ()
> Changes in v3:
> - break partinfo_get into several sub-functions to make the implementation
>  easier to understand and lock handling easier
> - rework the implementation to check the size along the way, removing the
>  previous implementation's limitation of having to check that the number
>  of VMs or SPs did not change
> - taint Xen as INSECURE when VM to VM is enabled
> Changes in v2:
> - Switch ifdef to IS_ENABLED
> - dom was not switched to d as requested by Jan because there is already
>  a variable d pointing to the current domain and it must not be
>  shadowed.
> ---
> xen/arch/arm/tee/Kconfig        |  11 +++
> xen/arch/arm/tee/ffa.c          |  47 +++++++++++++
> xen/arch/arm/tee/ffa_partinfo.c | 100 ++++++++++++++++++++++++---
> xen/arch/arm/tee/ffa_private.h  | 117 ++++++++++++++++++++++++++------
> 4 files changed, 245 insertions(+), 30 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
> index c5b0f88d7522..88a4c4c99154 100644
> --- a/xen/arch/arm/tee/Kconfig
> +++ b/xen/arch/arm/tee/Kconfig
> @@ -28,5 +28,16 @@ config FFA
> 
>  [1] https://developer.arm.com/documentation/den0077/latest
> 
> +config FFA_VM_TO_VM
> +    bool "Enable FF-A between VMs (UNSUPPORTED)" if UNSUPPORTED
> +    default n
> +    depends on FFA
> +    help
> +      This option enables the use of FF-A between VMs.
> +      This is experimental and there is no access control, so any
> +      guest can communicate with any other guest.
> +
> +      If unsure, say N.
> +
> endmenu
> 
> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
> index 3bbdd7168a6b..be71eda4869f 100644
> --- a/xen/arch/arm/tee/ffa.c
> +++ b/xen/arch/arm/tee/ffa.c
> @@ -118,6 +118,13 @@ void *ffa_tx __read_mostly;
> DEFINE_SPINLOCK(ffa_rx_buffer_lock);
> DEFINE_SPINLOCK(ffa_tx_buffer_lock);
> 
> +struct list_head ffa_ctx_head;
> +/* RW Lock to protect addition/removal and reading in ffa_ctx_head */
> +rwlock_t ffa_ctx_list_rwlock;
> +
> +#ifdef CONFIG_FFA_VM_TO_VM
> +atomic_t ffa_vm_count;
> +#endif
> 
> /* Used to track domains that could not be torn down immediately. */
> static struct timer ffa_teardown_timer;
> @@ -151,6 +158,7 @@ static void handle_version(struct cpu_user_regs *regs)
>     struct domain *d = current->domain;
>     struct ffa_ctx *ctx = d->arch.tee;
>     uint32_t vers = get_user_reg(regs, 1);
> +    uint32_t old_vers;
> 
>     /*
>      * Guest will use the version it requested if it is our major and minor
> @@ -160,10 +168,23 @@ static void handle_version(struct cpu_user_regs *regs)
>      */
>     if ( FFA_VERSION_MAJOR(vers) == FFA_MY_VERSION_MAJOR )
>     {
> +        spin_lock(&ctx->lock);
> +        old_vers = ctx->guest_vers;
> +
>         if ( FFA_VERSION_MINOR(vers) > FFA_MY_VERSION_MINOR )
>             ctx->guest_vers = FFA_MY_VERSION;
>         else
>             ctx->guest_vers = vers;
> +        spin_unlock(&ctx->lock);
> +
> +        if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) && !old_vers )
> +        {
> +            /* One more VM with FF-A support available */
> +            inc_ffa_vm_count();
> +            write_lock(&ffa_ctx_list_rwlock);
> +            list_add_tail(&ctx->ctx_list, &ffa_ctx_head);
> +            write_unlock(&ffa_ctx_list_rwlock);
> +        }
>     }
>     ffa_set_regs(regs, FFA_MY_VERSION, 0, 0, 0, 0, 0, 0, 0);
> }
> @@ -345,6 +366,10 @@ static int ffa_domain_init(struct domain *d)
>     ctx->teardown_d = d;
>     INIT_LIST_HEAD(&ctx->shm_list);
> 
> +    ctx->ffa_id = ffa_get_vm_id(d);
> +    ctx->num_vcpus = d->max_vcpus;
> +    ctx->is_64bit = is_64bit_domain(d);
> +
>     /*
>      * ffa_domain_teardown() will be called if ffa_domain_init() returns an
>      * error, so no need for cleanup in this function.
> @@ -421,6 +446,14 @@ static int ffa_domain_teardown(struct domain *d)
>     if ( !ctx )
>         return 0;
> 
> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) && ctx->guest_vers )
> +    {
> +        dec_ffa_vm_count();
> +        write_lock(&ffa_ctx_list_rwlock);
> +        list_del(&ctx->ctx_list);
> +        write_unlock(&ffa_ctx_list_rwlock);
> +    }
> +
>     ffa_rxtx_domain_destroy(d);
>     ffa_notif_domain_destroy(d);
> 
> @@ -464,6 +497,18 @@ static bool ffa_probe(void)
>     printk(XENLOG_INFO "ARM FF-A Mediator version %u.%u\n",
>            FFA_MY_VERSION_MAJOR, FFA_MY_VERSION_MINOR);
> 
> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> +    {
> +        /*
> +         * When FFA VM to VM is enabled, the current implementation does not
> +         * offer any way to limit which VM can communicate with which VM using
> +         * FF-A.
> +         * Signal this in the xen console and taint the system as insecure.
> +         * TODO: Introduce a solution to limit what a VM can do through FFA.
> +         */
> +        printk(XENLOG_ERR "ffa: VM to VM is enabled, system is insecure !!\n");
> +        add_taint(TAINT_MACHINE_INSECURE);
> +    }
>     /*
>      * psci_init_smccc() updates this value with what's reported by EL-3
>      * or secure world.
> @@ -538,6 +583,8 @@ static bool ffa_probe(void)
> 
>     ffa_notif_init();
>     INIT_LIST_HEAD(&ffa_teardown_head);
> +    INIT_LIST_HEAD(&ffa_ctx_head);
> +    rwlock_init(&ffa_ctx_list_rwlock);
>     init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0);
> 
>     return true;
> diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
> index dfa0b23eaf38..fa56b1587e3b 100644
> --- a/xen/arch/arm/tee/ffa_partinfo.c
> +++ b/xen/arch/arm/tee/ffa_partinfo.c
> @@ -150,6 +150,73 @@ out:
>     return ret;
> }
> 
> +static int32_t ffa_get_vm_partinfo(uint32_t *vm_count, void *dst_buf,
> +                                   void *end_buf, uint32_t dst_size)
> +{
> +    struct ffa_ctx *curr_ctx = current->domain->arch.tee;
> +    struct ffa_ctx *dest_ctx;
> +    uint32_t count = 0;
> +    int32_t ret = FFA_RET_OK;
> +
> +    /*
> +     * There could potentially be a lot of VMs in the system and we could
> +     * hold the CPU for a long time here.
> +     * Right now there is no solution in FF-A specification to split
> +     * the work in this case.
> +     * TODO: Check how we could delay the work or have preemption checks.
> +     */
> +    read_lock(&ffa_ctx_list_rwlock);
> +    list_for_each_entry(dest_ctx, &ffa_ctx_head, ctx_list)
> +    {
> +        /*
> +         * Do not include an entry for the caller VM as the spec does not
> +         * clearly mandate it and it is not supported by Linux.
> +         */
> +        if ( dest_ctx != curr_ctx )
> +        {
> +            /*
> +             * We do not have UUID info for VMs, so use
> +             * the 1.0 structure and set the UUIDs to
> +             * zero using memset.
> +             */
> +            struct ffa_partition_info_1_0 info;
> +
> +            if ( dst_buf > (end_buf - dst_size) )
> +            {
> +                ret = FFA_RET_NO_MEMORY;
> +                goto out;
> +            }
> +
> +            /*
> +             * Context might have been removed since we got it or be in the
> +             * process of being removed right now, so we might return
> +             * information for a VM that no longer exists. This is acceptable
> +             * as we return a view of the system which could change at any time.
> +             */
> +            info.id = dest_ctx->ffa_id;
> +            info.execution_context = dest_ctx->num_vcpus;
> +            info.partition_properties = FFA_PART_VM_PROP;
> +            if ( dest_ctx->is_64bit )
> +                info.partition_properties |= FFA_PART_PROP_AARCH64_STATE;
> +
> +            memcpy(dst_buf, &info, MIN(sizeof(info), dst_size));
> +
> +            if ( dst_size > sizeof(info) )
> +                memset(dst_buf + sizeof(info), 0,
> +                       dst_size - sizeof(info));
> +
> +            dst_buf += dst_size;
> +            count++;
> +        }
> +    }
> +    *vm_count = count;
> +
> +out:
> +    read_unlock(&ffa_ctx_list_rwlock);
> +
> +    return ret;
> +}
> +
> void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
> {
>     int32_t ret = FFA_RET_OK;
> @@ -164,7 +231,7 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>     };
>     uint32_t dst_size = 0;
>     void *dst_buf, *end_buf;
> -    uint32_t ffa_sp_count = 0;
> +    uint32_t ffa_vm_count = 0, ffa_sp_count = 0;
> 
>     /*
>      * If the guest is v1.0, he does not get back the entry size so we must
> @@ -191,15 +258,18 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>         }
> 
>         if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
> +        {
>             ret = ffa_get_sp_count(uuid, &ffa_sp_count);
> +            if ( ret )
> +                goto out;
> +        }
> 
> -        goto out;
> -    }
> +        /*
> +         * Do not count the caller VM as the spec does not clearly mandate it
> +         * and it is not supported by Linux.
> +         */
> +        ffa_vm_count = get_ffa_vm_count() - 1;
> 
> -    if ( !ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
> -    {
> -        /* Just give an empty partition list to the caller */
> -        ret = FFA_RET_OK;
>         goto out;
>     }
> 
> @@ -224,9 +294,19 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>         goto out_rx_release;
>     }
> 
> -    ret = ffa_get_sp_partinfo(uuid, &ffa_sp_count, dst_buf, end_buf,
> -                              dst_size);
> +    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
> +    {
> +        ret = ffa_get_sp_partinfo(uuid, &ffa_sp_count, dst_buf, end_buf,
> +                                  dst_size);
> +
> +        if ( ret )
> +            goto out_rx_release;
> +
> +        dst_buf += ffa_sp_count * dst_size;
> +    }
> 
> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> +        ret = ffa_get_vm_partinfo(&ffa_vm_count, dst_buf, end_buf, dst_size);
> 
> out_rx_release:
>     if ( ret )
> @@ -235,7 +315,7 @@ out:
>     if ( ret )
>         ffa_set_regs_error(regs, ret);
>     else
> -        ffa_set_regs_success(regs, ffa_sp_count, dst_size);
> +        ffa_set_regs_success(regs, ffa_sp_count + ffa_vm_count, dst_size);
> }
> 
> static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
> index 0a9c1082db28..1a1dcabcdc28 100644
> --- a/xen/arch/arm/tee/ffa_private.h
> +++ b/xen/arch/arm/tee/ffa_private.h
> @@ -195,6 +195,18 @@
>  */
> #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
> 
> +/*
> + * Partition properties we give for a normal world VM:
> + * - can send direct message but not receive them
> + * - can handle indirect messages
> + * - can receive notifications
> + * 32/64 bit flag is set depending on the VM
> + */
> +#define FFA_PART_VM_PROP    (FFA_PART_PROP_DIRECT_REQ_SEND | \
> +                             FFA_PART_PROP_INDIRECT_MSGS | \
> +                             FFA_PART_PROP_RECV_NOTIF | \
> +                             FFA_PART_PROP_IS_PE_ID)
> +
> /* Flags used in calls to FFA_NOTIFICATION_GET interface  */
> #define FFA_NOTIF_FLAG_BITMAP_SP        BIT(0, U)
> #define FFA_NOTIF_FLAG_BITMAP_VM        BIT(1, U)
> @@ -297,36 +309,72 @@ struct ffa_ctx_notif {
> };
> 
> struct ffa_ctx {
> -    void *rx;
> -    const void *tx;
> -    struct page_info *rx_pg;
> -    struct page_info *tx_pg;
> +    /*
> +     * Chain list of all FF-A contexts.
> +     * As we might have several reads from the list of contexts through
> +     * parallel partinfo_get calls but fewer additions/removals, as those
> +     * happen only during a version negotiation or guest shutdown, access to
> +     * this list is protected through a rwlock (addition/removal with write
> +     * lock, reading through a read lock).
> +     */
> +    struct list_head ctx_list; /* chain list of all FF-A contexts */
> +
> +    /*
> +     * Data accessed without locking (mainly for part_info_get in VM to VM).
> +     * These should be set before the ctx is added to the list.
> +     */
> +    /* FF-A Endpoint ID */
> +    uint16_t ffa_id;
> +    uint16_t num_vcpus;
> +    bool is_64bit;
> +
> +    /*
> +     * Global data accessed atomically or using ACCESS_ONCE.
> +     */
> +    struct ffa_ctx_notif notif;
> +
> +    /*
> +     * Global data accessed with lock locked.
> +     */
> +    spinlock_t lock;
> +    /*
> +     * FF-A version negotiated by the guest. Only modifications to
> +     * this field are done with the lock held, as this is expected to
> +     * be done once at init by a guest.
> +     */
> +    uint32_t guest_vers;
>     /* Number of 4kB pages in each of rx/rx_pg and tx/tx_pg */
>     unsigned int page_count;
> -    /* FF-A version used by the guest */
> -    uint32_t guest_vers;
> -    bool rx_is_free;
> -    /* Used shared memory objects, struct ffa_shm_mem */
> -    struct list_head shm_list;
>     /* Number of allocated shared memory object */
>     unsigned int shm_count;
> -    struct ffa_ctx_notif notif;
> +    /* Used shared memory objects, struct ffa_shm_mem */
> +    struct list_head shm_list;
> +
>     /*
> -     * tx_lock is used to serialize access to tx
> -     * rx_lock is used to serialize access to rx_is_free
> -     * lock is used for the rest in this struct
> +     * Rx buffer, accessed with rx_lock locked.
> +     * rx_is_free is used to serialize access.
>      */
> -    spinlock_t tx_lock;
>     spinlock_t rx_lock;
> -    spinlock_t lock;
> -    /* Used if domain can't be torn down immediately */
> +    bool rx_is_free;
> +    void *rx;
> +    struct page_info *rx_pg;
> +
> +    /*
> +     * Tx buffer, accessed with tx_lock locked.
> +     */
> +    spinlock_t tx_lock;
> +    const void *tx;
> +    struct page_info *tx_pg;
> +
> +
> +    /*
> +     * Domain teardown handling if data shared with or used by other domains
> +     * does not allow tearing down the domain immediately.
> +     */
>     struct domain *teardown_d;
>     struct list_head teardown_list;
>     s_time_t teardown_expire;
> -    /*
> -     * Used for ffa_domain_teardown() to keep track of which SPs should be
> -     * notified that this guest is being destroyed.
> -     */
> +    /* Keep track of SPs that should be notified of VM destruction */
>     unsigned long *vm_destroy_bitmap;
> };
> 
> @@ -336,6 +384,12 @@ extern spinlock_t ffa_rx_buffer_lock;
> extern spinlock_t ffa_tx_buffer_lock;
> extern DECLARE_BITMAP(ffa_fw_abi_supported, FFA_ABI_BITMAP_SIZE);
> 
> +extern struct list_head ffa_ctx_head;
> +extern rwlock_t ffa_ctx_list_rwlock;
> +#ifdef CONFIG_FFA_VM_TO_VM
> +extern atomic_t ffa_vm_count;
> +#endif
> +
> bool ffa_shm_domain_destroy(struct domain *d);
> void ffa_handle_mem_share(struct cpu_user_regs *regs);
> int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags);
> @@ -368,6 +422,29 @@ int ffa_handle_notification_set(struct cpu_user_regs 
> *regs);
> void ffa_handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid);
> int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs);
> 
> +#ifdef CONFIG_FFA_VM_TO_VM
> +static inline uint16_t get_ffa_vm_count(void)
> +{
> +    return atomic_read(&ffa_vm_count);
> +}
> +
> +static inline void inc_ffa_vm_count(void)
> +{
> +    atomic_inc(&ffa_vm_count);
> +}
> +
> +static inline void dec_ffa_vm_count(void)
> +{
> +    ASSERT(atomic_read(&ffa_vm_count) > 0);
> +    atomic_dec(&ffa_vm_count);
> +}
> +#else
> +/* Only count the caller VM */
> +#define get_ffa_vm_count()  ((uint16_t)1UL)
> +#define inc_ffa_vm_count()  do {} while(0)
> +#define dec_ffa_vm_count()  do {} while(0)
> +#endif
> +
> static inline uint16_t ffa_get_vm_id(const struct domain *d)
> {
>     /* +1 since 0 is reserved for the hypervisor in FF-A */
> -- 
> 2.47.1
> 
> 
