
RE: [PATCH v1 03/13] xen/arm: allocate static shared memory to dom_shared



On Fri, 18 Mar 2022, Penny Zheng wrote:
> > On Fri, 11 Mar 2022, Penny Zheng wrote:
> > > From: Penny Zheng <penny.zheng@xxxxxxx>
> > >
> > > This commit introduces process_shm to cope with static shared memory
> > > in domain construction.
> > >
> > > This commit only considers allocating static shared memory to
> > > dom_shared when the owner domain is not explicitly defined in the device
> > > tree; the other scenario will be covered in the following patches.
> > >
> > > Static shared memory could reuse acquire_static_memory_bank() to
> > > acquire and allocate static memory.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> > > ---
> > >  xen/arch/arm/domain_build.c | 116 +++++++++++++++++++++++++++++++++++-
> > >  1 file changed, 115 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > index 8be01678de..6e6349caac 100644
> > > --- a/xen/arch/arm/domain_build.c
> > > +++ b/xen/arch/arm/domain_build.c
@@ -527,7 +527,8 @@ static mfn_t __init acquire_static_memory_bank(struct domain *d,
> > >      mfn_t smfn;
> > >      int res;
> > >
> > > -    device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
> > > +    if ( cell )
> > > +        device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
> > 
> > Why this change?
> > 
> 
> This helper is also used for acquiring static memory as guest RAM for a
> statically configured domain.
> 
> And since we are reusing it for static shared memory but want to avoid
> parsing the property here (the "xen,static-shm" property is parsed in a
> different way in process_shm), this change is needed.
> 
> And I think I need to add an in-code comment to explain. ;)
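OK. Something like the following would work for me (just a rough sketch on my
side, not from the patch, assuming process_shm parses "xen,shared-mem" itself
and fills in *pbase/*psize before calling this helper):

    /*
     * cell may be NULL: in that case the caller (e.g. the static shared
     * memory path) has already parsed the "xen,shared-mem" property and
     * filled in *pbase/*psize, so skip reading the reg here.
     */
    if ( cell )
        device_tree_get_reg(cell, addr_cells, size_cells, pbase, psize);
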
> 
> > 
> > >      ASSERT(IS_ALIGNED(*pbase, PAGE_SIZE) && IS_ALIGNED(*psize, PAGE_SIZE));
> > >      if ( PFN_DOWN(*psize) > UINT_MAX )
> > >      {
> > > @@ -751,6 +752,113 @@ static void __init assign_static_memory_11(struct domain *d,
> > >      panic("Failed to assign requested static memory for direct-map domain %pd.",
> > >            d);
> > >  }
> > > +
> > > +#ifdef CONFIG_STATIC_SHM
> > > +static __initdata DECLARE_BITMAP(shm_mask, NR_MEM_BANKS);
> > > +
> > > +static mfn_t __init acquire_shared_memory_bank(struct domain *d,
> > > +                                               u32 addr_cells, u32 size_cells,
> > > +                                               paddr_t *pbase, paddr_t *psize)
> > > +{
> > > +    /*
> > > +     * Pages of statically shared memory shall be included
> > > +     * in domain_tot_pages().
> > > +     */
> > > +    d->max_pages += PFN_DOWN(*psize);
> > > +
> > > +    return acquire_static_memory_bank(d, NULL, addr_cells, size_cells,
> > > +                                      pbase, psize);
> > > +
> > > +}
> > > +
> > > +static int __init allocate_shared_memory(struct domain *d,
> > > +                                         u32 addr_cells, u32 size_cells,
> > > +                                         paddr_t pbase, paddr_t psize,
> > > +                                         paddr_t gbase)
> > > +{
> > > +    mfn_t smfn;
> > > +    int ret = 0;
> > > +
> > > +    printk(XENLOG_INFO "Allocate static shared memory BANK %#"PRIpaddr"-%#"PRIpaddr"\n",
> > > +           pbase, pbase + psize);
> > > +
> > > +    smfn = acquire_shared_memory_bank(d, addr_cells, size_cells, &pbase,
> > > +                                      &psize);
> > > +    if ( mfn_eq(smfn, INVALID_MFN) )
> > > +        return -EINVAL;
> > > +
> > > +    ret = guest_physmap_add_pages(d, gaddr_to_gfn(gbase), smfn, PFN_DOWN(psize));
> > > +    if ( ret )
> > > +    {
> > > +        dprintk(XENLOG_ERR, "Failed to map shared memory to %pd.\n", d);
> > > +        return ret;
> > > +    }
> > > +
> > > +    return ret;
> > > +}
> > > +
> > > +static int __init process_shm(struct domain *d,
> > > +                              const struct dt_device_node *node)
> > > +{
> > > +    struct dt_device_node *shm_node;
> > > +    int ret = 0;
> > > +    const struct dt_property *prop;
> > > +    const __be32 *cells;
> > > +    u32 shm_id;
> > > +    u32 addr_cells, size_cells;
> > > +    paddr_t gbase, pbase, psize;
> > > +
> > > +    dt_for_each_child_node(node, shm_node)
> > > +    {
> > > +        if ( !dt_device_is_compatible(shm_node, "xen,domain-shared-memory-v1") )
> > > +            continue;
> > > +
> > > +        if ( !dt_property_read_u32(shm_node, "xen,shm-id", &shm_id) )
> > > +        {
> > > +            printk("Shared memory node does not provide \"xen,shm-id\" property.\n");
> > > +            return -ENOENT;
> > > +        }
> > > +
> > > +        addr_cells = dt_n_addr_cells(shm_node);
> > > +        size_cells = dt_n_size_cells(shm_node);
> > > +        prop = dt_find_property(shm_node, "xen,shared-mem", NULL);
> > > +        if ( !prop )
> > > +        {
> > > +            printk("Shared memory node does not provide \"xen,shared-mem\" property.\n");
> > > +            return -ENOENT;
> > > +        }
> > > +        cells = (const __be32 *)prop->value;
> > > +        /* xen,shared-mem = <pbase, psize, gbase>; */
> > > +        device_tree_get_reg(&cells, addr_cells, size_cells, &pbase, &psize);
> > > +        ASSERT(IS_ALIGNED(pbase, PAGE_SIZE) && IS_ALIGNED(psize, PAGE_SIZE));
> > > +        gbase = dt_read_number(cells, addr_cells);
> > > +
> > > +        /* TODO: Handle the case where the owner domain is not the default dom_shared. */
> > > +        /*
> > > +         * Each shared memory region could be shared between multiple
> > > +         * domains. To avoid re-allocating the same shared memory region,
> > > +         * we use the bitmask shm_mask to record whether this shared
> > > +         * memory region has already been allocated.
> > > +         */
> > > +        if ( !test_bit(shm_id, shm_mask) )
> > > +        {
> > > +            /*
> > > +             * Allocate statically shared pages to the default dom_shared.
> > > +             * Set up P2M, and dom_shared is a direct-map domain,
> > > +             * so GFN == PFN.
> > > +             */
> > > +            ret = allocate_shared_memory(dom_shared, addr_cells, size_cells,
> > > +                                         pbase, psize, pbase);
> >                                                           ^gbase
> > 
> > The last parameter should be gbase instead of pbase?
> > 
> 
> Yes, and since dom_shared is a direct-map domain, GFN == PFN, so pbase should
> be ok here. I mentioned it in the comments.
> 
> And the reason I make dom_shared a direct-map domain is that this way we
> don't need to decode the owner GFN when doing the foreign memory mapping for
> the borrower domain.
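I see. So for the borrower side the mapping would then look roughly like the
sketch below (my own sketch, not from this series; "bd" is a made-up name for
the borrower domain and it glosses over page refcounting): with dom_shared
mapped 1:1, the owner GFN equals the machine frame, so no lookup in the
owner's p2m is needed.

    /* Borrower mapping sketch: dom_shared is direct-mapped, so GFN == MFN. */
    mfn_t smfn = maddr_to_mfn(pbase);

    ret = guest_physmap_add_pages(bd, gaddr_to_gfn(gbase), smfn,
                                  PFN_DOWN(psize));
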
> 
> > 
> > Reading this patch, it is not clear that only the "owner" code path is
> > implemented here. The "borrower" code path is implemented later and
> > missing in this patch. I think it would be good to clarify that in the
> > commit message.
> > 
> > In this light, allocate_shared_memory is supposed to be called only for
> > the owner. I think we should probably mention that in the in-code comment
> > too.
> > 
> 
> Yes, only the owner domain can allocate memory; I'll add that to the in-code comment.
> 
> > I don't think we need to define a second copy of shm_mask. Can we reuse the
> > one in bootfdt.c?
> > 
> 
> Yes, maybe I should reuse it rather than reintroduce it. And before using the
> bitmap here, I need to clear it completely to get rid of the stale info from
> bootfdt.c.
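That reuse could be as simple as the sketch below (my assumptions: shm_mask
would have to be made non-static in bootfdt.c and exported via a header such
as asm/setup.h, and reset_shm_mask is a made-up name for illustration):

    /* Defined (non-static) in bootfdt.c, declared in a shared header. */
    extern unsigned long shm_mask[BITS_TO_LONGS(NR_MEM_BANKS)];

    static void __init reset_shm_mask(void)
    {
        /* Drop the stale bits set while parsing the host device tree. */
        bitmap_zero(shm_mask, NR_MEM_BANKS);
    }
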
> 
> > Or we could get rid of shm_mask entirely here if we check whether the pages
> > were already allocated in the owner p2m.
> > 
> > 
> 
> Hmm, does that mean we need to do a page walk each time? That's kind of
> time-consuming, or am I missing some convenient way to check?

No page walk. I think it should be possible with:

- mfn_to_page
- page_get_owner

Both of them are direct accesses, assuming that the page owner is set
correctly.
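
Something along these lines (an untested sketch of the idea; it assumes the
first allocation already hands the pages to dom_shared, so that the owner
check is meaningful, and the helper name is made up):

    /*
     * Check whether the shared bank starting at pbase has already been
     * allocated by looking at the owner of its first page. No p2m walk:
     * mfn_to_page() and page_get_owner() are direct struct page_info
     * accesses.
     */
    static bool __init shm_bank_already_allocated(paddr_t pbase)
    {
        struct page_info *pg = mfn_to_page(maddr_to_mfn(pbase));

        return page_get_owner(pg) == dom_shared;
    }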



 

