
Re: [Xen-devel] [RFC 18/22] xen/arm: p2m: Introduce p2m_set_entry and __p2m_set_entry



On Tue, 6 Sep 2016, Julien Grall wrote:
> > > +    if ( !p2m_valid(entry) || p2m_is_superpage(entry, level) )
> > > +        return;
> > > +
> > > +    if ( level == 3 )
> > > +    {
> > > +        p2m_put_l3_page(_mfn(entry.p2m.base), entry.p2m.type);
> > > +        return;
> > > +    }
> > > +
> > > +    table = map_domain_page(_mfn(entry.p2m.base));
> > > +    for ( i = 0; i < LPAE_ENTRIES; i++ )
> > > +        p2m_free_entry(p2m, *(table + i), level + 1);
> > > +
> > > +    unmap_domain_page(table);
> > > +
> > > +    /*
> > > +     * Make sure all the references in the TLB have been removed before
> > > +     * freeing the intermediate page table.
> > > +     * XXX: Should we defer the free of the page table to avoid the
> > > +     * flush?
> > 
> > Yes, I would probably move the p2m flush out of this function.
> 
> We would also need to defer the free of the intermediate table (see
> free_domheap_page below), which means keeping track of the pages to free
> later.
> 
> The flush here will only happen once, and only if a sub-tree is removed
> (i.e. unlikely in most cases).
> 
> So I am not sure it is worth keeping a list of pages to free later. Any
> opinions?

Leave it for now. In addition, flushing the p2m by address would probably
be enough; that would reduce the benefit of deferring the flush here.
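
For the sake of discussion, deferring would only need a small amount of
extra state. A rough, untested sketch (free_pages is a hypothetical new
page_list_head in struct p2m_domain, initialised in p2m_init):

/* Hypothetical: queue intermediate tables instead of freeing them. */
static void p2m_free_table_deferred(struct p2m_domain *p2m,
                                    struct page_info *pg)
{
    page_list_add_tail(pg, &p2m->free_pages);
}

/* Called by the caller once, after its single TLB flush. */
static void p2m_free_deferred_tables(struct p2m_domain *p2m)
{
    struct page_info *pg;

    while ( (pg = page_list_remove_head(&p2m->free_pages)) != NULL )
        free_domheap_page(pg);
}

The ordering requirement (flush before free) would still be respected,
since the queued tables are only freed after the caller has flushed.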


> > > +     */
> > > +    ASSERT(!p2m->mem_access_enabled || page_order == 0);
> > > +    /*
> > > +     * The access type should always be p2m_access_rwx when the mapping
> > > +     * is removed.
> > > +     */
> > > +    ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a == p2m_access_rwx));
> > > +    /*
> > > +     * Update the mem access permission before updating the P2M, so we
> > > +     * don't have to revert the mapping if it fails.
> > > +     */
> > > +    rc = p2m_mem_access_radix_set(p2m, sgfn, a);
> > > +    if ( rc )
> > > +        goto out;
> > > +
> > > +    /*
> > > +     * Always remove the entry in order to follow the break-before-make
> > > +     * sequence when updating the translation table (D4.7.1 in ARM DDI
> > > +     * 0487A.j).
> > > +     */
> > > +    if ( p2m_valid(orig_pte) )
> > > +        p2m_remove_pte(entry, p2m->clean_pte);
> > > +
> > > +    if ( mfn_eq(smfn, INVALID_MFN) )
> > > +        /* Flush can be deferred if the entry is removed */
> > > +        p2m->need_flush |= !!p2m_valid(orig_pte);
> > > +    else
> > > +    {
> > > +        lpae_t pte;
> > > +
> > > +        /*
> > > +         * Flush the TLB before writing the new entry to keep coherency.
> > > +         * XXX: Can we flush by address?
> > 
> > I was asking myself the same question
> 
> It is not trivial. On ARMv7, there is no way to invalidate by IPA, so we still
> need to do a full flush.
> 
> In the case of ARMv8, it is possible to do a flush by IPA with the following
> sequence (see D4-1739 in ARM DDI 0487A.i):
>  tlbi ipas2e1, xt
>  dsb
>  tlbi vmalle1
> 
> So I was wondering if we could leave that for a future optimization.

We can leave it for now, but I have a hunch that it is going to have a
pretty significant impact.
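
For reference, on ARM64 a by-IPA helper following that sequence could look
roughly like this. It is an untested sketch: it assumes the right VMID is
already loaded in VTTBR_EL2 (as the existing p2m TLB flush arranges), and
the barriers / inner-shareable variants would need checking against the
ARM ARM.

/* Hypothetical ARMv8-only flush by IPA; ARMv7 still needs a full flush. */
static inline void __flush_tlb_ipa(paddr_t ipa)
{
    /* Xt[35:0] holds IPA[47:12], i.e. the IPA shifted by the page size. */
    uint64_t xt = ipa >> PAGE_SHIFT;

    asm volatile(
        "dsb sy;"
        "tlbi ipas2e1, %0;"
        "dsb sy;"
        /* Combined stage-1+2 entries are only dropped by the vmalle1. */
        "tlbi vmalle1;"
        "dsb sy;"
        "isb;"
        : : "r" (xt) : "memory");
}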

 
> > > +        if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
> > > +            order = FIRST_ORDER;
> > > +        else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
> > > +            order = SECOND_ORDER;
> > > +        else
> > > +            order = THIRD_ORDER;
> > > +
> > > +        rc = __p2m_set_entry(p2m, sgfn, order, smfn, t, a);
> > > +        if ( rc )
> > > +            break;
> > 
> > Shouldn't we be checking that "(1<<order)" doesn't exceed "todo" before
> > calling __p2m_set_entry? Otherwise we risk creating a mapping bigger
> > than requested.
> 
> The mask is defined as gfn_x(sgfn) | mfn_x(smfn) | todo
> 
> So we will never have the order exceeding "todo".

Ah, I see. But this way we are not able to do superpage mappings for
regions larger than 2MB that are not a multiple of 2MB (3MB, for example).
But I don't think we were able to do that before either, so it is not a
requirement for this patch.
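
To illustrate for the archives, a standalone toy of the order selection
(the *_ORDER values are the page orders used on ARM, everything else is
made up), walking a 2MB-aligned 3MB region:

#include <stdio.h>

#define THIRD_ORDER   0   /* 4KB */
#define SECOND_ORDER  9   /* 2MB */
#define FIRST_ORDER  18   /* 1GB */

int main(void)
{
    /* GFN/MFN in 4KB frames, both 2MB-aligned; 3MB == 768 frames. */
    unsigned long sgfn = 0x80000, smfn = 0x90000, todo = 768;
    unsigned long pages[3] = { 0, 0, 0 };

    while ( todo )
    {
        unsigned long mask = sgfn | smfn | todo;
        unsigned int order;

        if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
            order = FIRST_ORDER;
        else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
            order = SECOND_ORDER;
        else
            order = THIRD_ORDER;

        pages[order / SECOND_ORDER]++;

        sgfn += 1UL << order;
        smfn += 1UL << order;
        todo -= 1UL << order;
    }

    /* Prints "4KB: 768, 2MB: 0, 1GB: 0": folding todo (768, bit 8 set)
     * into the mask rejects SECOND_ORDER on the first pass, and after
     * that the GFN is no longer 2MB-aligned either. */
    printf("4KB: %lu, 2MB: %lu, 1GB: %lu\n", pages[0], pages[1], pages[2]);

    return 0;
}

So as long as todo is folded into the mask on every iteration, a count
that is not itself superpage-aligned falls back to 4KB mappings for the
whole range.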


> > 
> > > +        sgfn = gfn_add(sgfn, (1 << order));
> > > +        if ( !mfn_eq(smfn, INVALID_MFN) )
> > > +           smfn = mfn_add(smfn, (1 << order));
> > > +
> > > +        todo -= (1 << order);
> > > +    }
> > > +
> > > +    return rc;
> > > +}
> > > +#endif

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

