
Re: [Xen-devel] [PATCH for-4.5 v8 15/19] xen/arm: Data abort exception (R/W) mem_events.



Hello Tamas,

On 09/23/2014 02:14 PM, Tamas K Lengyel wrote:
> This patch enables to store, set, check and deliver LPAE R/W mem_events.
> As the LPAE PTE's lack enough available software programmable bits,
> we store the permissions in a Radix tree. A custom boolean, access_in_use,
> specifies if the tree is in use to avoid uneccessary lookups on an empty tree.

unnecessary

[..]

> +static long p2m_mem_access_radix_set(struct p2m_domain *p2m, unsigned long 
> pfn,

Shouldn't "int" be enough for the return type?

> +                                     p2m_access_t a)
> +{
> +    long rc;

NIT: missing blank line here.

[..]

>  /* Put any references on the single 4K page referenced by pte.  TODO:
> @@ -553,13 +584,22 @@ static int apply_one_level(struct domain *d,
>          if ( p2m_valid(orig_pte) )
>              return P2M_ONE_DESCEND;
>  
> -        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) )
> +        if ( is_mapping_aligned(*addr, end_gpaddr, 0, level_size) &&
> +           /* We only create superpages when mem_access is not in use. */
> +             (level == 3 || (level < 3 && !p2m->access_in_use)) )

Can't this check be moved into is_mapping_aligned? You have nearly the
same few lines below.

[..]

> +    case MEMACCESS:
> +        if ( level < 3 )
> +        {
> +            if ( !p2m_valid(orig_pte) )
> +            {
> +                *addr += level_size;
> +                return P2M_ONE_PROGRESS_NOP;
> +            }
> +
> +            /* Shatter large pages as we descend */
> +            if ( p2m_mapping(orig_pte) )
> +            {
> +                rc = p2m_shatter_page(d, entry, level, flush_cache);
> +
> +                if ( rc < 0 )
> +                    return rc;
> +            } /* else: an existing table mapping -> descend */
> +
> +            return P2M_ONE_DESCEND;
> +        }
> +        else
> +        {
> +            pte = orig_pte;
> +
> +            if ( !p2m_table(pte) )
> +                pte.bits = 0;
> +
> +            if ( p2m_valid(pte) )
> +            {
> +                ASSERT(pte.p2m.type != p2m_invalid);

Why the ASSERT? I don't see why we wouldn't want to set permission for
this type of page.

[..]

> @@ -821,6 +912,21 @@ static int apply_p2m_changes(struct domain *d,
>              count = 0;
>          }
>  
> +        /*
> +         * Preempt setting mem_access permissions as required by XSA-89,
> +         * if it's not the last iteration.
> +         */
> +        if ( op == MEMACCESS && count )
> +        {
> +            int progress = paddr_to_pfn(addr) - start_gpfn + 1;

uint32_t?


NIT: Missing blank line.

> +            if ( (end_gpfn-start_gpfn) > progress && !(progress & mask)

NIT: (end_gpfn - start_gpfn)

Also you are comparing an "int" with an "unsigned long". I'm not sure
what the compiler will do here (implicit conversion, sign extension...)

> +                 && hypercall_preempt_check() )
> +            {
> +                rc = progress;
> +                goto out;

Jumping directly to the label "out" will skip flushing the TLB for the
domain. While it wasn't critical until now (partial redo only happened
during insertion/allocation, and hypercall preemption only for
relinquish), the guest may now use stale permissions because the TLB
hasn't been flushed.

At the same time, it looks like you never request a flush for the
MEMACCESS operation (see *flush = true). Does memaccess do a TLB flush
somewhere else?

[..]

> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec 
> npfec)
> +{
> +    int rc;
> +    bool_t violation;
> +    xenmem_access_t xma;
> +    mem_event_request_t *req;
> +    struct vcpu *v = current;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
> +
> +    /* Mem_access is not in use. */
> +    if ( !p2m->access_in_use )
> +        return true;

AFAIU, it's not possible to call this function when mem access is not in
use. I would turn this check into an ASSERT.


[..]

> +    if ( !violation )
> +        return true;
> +
> +    /* First, handle rx2rw and n2rwx conversion automatically. */
> +    if ( npfec.write_access && xma == XENMEM_access_rx2rw )
> +    {
> +        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                0, ~0, XENMEM_access_rw);
> +        return false;
> +    }
> +    else if ( xma == XENMEM_access_n2rwx )
> +    {
> +        rc = p2m_set_mem_access(v->domain, paddr_to_pfn(gpa), 1,
> +                                0, ~0, XENMEM_access_rwx);
> +    }
> +
> +    /* Otherwise, check if there is a memory event listener, and send the 
> message along */
> +    if ( !mem_event_check_ring( &v->domain->mem_event->access ) )

NIT: if ( !mem_event_check_ring(&v->domain->mem_event->access) )

> +    {
> +        /* No listener */
> +        if ( p2m->access_required )
> +        {
> +            gdprintk(XENLOG_INFO, "Memory access permissions failure, "
> +                                  "no mem_event listener VCPU %d, dom %d\n",
> +                                  v->vcpu_id, v->domain->domain_id);
> +            domain_crash(v->domain);
> +        }
> +        else
> +        {
> +            /* n2rwx was already handled */
> +            if ( xma != XENMEM_access_n2rwx)

NIT: if ( ... )

[..]

> +/* Set access type for a region of pfns.
> + * If start_pfn == -1ul, sets the default access type */
> +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
> +                        uint32_t start, uint32_t mask, xenmem_access_t 
> access)
> +{

[..]

> +    rc = apply_p2m_changes(d, MEMACCESS,
> +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
> +                           0, MATTR_MEM, mask, 0, a);
> +
> +    if ( rc < 0 )
> +        return rc;
> +    else if ( rc > 0 )
> +        return start+rc;

start + rc

> +
> +    flush_tlb_domain(d);

NIT: Missing blank line.

Regards,


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

