
Re: [Xen-devel] [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains



>>> On 15.10.15 at 20:09, <tamas@xxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1293,6 +1293,42 @@ int relinquish_shared_pages(struct domain *d)
>      return rc;
>  }
>  
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      struct mem_sharing_op_bulk *bulk)
> +{
> +    int rc = 0;
> +    shr_handle_t sh, ch;
> +
> +    while( bulk->start <= max )
> +    {
> +        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
> +        if ( rc == -ENOMEM )
> +            break;
> +        else if ( rc )

Pointless else.
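
I.e. (just a sketch, untested):

    rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
    if ( rc == -ENOMEM )
        break;
    if ( rc )
        goto next;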

> +            goto next;
> +
> +        rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
> +        if ( rc == -ENOMEM )
> +            break;
> +        else if ( rc )
> +            goto next;
> +
> +        mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch);
> +
> +next:

Labels indented by at least one space please.
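
I.e.

 next:

(with at least one leading space; iirc this keeps diff -p from mistaking
the label for the function name in hunk headers).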

> @@ -1467,6 +1503,69 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          }
>          break;
>  
> +        case XENMEM_sharing_op_bulk_share:
> +        {
> +            unsigned long max_sgfn, max_cgfn;
> +            struct domain *cd;
> +
> +            rc = -EINVAL;
> +            if ( !mem_sharing_enabled(d) )
> +                goto out;
> +
> +            rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
> +                                                   &cd);
> +            if ( rc )
> +                goto out;
> +
> +            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
> +                goto out;
> +            }
> +
> +            if ( !mem_sharing_enabled(cd) )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            if ( !atomic_read(&d->pause_count) ||
> +                 !atomic_read(&cd->pause_count) )

As said in the reply to your earlier inquiry, you want to look at
controller_pause_count here instead.
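
I.e. something along the lines of (sketch only):

    if ( !d->controller_pause_count || !cd->controller_pause_count )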

> +            {
> +                rcu_unlock_domain(cd);

Considering how many times you do this, perhaps worth putting
a label at the (current) success path's unlock?
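
Roughly (the label name is just for illustration):

            if ( !mem_sharing_enabled(cd) )
            {
                rc = -EINVAL;
                goto out_unlock;
            }
            ...
 out_unlock:
            rcu_unlock_domain(cd);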

> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            max_sgfn = domain_get_maximum_gpfn(d);
> +            max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +            if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
> +            if ( rc > 0 )
> +            {
> +                if ( __copy_to_guest(arg, &mso, 1) )

Wouldn't copying just the one field you care about suffice here?
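
E.g. (assuming u.bulk.start is the only field the continuation needs to
see updated):

                if ( __copy_field_to_guest(arg, &mso, u.bulk.start) )
                    rc = -EFAULT;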

> +                    rc = -EFAULT;
> +                else
> +                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
> +                                                       "lh", XENMEM_sharing_op,
> +                                                       arg);
> +            } else {

Coding style.
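
I.e.

            }
            else
            {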

> +                mso.u.bulk.applied = atomic_read(&d->shr_pages);

Is there no possibility for something else to also modify d->shr_pages
in parallel, misguiding the caller when looking at the output field?
Also, you're not copying this back to guest memory...
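
For the latter, something like (sketch only):

                if ( __copy_field_to_guest(arg, &mso, u.bulk.applied) )
                    rc = -EFAULT;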

> @@ -482,7 +483,16 @@ struct xen_mem_sharing_op {
>              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>              uint64_aligned_t client_handle; /* IN: handle to the client page 
> */
>              domid_t  client_domain; /* IN: the client domain id */
> -        } share; 
> +        } share;
> +        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
> +            domid_t client_domain;           /* IN: the client domain id */

Explicit padding here please (albeit I can see from the context that
this isn't being done in e.g. the share sub-structure above either).
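
I.e. something like (the padding field's name is of course up to you):

        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
            domid_t client_domain;           /* IN: the client domain id */
            uint16_t _pad[3];                /* explicit padding up to the
                                                64-bit aligned field below */
            uint64_aligned_t start;          /* IN: start gfn */
            uint64_aligned_t applied;        /* OUT: gfns the op applied to */
        } bulk;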

Jan

> +            uint64_aligned_t start;          /* IN: start gfn. Set to 0 for
> +                                                full deduplication. Field is
> +                                                used internally and may change
> +                                                when the hypercall returns. */
> +            uint64_aligned_t applied;        /* OUT: the number of gfns
> +                                                where the operation applied */
> +        } bulk;

