
Re: [Xen-devel] [PATCH 1/2] x86/mem-sharing: Bulk mem-sharing entire domains



> @@ -1467,6 +1503,56 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>         }
>         break;
>
> +    case XENMEM_sharing_op_bulk_dedup:
> +    {
> +        unsigned long max_sgfn, max_cgfn;
> +        struct domain *cd;
> +
> +        rc = -EINVAL;
> +        if ( !mem_sharing_enabled(d) )
> +            goto out;
> +
> +        rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
> +                                               &cd);
> +        if ( rc )
> +            goto out;
> +
> +        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +        if ( rc )
> +        {
> +            rcu_unlock_domain(cd);
> +            goto out;
> +        }
> +
> +        if ( !mem_sharing_enabled(cd) )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        max_sgfn = domain_get_maximum_gpfn(d);
> +        max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +        if ( max_sgfn != max_cgfn || max_sgfn < start_iter )
> +        {
> +            rcu_unlock_domain(cd);
> +            rc = -EINVAL;
> +            goto out;
> +        }
> +
> +        rc = bulk_share(d, cd, max_sgfn, start_iter, MEMOP_CMD_MASK);
> +        if ( rc > 0 )
> +        {
> +            ASSERT(!(rc & MEMOP_CMD_MASK));

The way other continuations like this work is to shift the remaining
work left by MEMOP_EXTENT_SHIFT.

This avoids bulk_share() needing to know MEMOP_CMD_MASK, but does chop 6
bits off the available max_sgfn.

However, a better alternative would be to extend xen_mem_sharing_op and
stash the continue information in a new union. That would avoid the
mask games, and also avoid limiting the maximum potential gfn.

~Andrew

I agree. I was already thinking of extending it to return the number of pages that were shared, so the continuation information could be folded in there too.

Thanks,
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
