
Re: [Xen-devel] [PATCH v3 05/13] x86/altp2m: basic data structures and support routines.



On 07/01/2015 07:09 PM, Ed White wrote:
> diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
> index b4f035e..301ca59 100644
> --- a/xen/arch/x86/mm/mm-locks.h
> +++ b/xen/arch/x86/mm/mm-locks.h
> @@ -217,7 +217,7 @@ declare_mm_lock(nestedp2m)
>  #define nestedp2m_lock(d)   mm_lock(nestedp2m, &(d)->arch.nested_p2m_lock)
>  #define nestedp2m_unlock(d) mm_unlock(&(d)->arch.nested_p2m_lock)
>  
> -/* P2M lock (per-p2m-table)
> +/* P2M lock (per-non-alt-p2m-table)
>   *
>   * This protects all queries and updates to the p2m table.
>   * Queries may be made under the read lock but all modifications
> @@ -225,10 +225,44 @@ declare_mm_lock(nestedp2m)
>   *
>   * The write lock is recursive as it is common for a code path to look
>   * up a gfn and later mutate it.
> + *
> + * Note that this lock shares its implementation with the altp2m
> + * lock (not the altp2m list lock), so the implementation
> + * is found there.
>   */
>  
>  declare_mm_rwlock(p2m);
> -#define p2m_lock(p)           mm_write_lock(p2m, &(p)->lock);
> +
> +/* Alternate P2M list lock (per-domain)
> + *
> + * A per-domain lock that protects the list of alternate p2m's.
> + * Any operation that walks the list needs to acquire this lock.
> + * Additionally, before destroying an alternate p2m all VCPU's
> + * in the target domain must be paused.
> + */
> +
> +declare_mm_lock(altp2mlist)
> +#define altp2m_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_lock)
> +#define altp2m_unlock(d) mm_unlock(&(d)->arch.altp2m_lock)
> +
> +/* P2M lock (per-altp2m-table)
> + *
> + * This protects all queries and updates to the p2m table.
> + * Queries may be made under the read lock but all modifications
> + * need the main (write) lock.
> + *
> + * The write lock is recursive as it is common for a code path to look
> + * up a gfn and later mutate it.
> + */
> +
> +declare_mm_rwlock(altp2m);
> +#define p2m_lock(p)                         \
> +{                                           \
> +    if ( p2m_is_altp2m(p) )                 \
> +        mm_write_lock(altp2m, &(p)->lock);  \
> +    else                                    \
> +        mm_write_lock(p2m, &(p)->lock);     \
> +}
>  #define p2m_unlock(p)         mm_write_unlock(&(p)->lock);
>  #define gfn_lock(p,g,o)       p2m_lock(p)
>  #define gfn_unlock(p,g,o)     p2m_unlock(p)

I've just taken on the role of mm maintainer, and this late in a
series, with Tim's approval and Andy's Reviewed-by already in place, I'd
normally just skim through and Ack it as a matter of course.  But I
wouldn't feel right giving this my maintainer's ack without
understanding the locking a bit better, so please bear with me here.

I see in the cover letter that you "sandwiched" the altp2mlist lock
between p2m and altp2m at Tim's suggestion.  But I can't find the
discussion where that was suggested (it doesn't seem to be in the
responses to the v1 patch 5), and it's not immediately obvious to me,
either from the code or your comments, what that ordering is for.  Can
you point me to the discussion, and/or explain in more detail why it's
important to be able to grab the p2m lock before the altp2mlist lock,
but the altp2m lock only after it?
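For what it's worth, my current mental model of how an ordering like
this gets enforced is roughly the sketch below.  mm-locks.h tracks the
level of the deepest lock held per CPU and refuses to take a lock at or
below that level; the sketch uses made-up names and levels (they are
NOT Xen's actual identifiers) just to show the discipline implied by
declaring p2m, then altp2mlist, then altp2m:

```c
#include <assert.h>

/* Stand-in for Xen's per-CPU record of the deepest lock level held
 * (mm-locks.h uses the declaration's position in the file as the level). */
static int current_level = 0;

/* Illustrative levels matching the declaration order in the patch:
 * host p2m first, then the altp2m list lock, then the altp2m tables. */
enum { LEVEL_P2M = 10, LEVEL_ALTP2MLIST = 20, LEVEL_ALTP2M = 30 };

/* Succeeds only if the requested lock is strictly deeper than anything
 * already held; taking a lock at or below the current level would
 * invert the declared order and risk deadlock. */
static int try_acquire(int level)
{
    if ( level <= current_level )
        return 0;
    current_level = level;
    return 1;
}

/* Restore the level that was in effect before the matching acquire. */
static void release(int prev_level)
{
    current_level = prev_level;
}
```

So with that order, a path may take the host p2m lock, then walk the
altp2m list, then lock an individual altp2m table, but never the
reverse; what I'm missing is the reasoning for why p2m must come first.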

Thanks,
 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

