To: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 8 of 9] Modify all internal p2m functions to use the new fine-grained locking
From: Tim Deegan <tim@xxxxxxx>
Date: Thu, 27 Oct 2011 15:57:11 +0100
Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, olaf@xxxxxxxxx, keir.xen@xxxxxxxxx, adin@xxxxxxxxxxxxxx
Delivery-date: Thu, 27 Oct 2011 07:58:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <471d4f2754d6516d5926.1319690033@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1319690025@xxxxxxxxxxxxxxxxxxx> <471d4f2754d6516d5926.1319690033@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
Hi, 

At 00:33 -0400 on 27 Oct (1319675633), Andres Lagar-Cavilla wrote:
> This patch only modifies code internal to the p2m, adding convenience
> macros, etc. It will yield a compiling code base but an incorrect
> hypervisor (external callers of queries into the p2m will not unlock).
> Next patch takes care of external callers, split done for the benefit
> of conciseness.

Better to do it the other way round: put the enormous change-all-callers
patch first, with noop unlock functions, and then hook in the unlocks.
That way you won't cause chaos when people try to bisect to find when a
bug was introduced. 
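
Roughly, patch 1 could add the unlock entry point as a stub (only the
drop_p2m_gfn() signature below is taken from your series; the staging
itself is just an illustration):

    /* Patch 1: convert every caller to pair its query with a drop, but
     * leave the drop as a no-op so the tree behaves identically at each
     * bisection point. */
    void drop_p2m_gfn(struct p2m_domain *p2m, unsigned long gfn,
                      unsigned long mfn)
    {
        /* No-op until the fine-grained locking is switched on. */
    }

    /* Patch 2: fill in the real unlock (and refcount drop) here, once
     * all callers already call it. */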

> diff -r 8a98179666de -r 471d4f2754d6 xen/include/asm-x86/p2m.h
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -220,7 +220,7 @@ struct p2m_domain {
>       * tables on every host-p2m change.  The setter of this flag 
>       * is responsible for performing the full flush before releasing the
>       * host p2m's lock. */
> -    int                defer_nested_flush;
> +    atomic_t           defer_nested_flush;
>  
>      /* Pages used to construct the p2m */
>      struct page_list_head pages;
> @@ -298,6 +298,15 @@ struct p2m_domain *p2m_get_p2m(struct vc
>  #define p2m_get_pagetable(p2m)  ((p2m)->phys_table)
>  
>  
> +/* No matter what value you get out of a query, the p2m has been locked for
> + * that range. No matter what you do, you need to drop those locks.
> + * You need to pass back the mfn obtained when locking, not the new one,
> + * as the refcount of the original mfn was bumped. */

Surely the caller doesn't need to remember the old MFN for this?  After
all, the whole point of the lock was that nobody else could change the
p2m entry under our feet!

In any case, I think there needs to be a big block comment a bit further
up that describes what all this locking and refcounting does, and why. 
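
For what it's worth, the caller-side pattern I read your comment as
requiring is roughly the below (illustrative only; gfn_to_mfn_query()
is the existing query, the drop is your new macro):

    static void inspect_gfn(struct domain *d, unsigned long gfn)
    {
        p2m_type_t t;
        mfn_t mfn = gfn_to_mfn_query(d, gfn, &t);  /* locks that gfn */

        if ( p2m_is_ram(t) && mfn_valid(mfn) )
        {
            /* ... use the mapping while the entry is locked ... */
        }

        /* Pass back the mfn obtained at query time, i.e. the one whose
         * refcount was bumped. */
        drop_p2m_gfn_domain(d, gfn, mfn_x(mfn));
    }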

> +void drop_p2m_gfn(struct p2m_domain *p2m, unsigned long gfn, 
> +                        unsigned long mfn);
> +#define drop_p2m_gfn_domain(d, g, m)    \
> +        drop_p2m_gfn(p2m_get_hostp2m((d)), (g), (m))
> +
>  /* Read a particular P2M table, mapping pages as we go.  Most callers
>   * should _not_ call this directly; use the other gfn_to_mfn_* functions
>   * below unless you know you want to walk a p2m that isn't a domain's
> @@ -327,6 +336,28 @@ static inline mfn_t gfn_to_mfn_type(stru
>  #define gfn_to_mfn_guest(d, g, t)   gfn_to_mfn_type((d), (g), (t), p2m_guest)
>  #define gfn_to_mfn_unshare(d, g, t) gfn_to_mfn_type((d), (g), (t), p2m_unshare)
>  
> +/* This one applies to very specific situations in which you're querying
> + * a p2m entry and will be done "immediately" (such as a printk or computing a
> + * return value). Use this only if there are no expectations of the p2m entry
> + * holding steady. */
> +static inline mfn_t gfn_to_mfn_type_unlocked(struct domain *d,
> +                                        unsigned long gfn, p2m_type_t *t,
> +                                        p2m_query_t q)
> +{
> +    mfn_t mfn = gfn_to_mfn_type(d, gfn, t, q);
> +    drop_p2m_gfn_domain(d, gfn, mfn_x(mfn));
> +    return mfn;
> +}
> +
> +#define gfn_to_mfn_unlocked(d, g, t)            \
> +    gfn_to_mfn_type_unlocked((d), (g), (t), p2m_alloc)
> +#define gfn_to_mfn_query_unlocked(d, g, t)    \
> +    gfn_to_mfn_type_unlocked((d), (g), (t), p2m_query)
> +#define gfn_to_mfn_guest_unlocked(d, g, t)    \
> +    gfn_to_mfn_type_unlocked((d), (g), (t), p2m_guest)
> +#define gfn_to_mfn_unshare_unlocked(d, g, t)    \
> +    gfn_to_mfn_type_unlocked((d), (g), (t), p2m_unshare)
> +

Ugh.  This could really benefit from having the gfn_to_mfn_* functions
take a set of flags instead of an enum.  This exponential blowup in
interface is going too far. :)

That oughtn't to stop this interface from going in, of course, but if
we're going to tinker with the p2m callers once, we should do it all
together. 
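
To illustrate the sort of thing I mean (entirely made up, just to show
the shape): fold the query type and the locking behaviour into one
bitmask instead of one wrapper macro per combination.

    #define P2M_ALLOC     (1u << 0)  /* populate/PoD-allocate if needed */
    #define P2M_UNSHARE   (1u << 1)  /* break CoW sharing */
    #define P2M_GUEST     (1u << 2)  /* access is on behalf of the guest */
    #define P2M_UNLOCKED  (1u << 3)  /* drop the gfn lock before returning */

    mfn_t gfn_to_mfn_flags(struct domain *d, unsigned long gfn,
                           p2m_type_t *t, unsigned int flags);

    /* A transient lookup for a printk:
     *     mfn = gfn_to_mfn_flags(d, gfn, &t, P2M_UNLOCKED);
     * versus one that holds the entry locked until an explicit drop:
     *     mfn = gfn_to_mfn_flags(d, gfn, &t, P2M_ALLOC | P2M_UNSHARE);
     */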

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
