WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-devel

To: "Tim Deegan" <tim@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH 3 of 9] Enforce ordering constraints for the page alloc lock in the PoD code
From: andres@xxxxxxxxxxxxxxxx
Date: Wed, 2 Nov 2011 06:59:04 -0700
Cc: olaf@xxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, andres@xxxxxxxxxxxxxx, keir.xen@xxxxxxxxx, Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>, adin@xxxxxxxxxxxxxx
Delivery-date: Wed, 02 Nov 2011 07:03:29 -0700
In-reply-to: <20111027133622.GJ59656@xxxxxxxxxxxxxxxxxxxxx>
References: <patchbomb.1319690025@xxxxxxxxxxxxxxxxxxx> <332775f72a3055c47c2b.1319690028@xxxxxxxxxxxxxxxxxxx> <20111027133622.GJ59656@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: SquirrelMail/1.4.21

Hi also,
> Hi,
>
> The intent of these first three patches looks good to me, but:
>
> - I think it would be better to generate generic spin-lock-with-level
>   and unlock-with-level wrapper functions rather than generating the
>   various checks and having to assemble them into lock_page_alloc() and
>   unlock_page_alloc() by hand.

The final intent is for these macros to establish ordering constraints
for the fine-grained p2m lock, whose acquisition involves more than just
grabbing a spinlock. Granted, we do not yet know whether we'll need such
a fine-grained approach, but I think it's worth keeping things separate.
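
For context, the two shapes under discussion reduce to roughly the
standalone sketch below. Everything here is invented for illustration
(stand-in types, made-up names and levels); it is not the real
mm-locks.h code.

#include <assert.h>

/* Level-ordered locking in miniature.  Each lock gets a static level,
 * and a per-thread (per-CPU in Xen) variable tracks the level currently
 * held.  Locks may only be taken in increasing level order; unlocking
 * restores the previous level, which is why a "saved level" has to be
 * stored somewhere (cf. page_alloc_unlock_level in the patch). */
static __thread int cur_lock_level = -1;

static void order_lock_pre(int level)
{
    /* Taking a lock at or below the current level is an ordering bug. */
    assert(cur_lock_level < level);
}

static int order_lock_post(int level)
{
    int saved = cur_lock_level;   /* caller stashes this for unlock time */
    cur_lock_level = level;
    return saved;
}

static void order_unlock(int saved)
{
    cur_lock_level = saved;
}

typedef struct { int taken; } spinlock_t;      /* stand-in lock type */
static void spin_lock(spinlock_t *l)   { l->taken = 1; }
static void spin_unlock(spinlock_t *l) { l->taken = 0; }

/* The shape Tim suggests, roughly: generic lock/unlock-with-level
 * wrappers, rather than assembling the pre/post checks by hand for each
 * lock the way lock_page_alloc()/unlock_page_alloc() do below. */
static int spin_lock_with_level(spinlock_t *l, int level)
{
    order_lock_pre(level);
    spin_lock(l);
    return order_lock_post(level);
}

static void spin_unlock_with_level(spinlock_t *l, int saved)
{
    order_unlock(saved);
    spin_unlock(l);
}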

As a side note, an earlier version of my patches did enforce ordering,
except that things got really hairy with mem_sharing_unshare_page (which
would jump levels up to grab shr_lock) and PoD sweeps. I (think I) have
solutions for both, but I'm not ready to push those yet.
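
The hazard, in terms of the sketch above (levels and the shr_lock
analogy invented for illustration, not the real call paths): a callee
that re-acquires a lock sitting earlier in the ordering trips the
pre-lock check.

static spinlock_t lock_a, lock_b;

static void hazard_example(void)
{
    int saved = spin_lock_with_level(&lock_a, 8);  /* think: p2m lock */

    /* A callee now wants a lock earlier in the ordering, e.g. level 4
     * (think: shr_lock).  order_lock_pre(4) asserts, since 4 <= 8.   */
    spin_lock_with_level(&lock_b, 4);              /* boom */

    spin_unlock_with_level(&lock_a, saved);
}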

> - p2m->pod.page_alloc_unlock_level is wrong, I think; I can see that you
>   need somewhere to store the unlock-level but it shouldn't live in
>   the p2m state - it's at most a per-domain variable, so it should
>   live in the struct domain; might as well be beside the lock itself.

Ok, sure. Although I think I need to make clear that this ordering
constraint only applies within the PoD code, which is why I wanted to
keep the book-keeping within the pod struct.
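
For concreteness, the two placements look roughly like this
(stripped-down sketches reusing the stand-in spinlock_t from the earlier
sketch; not the real Xen structs):

/* As posted: the saved level lives with the rest of the PoD
 * book-keeping. */
struct p2m_domain_sketch {
    struct {
        int page_alloc_unlock_level;   /* saved level for unlock time */
    } pod;
};

/* Tim's suggestion: keep it in struct domain, beside the lock whose
 * ordering it tracks. */
struct domain_sketch {
    spinlock_t page_alloc_lock;
    int        page_alloc_unlock_level;
};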

Andres
> Tim.
>
> At 00:33 -0400 on 27 Oct (1319675628), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/mm/mm-locks.h |  11 +++++++++++
>>  xen/arch/x86/mm/p2m-pod.c  |  40 +++++++++++++++++++++++++++-------------
>>  xen/include/asm-x86/p2m.h  |   5 +++++
>>  3 files changed, 43 insertions(+), 13 deletions(-)
>>
>>
>> The page alloc lock is sometimes used in the PoD code, with an
>> explicit expectation of ordering. Use our ordering constructs in the
>> mm layer to enforce this.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>>
>> diff -r c915609e4235 -r 332775f72a30 xen/arch/x86/mm/mm-locks.h
>> --- a/xen/arch/x86/mm/mm-locks.h
>> +++ b/xen/arch/x86/mm/mm-locks.h
>> @@ -155,6 +155,17 @@ declare_mm_lock(p2m)
>>  #define p2m_unlock(p)         mm_unlock(&(p)->lock)
>>  #define p2m_locked_by_me(p)   mm_locked_by_me(&(p)->lock)
>>
>> +/* Page alloc lock (per-domain)
>> + *
>> + * This is an external lock, not represented by an mm_lock_t. However,
>> + * the PoD code uses it in conjunction with the p2m lock, expecting
>> + * the ordering that we enforce here. */
>> +
>> +declare_mm_order_constraint(page_alloc)
>> +#define page_alloc_mm_pre_lock()   mm_enforce_order_lock_pre_page_alloc()
>> +#define page_alloc_mm_post_lock(l) mm_enforce_order_lock_post_page_alloc(&(l))
>> +#define page_alloc_mm_unlock(l)    mm_enforce_order_unlock((l))
>> +
>>  /* Paging lock (per-domain)
>>   *
>>   * For shadow pagetables, this lock protects
>> diff -r c915609e4235 -r 332775f72a30 xen/arch/x86/mm/p2m-pod.c
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -45,6 +45,20 @@
>>
>>  #define superpage_aligned(_x)  (((_x)&(SUPERPAGE_PAGES-1))==0)
>>
>> +/* Enforce lock ordering when grabbing the "external" page_alloc lock */
>> +static inline void lock_page_alloc(struct p2m_domain *p2m)
>> +{
>> +    page_alloc_mm_pre_lock();
>> +    spin_lock(&(p2m->domain->page_alloc_lock));
>> +    page_alloc_mm_post_lock(p2m->pod.page_alloc_unlock_level);
>> +}
>> +
>> +static inline void unlock_page_alloc(struct p2m_domain *p2m)
>> +{
>> +    page_alloc_mm_unlock(p2m->pod.page_alloc_unlock_level);
>> +    spin_unlock(&(p2m->domain->page_alloc_lock));
>> +}
>> +
>>  /*
>>   * Populate-on-demand functionality
>>   */
>> @@ -100,7 +114,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
>>          unmap_domain_page(b);
>>      }
>>
>> -    spin_lock(&d->page_alloc_lock);
>> +    lock_page_alloc(p2m);
>>
>>      /* First, take all pages off the domain list */
>>      for(i=0; i < 1 << order ; i++)
>> @@ -128,7 +142,7 @@ p2m_pod_cache_add(struct p2m_domain *p2m
>>       * This may cause "zombie domains" since the page will never be freed. */
>>      BUG_ON( d->arch.relmem != RELMEM_not_started );
>>
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>
>>      return 0;
>>  }
>> @@ -245,7 +259,7 @@ p2m_pod_set_cache_target(struct p2m_doma
>>
>>          /* Grab the lock before checking that pod.super is empty, or the last
>>           * entries may disappear before we grab the lock. */
>> -        spin_lock(&d->page_alloc_lock);
>> +        lock_page_alloc(p2m);
>>
>>          if ( (p2m->pod.count - pod_target) > SUPERPAGE_PAGES
>>               && !page_list_empty(&p2m->pod.super) )
>> @@ -257,7 +271,7 @@ p2m_pod_set_cache_target(struct p2m_doma
>>
>>          ASSERT(page != NULL);
>>
>> -        spin_unlock(&d->page_alloc_lock);
>> +        unlock_page_alloc(p2m);
>>
>>          /* Then free them */
>>          for ( i = 0 ; i < (1 << order) ; i++ )
>> @@ -378,7 +392,7 @@ p2m_pod_empty_cache(struct domain *d)
>>      BUG_ON(!d->is_dying);
>>      spin_barrier(&p2m->lock.lock);
>>
>> -    spin_lock(&d->page_alloc_lock);
>> +    lock_page_alloc(p2m);
>>
>>      while ( (page = page_list_remove_head(&p2m->pod.super)) )
>>      {
>> @@ -403,7 +417,7 @@ p2m_pod_empty_cache(struct domain *d)
>>
>>      BUG_ON(p2m->pod.count != 0);
>>
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>  }
>>
>>  int
>> @@ -417,7 +431,7 @@ p2m_pod_offline_or_broken_hit(struct pag
>>      if ( !(d = page_get_owner(p)) || !(p2m = p2m_get_hostp2m(d)) )
>>          return 0;
>>
>> -    spin_lock(&d->page_alloc_lock);
>> +    lock_page_alloc(p2m);
>>      bmfn = mfn_x(page_to_mfn(p));
>>      page_list_for_each_safe(q, tmp, &p2m->pod.super)
>>      {
>> @@ -448,12 +462,12 @@ p2m_pod_offline_or_broken_hit(struct pag
>>          }
>>      }
>>
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>      return 0;
>>
>>  pod_hit:
>>      page_list_add_tail(p, &d->arch.relmem_list);
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>      return 1;
>>  }
>>
>> @@ -994,7 +1008,7 @@ p2m_pod_demand_populate(struct p2m_domai
>>      if ( q == p2m_guest && gfn > p2m->pod.max_guest )
>>          p2m->pod.max_guest = gfn;
>>
>> -    spin_lock(&d->page_alloc_lock);
>> +    lock_page_alloc(p2m);
>>
>>      if ( p2m->pod.count == 0 )
>>          goto out_of_memory;
>> @@ -1008,7 +1022,7 @@ p2m_pod_demand_populate(struct p2m_domai
>>
>>      BUG_ON((mfn_x(mfn) & ((1 << order)-1)) != 0);
>>
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>
>>      gfn_aligned = (gfn >> order) << order;
>>
>> @@ -1040,7 +1054,7 @@ p2m_pod_demand_populate(struct p2m_domai
>>
>>      return 0;
>>  out_of_memory:
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>
>>      printk("%s: Out of populate-on-demand memory! tot_pages %" PRIu32 " pod_entries %" PRIi32 "\n",
>>             __func__, d->tot_pages, p2m->pod.entry_count);
>> @@ -1049,7 +1063,7 @@ out_fail:
>>      return -1;
>>  remap_and_retry:
>>      BUG_ON(order != PAGE_ORDER_2M);
>> -    spin_unlock(&d->page_alloc_lock);
>> +    unlock_page_alloc(p2m);
>>
>>      /* Remap this 2-meg region in singleton chunks */
>>      gfn_aligned = (gfn>>order)<<order;
>> diff -r c915609e4235 -r 332775f72a30 xen/include/asm-x86/p2m.h
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -270,6 +270,10 @@ struct p2m_domain {
>>       * + p2m_pod_demand_populate() grabs both; the p2m lock to avoid
>>       *   double-demand-populating of pages, the page_alloc lock to
>>       *   protect moving stuff from the PoD cache to the domain page list.
>> +     *
>> +     * We enforce this lock ordering through a construct in mm-locks.h.
>> +     * This demands, however, that we store the previous lock-ordering
>> +     * level in effect before grabbing the page_alloc lock.
>>       */
>>      struct {
>>          struct page_list_head super,   /* List of superpages */
>> @@ -279,6 +283,7 @@ struct p2m_domain {
>>          unsigned         reclaim_super; /* Last gpfn of a scan */
>>          unsigned         reclaim_single; /* Last gpfn of a scan */
>>          unsigned         max_guest;    /* gpfn of max guest demand-populate */
>> +        int              page_alloc_unlock_level; /* To enforce lock ordering */
>>      } pod;
>>  };
>>
>>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
