
Re: [Xen-devel] [PATCH 06/12] xenpaging: update machine_to_phys_mapping[] during page deallocation


  • To: Olaf Hering <olaf@xxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Tue, 11 Jan 2011 11:29:23 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 11 Jan 2011 03:59:48 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcuxgsubxBwwbI8syEW44VZXPTY2xQ==
  • Thread-topic: [Xen-devel] [PATCH 06/12] xenpaging: update machine_to_phys_mapping[] during page deallocation

On 11/01/2011 11:00, "Olaf Hering" <olaf@xxxxxxxxx> wrote:

> On Tue, Jan 11, Keir Fraser wrote:
> 
>> Could we do this in free_heap_pages() instead? That definitely catches
>> everything that gets placed in Xen's free pool.
> 
> Yes, this is a compile-tested patch.

Thanks. I fixed a subtle bug (rather disgustingly, set_gpfn_from_mfn()
depends on page_get_owner(mfn_to_page(mfn))), and checked it in as c/s
22706.

 -- Keir

> 
> The machine_to_phys_mapping[] array needs to be updated during page
> deallocation.  If that page is allocated again, a call to
> get_gpfn_from_mfn() will still return an old gfn from another guest.
> This causes trouble because that gfn has no meaning, or a different
> one, in the context of the current guest.
> 
> This happens when the entire guest RAM is paged out before
> xen_vga_populate_vram() runs.  Then XENMEM_populate_physmap is called
> with gfn 0xff000.  A new page is allocated with alloc_domheap_pages().
> This new page does not have a gfn yet.  However, in
> guest_physmap_add_entry() the passed mfn still maps to an old gfn
> (perhaps from another, older guest).  This old gfn is in the paged-out
> state in this guest's context and has no mfn anymore.  As a result, the
> ASSERT() triggers because p2m_is_ram() is true for the p2m_ram_paging*
> types.
> If the machine_to_phys_mapping[] array is updated properly, both loops
> in guest_physmap_add_entry() turn into no-ops for the new page, and the
> mfn/gfn mapping is established at the end of the function.
> 
> If XENMEM_add_to_physmap is used with XENMAPSPACE_gmfn,
> get_gpfn_from_mfn() will return an apparently valid gfn.  As a result,
> guest_physmap_remove_page() is called.  The ASSERT() in p2m_remove_page()
> triggers because the passed mfn does not match the old mfn for the
> passed gfn.
> 
> 
> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
> 
> ---
> v2:
>     move from free_domheap_pages() to free_heap_pages() as suggested by Keir
> 
>  xen/common/page_alloc.c |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> --- xen-unstable.hg-4.1.22697.orig/xen/common/page_alloc.c
> +++ xen-unstable.hg-4.1.22697/xen/common/page_alloc.c
> @@ -527,6 +527,7 @@ static int reserve_offlined_page(struct
>  static void free_heap_pages(
>      struct page_info *pg, unsigned int order)
>  {
> +    unsigned long mfn;
>      unsigned long mask;
>      unsigned int i, node = phys_to_nid(page_to_maddr(pg)), tainted = 0;
>      unsigned int zone = page_to_zone(pg);
> @@ -536,6 +537,11 @@ static void free_heap_pages(
>  
>      spin_lock(&heap_lock);
>  
> +    /* this page is not a gfn anymore */
> +    mfn = page_to_mfn(pg);
> +    for ( i = 0; i < (1 << order); i++ )
> +        set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
> +
>      for ( i = 0; i < (1 << order); i++ )
>      {
>          /*



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
