To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] page allocator: add mfn_valid() check to free_heap_pages() and scrub_pages()
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 22 Jul 2009 06:10:25 -0700
Delivery-date: Wed, 22 Jul 2009 06:19:21 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1248267981 -3600
# Node ID 5adc108c0085c6901e67d0b7ceebb1022c2c0ffd
# Parent  091036b8dbb9420a1bb0aaf2dc793c268371b0e9
page allocator: add mfn_valid() check to free_heap_pages() and scrub_pages()

The changesets 19913:ef38784f9f85 and 19914:d6c1d7992f43 eliminate the
boot allocator bitmap, which was also used as the buddy allocator bitmap.
With those patches, xen/ia64 no longer boots, because the page allocator
touches struct page_info entries that don't exist.  That happens because
memory is populated sparsely on ia64, and the struct page_info array is
sparse as well.

This patch fixes the ia64 boot failure.  In fact, this is also a
potential bug on x86: max_page seems to be well aligned, so the
MAX_ORDER loop check has prevented the bug from appearing.
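
For illustration only (this sketch is not part of the patch): a
self-contained toy model of why a plain range check against max_page is
not enough on a sparse memory map.  The MAX_PAGE value, the hole
boundaries and the page_info_exists[] array below are invented for the
example; only the idea -- an mfn below max_page can still have no
struct page_info behind it -- mirrors the ia64 situation.

/* Toy model: max_page covers the whole span 0..63, but MFNs 32-47 sit
 * in a hole with no struct page_info behind them. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PAGE 64                     /* one past the last MFN */

static bool page_info_exists[MAX_PAGE]; /* toy stand-in for frame_table */

static bool mfn_valid(unsigned long mfn)
{
    return (mfn < MAX_PAGE) && page_info_exists[mfn];
}

int main(void)
{
    unsigned long mfn;

    /* Populate 0-31 and 48-63; leave 32-47 as a hole. */
    for ( mfn = 0; mfn < 32; mfn++ )
        page_info_exists[mfn] = true;
    for ( mfn = 48; mfn < MAX_PAGE; mfn++ )
        page_info_exists[mfn] = true;

    mfn = 40;                           /* inside the hole */
    printf("mfn %lu: below max_page? %d  mfn_valid? %d\n",
           mfn, (int)(mfn < MAX_PAGE), (int)mfn_valid(mfn));
    /* Prints "below max_page? 1  mfn_valid? 0": the range check alone
     * would let the allocator touch a page_info that does not exist. */
    return 0;
}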

- fix free_heap_pages().
  When merging chunks, the buddy's struct page_info does not always
  exist, so check it with mfn_valid() (see the sketch after these
  notes).

- fix scrub_pages().
  On ia64 the struct page_info array is sparsely populated, so an entry
  does not always exist for a given mfn.  Check it with mfn_valid().

- offline_page(), online_page() and query_page_offline()
  Also replace the "< max_page" range check with mfn_valid() for
  consistency.
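
A second illustrative sketch (also not part of the patch), mirroring
the merge-direction check that free_heap_pages() performs with
pg - mask / pg + mask, but expressed on raw MFNs.  mfn_valid() is
assumed to behave as in the sketch above; free_state[] and
chunk_order[] are invented stand-ins for the page state and
PFN_ORDER() lookups.

#include <stdbool.h>

#define MAX_PAGE 64

bool mfn_valid(unsigned long mfn);      /* assumed: true iff a struct
                                           page_info exists for mfn */
static bool free_state[MAX_PAGE];       /* toy per-page "is free" flag */
static unsigned int chunk_order[MAX_PAGE]; /* toy per-chunk order */

static bool buddy_can_merge(unsigned long mfn, unsigned int order)
{
    unsigned long mask = 1UL << order;
    /* Predecessor or successor chunk, just as pg - mask / pg + mask. */
    unsigned long buddy = (mfn & mask) ? (mfn - mask) : (mfn + mask);

    if ( !mfn_valid(buddy) )            /* buddy may lie in a memory hole */
        return false;

    /* Only now is it safe to inspect the buddy's state and order. */
    return free_state[buddy] && (chunk_order[buddy] == order);
}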

Signed-off-by: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
---
 xen/common/page_alloc.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff -r 091036b8dbb9 -r 5adc108c0085 xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Wed Jul 22 14:05:26 2009 +0100
+++ b/xen/common/page_alloc.c   Wed Jul 22 14:06:21 2009 +0100
@@ -507,7 +507,8 @@ static void free_heap_pages(
         if ( (page_to_mfn(pg) & mask) )
         {
             /* Merge with predecessor block? */
-            if ( !page_state_is(pg-mask, free) ||
+            if ( !mfn_valid(page_to_mfn(pg-mask)) ||
+                 !page_state_is(pg-mask, free) ||
                  (PFN_ORDER(pg-mask) != order) )
                 break;
             pg -= mask;
@@ -516,7 +517,8 @@ static void free_heap_pages(
         else
         {
             /* Merge with successor block? */
-            if ( !page_state_is(pg+mask, free) ||
+            if ( !mfn_valid(page_to_mfn(pg+mask)) ||
+                 !page_state_is(pg+mask, free) ||
                  (PFN_ORDER(pg+mask) != order) )
                 break;
             page_list_del(pg + mask, &heap(node, zone, order));
@@ -608,7 +610,7 @@ int offline_page(unsigned long mfn, int 
     int ret = 0;
     struct page_info *pg;
 
-    if ( mfn > max_page )
+    if ( !mfn_valid(mfn) )
     {
         dprintk(XENLOG_WARNING,
                 "try to offline page out of range %lx\n", mfn);
@@ -694,7 +696,7 @@ unsigned int online_page(unsigned long m
     struct page_info *pg;
     int ret;
 
-    if ( mfn > max_page )
+    if ( !mfn_valid(mfn) )
     {
         dprintk(XENLOG_WARNING, "call expand_pages() first\n");
         return -EINVAL;
@@ -745,7 +747,7 @@ int query_page_offline(unsigned long mfn
 {
     struct page_info *pg;
 
-    if ( (mfn > max_page) || !page_is_ram_type(mfn, RAM_TYPE_CONVENTIONAL) )
+    if ( !mfn_valid(mfn) || !page_is_ram_type(mfn, RAM_TYPE_CONVENTIONAL) )
     {
         dprintk(XENLOG_WARNING, "call expand_pages() first\n");
         return -EINVAL;
@@ -886,7 +888,7 @@ void __init scrub_heap_pages(void)
         pg = mfn_to_page(mfn);
 
         /* Quick lock-free check. */
-        if ( !page_state_is(pg, free) )
+        if ( !mfn_valid(mfn) || !page_state_is(pg, free) )
             continue;
 
         /* Every 100MB, print a progress dot. */

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
