
[Xen-devel] [PATCH] page allocator: add mfn_valid() check to free_heap_pages() and scrub_heap_pages()



page allocator: add mfn_valid() check to free_heap_pages() and scrub_heap_pages()

Changesets 19913:ef38784f9f85 and 19914:d6c1d7992f43 eliminated the
boot allocator bitmap, which was also used as the buddy allocator
bitmap. With those changes, xen/ia64 no longer boots because the page
allocator touches struct page_info entries that don't exist. This
happens because memory is populated sparsely on ia64, and so the
struct page_info array is sparse as well.

This patch fixes the ia64 boot failure. In fact, this is also a
potential bug on x86: max_page happens to be well aligned there, so
the MAX_ORDER loop check has prevented the bug from appearing.

- fix free_heap_pages().
  When merging chunks, the buddy's struct page_info doesn't always
  exist, so check for it with mfn_valid().

- fix scrub_heap_pages().
  On ia64 the frame table is sparsely populated, so struct page_info
  doesn't always exist. Check for it with mfn_valid().

- offline_page(), online_page() and query_page_offline()
  Also replace the max_page range checks with mfn_valid() for
  consistency.

Signed-off-by: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -507,7 +507,8 @@ static void free_heap_pages(
         if ( (page_to_mfn(pg) & mask) )
         {
             /* Merge with predecessor block? */
-            if ( !page_state_is(pg-mask, free) ||
+            if ( !mfn_valid(page_to_mfn(pg-mask)) ||
+                 !page_state_is(pg-mask, free) ||
                  (PFN_ORDER(pg-mask) != order) )
                 break;
             pg -= mask;
@@ -516,7 +517,8 @@ static void free_heap_pages(
         else
         {
             /* Merge with successor block? */
-            if ( !page_state_is(pg+mask, free) ||
+            if ( !mfn_valid(page_to_mfn(pg+mask)) ||
+                 !page_state_is(pg+mask, free) ||
                  (PFN_ORDER(pg+mask) != order) )
                 break;
             page_list_del(pg + mask, &heap(node, zone, order));
@@ -608,7 +610,7 @@ int offline_page(unsigned long mfn, int 
     int ret = 0;
     struct page_info *pg;
 
-    if ( mfn > max_page )
+    if ( !mfn_valid(mfn) )
     {
         dprintk(XENLOG_WARNING,
                 "try to offline page out of range %lx\n", mfn);
@@ -694,7 +696,7 @@ unsigned int online_page(unsigned long m
     struct page_info *pg;
     int ret;
 
-    if ( mfn > max_page )
+    if ( !mfn_valid(mfn) )
     {
         dprintk(XENLOG_WARNING, "call expand_pages() first\n");
         return -EINVAL;
@@ -745,7 +747,7 @@ int query_page_offline(unsigned long mfn
 {
     struct page_info *pg;
 
-    if ( (mfn > max_page) || !page_is_ram_type(mfn, RAM_TYPE_CONVENTIONAL) )
+    if ( !mfn_valid(mfn) || !page_is_ram_type(mfn, RAM_TYPE_CONVENTIONAL) )
     {
         dprintk(XENLOG_WARNING, "call expand_pages() first\n");
         return -EINVAL;
@@ -886,7 +888,7 @@ void __init scrub_heap_pages(void)
         pg = mfn_to_page(mfn);
 
         /* Quick lock-free check. */
-        if ( !page_state_is(pg, free) )
+        if ( !mfn_valid(mfn) || !page_state_is(pg, free) )
             continue;
 
         /* Every 100MB, print a progress dot. */


-- 
yamahata

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
