
Re: [Xen-devel] [PATCH v4 2/9] x86: introduce a new set of APIs to manage Xen page tables


On 04/12/2019 17:10, Hongyan Xia wrote:
From: Wei Liu <wei.liu2@xxxxxxxxxx>

We are going to switch to using domheap page for page tables.
A new set of APIs is introduced to allocate, map, unmap and free pages
for page tables.

The allocation and deallocation work on mfn_t but not page_info,
because they are required to work even before frame table is set up.

Implement the old functions with the new ones. We will rewrite, site
by site, other mm functions that manipulate page tables to use the new
APIs.

Note these new APIs still use xenheap pages underneath and no actual
map and unmap is done, so that we don't break Xen halfway. They will
be switched to use domheap pages and dynamic mappings once usage of the
old APIs is eliminated.

No functional change intended in this patch.

Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>

Changed since v3:
- const qualify unmap_xen_pagetable_new().
- remove redundant parentheses.
  xen/arch/x86/mm.c        | 39 ++++++++++++++++++++++++++++++++++-----
  xen/include/asm-x86/mm.h | 11 +++++++++++
  2 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7d4dd80a85..ca362ad638 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@
  #include <xen/efi.h>
  #include <xen/grant_table.h>
  #include <xen/hypercall.h>
+#include <xen/mm.h>
  #include <asm/paging.h>
  #include <asm/shadow.h>
  #include <asm/page.h>
@@ -5020,22 +5021,50 @@ int mmcfg_intercept_write(
 void *alloc_xen_pagetable(void)
+{
+    mfn_t mfn;
+
+    mfn = alloc_xen_pagetable_new();
+    ASSERT(!mfn_eq(mfn, INVALID_MFN));

The current version of alloc_xen_pagetable() may return NULL. So I can't see why this would not happen with the new implementation (see more below).

Furthermore, AFAIK, mfn_to_virt() is not able to deal with INVALID_MFN. While the ASSERT will catch such an error in debug builds, a non-debug build will end up with undefined behavior (allocation failure is a real possibility).

Regardless of the discussion on whether the ASSERT() is warranted, I think you want to have:

if ( mfn_eq(mfn, INVALID_MFN) )
    return NULL;

I am half tempted to suggest to put that in map_xen_pagetable_new() for hardening.

+    return map_xen_pagetable_new(mfn);
+}
+
+void free_xen_pagetable(void *v)
+{
+    if ( system_state != SYS_STATE_early_boot )
+        free_xen_pagetable_new(virt_to_mfn(v));
+}
+
+mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();

         BUG_ON(!hardware_domain && !ptr);

This BUG_ON() only catches memory exhaustion before the hardware domain (aka Dom0) is created. Afterwards, the pointer may be NULL. As ...

-        return ptr;
+        return virt_to_mfn(ptr);

... virt_to_mfn() requires a valid MFN, so you will end up with undefined behavior.

-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }

-void free_xen_pagetable(void *v)
+void *map_xen_pagetable_new(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    return mfn_to_virt(mfn_x(mfn));
 }
+/* v can point to an entry within a table or be NULL */
+void unmap_xen_pagetable_new(const void *v)
+{
+    /* XXX still using xenheap page, no need to do anything.  */
+}
+
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
+{
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+}

 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 9d2b833579..76593fe9e7 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -582,6 +582,17 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
  /* Allocator functions for Xen pagetables. */
  void *alloc_xen_pagetable(void);
  void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void *map_xen_pagetable_new(mfn_t mfn);
+void unmap_xen_pagetable_new(const void *v);
+void free_xen_pagetable_new(mfn_t mfn);
+#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
+    do {                                \
+        unmap_xen_pagetable_new(ptr);   \
+        (ptr) = NULL;                   \
+    } while (0)
  l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
int __sync_local_execstate(void);

Julien Grall
