
Re: [Xen-devel] [PATCH v5 03/12] arm/mem_access: Add defines supporting PTs with varying page sizes



Hi Sergej,

On 06/27/2017 12:52 PM, Sergej Proskurin wrote:
The ARMv8 architecture supports pages with different sizes (4K, 16K, and 64K).
To enable guest page table walks for various configurations, this commit
extends the defines and helpers of the current implementation.

Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---
v3: Eliminate redundant macro definitions by introducing generic macros.

v4: Replace existing macros with ones that generate static inline
     helpers so as to ease the readability of the code.

     Move the introduced code into lpae.h

v5: Remove PAGE_SHIFT_* defines from lpae.h as we import them now from
     the header xen/lib.h.

     Remove the *_guest_table_offset macros so as to reduce the number of
     exported macros which are only used once. Instead, use the
     associated functionality directly within
     GUEST_TABLE_OFFSET_HELPERS.

     Add comment in GUEST_TABLE_OFFSET_HELPERS stating that a page table
     with 64K page size granularity does not have a zeroeth lookup level.

     Add #undefs for GUEST_TABLE_OFFSET and GUEST_TABLE_OFFSET_HELPERS.

     Remove CONFIG_ARM_64 #defines.
---
  xen/include/asm-arm/lpae.h | 62 ++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 62 insertions(+)

diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
index 6fbf7c606c..2f7891ed0b 100644
--- a/xen/include/asm-arm/lpae.h
+++ b/xen/include/asm-arm/lpae.h
@@ -3,6 +3,8 @@
  #ifndef __ASSEMBLY__

+#include <xen/lib.h>
+
  /*
   * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
   * l4 is the root of the trie, the ARM pagetables follow ARM's documentation:
@@ -151,6 +153,66 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
      return (level < 3) && lpae_mapping(pte);
  }
+/*
+ * The ARMv8 architecture supports pages with different sizes (4K, 16K, and
+ * 64K). To enable guest page table walks for various configurations, the
+ * following helpers enable walking the guest's translation table with varying
+ * page size granularities.
+ */
+
+#define LPAE_SHIFT_4K           (9)
+#define LPAE_SHIFT_16K          (11)
+#define LPAE_SHIFT_64K          (13)
+
+#define lpae_entries(gran)      (_AC(1,U) << LPAE_SHIFT_##gran)
+#define lpae_entry_mask(gran)   (lpae_entries(gran) - 1)
+
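
(As a quick sanity check on these constants: each LPAE table entry is 8
bytes, so a 4K page holds 4096 / 8 = 512 = 2^9 entries per table, a 16K
page 16384 / 8 = 2^11 entries, and a 64K page 65536 / 8 = 2^13 entries,
matching the three LPAE_SHIFT_* values above.)
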
+#define third_shift(gran)       (PAGE_SHIFT_##gran)
+#define third_size(gran)        ((paddr_t)1 << third_shift(gran))
+
+#define second_shift(gran)      (third_shift(gran) + LPAE_SHIFT_##gran)
+#define second_size(gran)       ((paddr_t)1 << second_shift(gran))
+
+#define first_shift(gran)       (second_shift(gran) + LPAE_SHIFT_##gran)
+#define first_size(gran)        ((paddr_t)1 << first_shift(gran))
+
+/* Note that there is no zeroeth lookup level with a 64K granule size. */
+#define zeroeth_shift(gran)     (first_shift(gran) + LPAE_SHIFT_##gran)
+#define zeroeth_size(gran)      ((paddr_t)1 << zeroeth_shift(gran))
+
+#define GUEST_TABLE_OFFSET(offs, gran)          ((paddr_t)(offs) & lpae_entry_mask(gran))
+#define GUEST_TABLE_OFFSET_HELPERS(gran)                                        \
+static inline vaddr_t third_guest_table_offset_##gran##K(vaddr_t gva)           \

Sorry I haven't spotted it before. This is not going to work properly on 32-bit if you use vaddr_t. Indeed, the input for the stage-2 page tables (i.e. an IPA) will be 40-bit, but vaddr_t is only 32-bit. So you need to use paddr_t here and in all the helpers below.
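
Something like the following, i.e. paddr_t for both the parameter and the
return type (an untested sketch of the shape of the change, shown for the
third-level helper only; the remaining helpers would change the same way,
and the parameter name gpa is only illustrative):

static inline paddr_t third_guest_table_offset_##gran##K(paddr_t gpa)          \
{                                                                              \
    return GUEST_TABLE_OFFSET((gpa >> third_shift(gran##K)), gran##K);         \
}                                                                              \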

+{                                                                               \
+    return GUEST_TABLE_OFFSET((gva >> third_shift(gran##K)), gran##K);          \
+}                                                                               \
+                                                                                \
+static inline vaddr_t second_guest_table_offset_##gran##K(vaddr_t gva)          \
+{                                                                               \
+    return GUEST_TABLE_OFFSET((gva >> second_shift(gran##K)), gran##K);         \
+}                                                                               \
+                                                                                \
+static inline vaddr_t first_guest_table_offset_##gran##K(vaddr_t gva)           \
+{                                                                               \
+    return GUEST_TABLE_OFFSET(((paddr_t)gva >> first_shift(gran##K)), gran##K); \
+}                                                                               \
+                                                                                \
+static inline vaddr_t zeroeth_guest_table_offset_##gran##K(vaddr_t gva)         \
+{                                                                               \
+    /* Note that there is no zeroeth lookup level with a 64K granule size. */   \
+    if ( gran == 64 )                                                           \
+        return 0;                                                               \
+    else                                                                        \
+        return GUEST_TABLE_OFFSET(((paddr_t)gva >> zeroeth_shift(gran##K)), gran##K); \
+}                                                                               \
+
+GUEST_TABLE_OFFSET_HELPERS(4);
+GUEST_TABLE_OFFSET_HELPERS(16);
+GUEST_TABLE_OFFSET_HELPERS(64);
+
+#undef GUEST_TABLE_OFFSET
+#undef GUEST_TABLE_OFFSET_HELPERS
+
  #endif /* __ASSEMBLY__ */
/*

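For completeness, a quick illustration of how the generated helpers end up
being used (untested, and assuming the paddr_t change suggested above; with
a 4K granule, PAGE_SHIFT_4K == 12 and LPAE_SHIFT_4K == 9, so the successive
shifts are 12, 21, 30 and 39):

    paddr_t ipa = 0x40201000ULL;    /* example guest address */

    /* Each call extracts one 9-bit table index from the address. */
    paddr_t off0 = zeroeth_guest_table_offset_4K(ipa); /* ipa[47:39] */
    paddr_t off1 = first_guest_table_offset_4K(ipa);   /* ipa[38:30] */
    paddr_t off2 = second_guest_table_offset_4K(ipa);  /* ipa[29:21] */
    paddr_t off3 = third_guest_table_offset_4K(ipa);   /* ipa[20:12] */

    /* With a 64K granule there is no zeroeth level, so this returns 0. */
    paddr_t none = zeroeth_guest_table_offset_64K(ipa);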

--
Julien Grall
