
[Xen-devel] [PATCH] Add broken memory type in p2m table



Add broken memory type in p2m table

On some platforms, the whole system will crash if broken memory is accessed,
no matter whether the access comes from a guest or from the hypervisor. This
can be exploited by a malicious guest that deliberately accesses such memory.
To avoid this, we need to guard against accesses from the guest. Moreover, we
need to make sure the host does not access the memory on the guest's behalf,
for example when doing instruction emulation.

This patch guards against accesses from EPT guests. A new broken memory type
is added. Because ept_p2m_type_to_flags() leaves the r/w/x bits zero for any
type it does not handle explicitly, a page of the broken memory type gets a
non-present EPT entry, so a guest access causes an EPT violation VM exit, as
the sketch below illustrates.

In the Xen hypervisor's EPT-violation VM-exit handler, when the handler tries
to translate the gpfn to an mfn through a p2m_guest query, this patch causes
the domain to be crashed.
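In outline (a simplified, hypothetical sketch -- the real path runs through
hvm_hap_nested_page_fault() and the EPT-violation exit handler, not a
function of this name):

/* Hypothetical sketch of the EPT-violation handling path. */
static void ept_violation_sketch(struct p2m_domain *p2m, paddr_t gpa)
{
    unsigned long gfn = gpa >> PAGE_SHIFT;
    p2m_type_t t;
    mfn_t mfn;

    /* p2m_guest query: for a p2m_ram_broken gfn, the
     * _gfn_to_mfn_type() change below calls domain_crash(). */
    mfn = gfn_to_mfn_guest(p2m, gfn, &t);

    /* ... normal handling continues when mfn is valid ... */
}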

The change to _gfn_to_mfn_type() makes it return INVALID_MFN for the broken
memory type, so that the crash caused by a hypervisor access is avoided as
long as the caller checks the return value. This scenario is hard to exercise
with EPT, since an EPT violation will usually occur before the hypervisor
ever touches the broken memory, but the check should be helpful for other
memory types.
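For example, a hypothetical host-side caller (say, in an emulation path) is
protected simply by checking for INVALID_MFN:

/* Hypothetical caller: check the returned mfn before using it. */
p2m_type_t t;
mfn_t mfn = gfn_to_mfn(p2m, gfn, &t);

if ( mfn_x(mfn) == INVALID_MFN )
    return X86EMUL_UNHANDLEABLE; /* broken page: do not touch it */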

Signed-off-by: Jiang, Yunhong <yunhong.jiang@xxxxxxxxx>
Acked-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>

diff -r b77fd3189850 xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h Sun Sep 12 16:36:23 2010 +0800
+++ b/xen/include/asm-x86/p2m.h Mon Sep 13 05:53:36 2010 +0800
@@ -85,6 +85,7 @@ typedef enum {
     p2m_ram_paging_in = 11,       /* Memory that is being paged in */
     p2m_ram_paging_in_start = 12, /* Memory that is being paged in */
     p2m_ram_shared = 13,          /* Shared or sharable memory */
+    p2m_ram_broken = 14,          /* Broken page; access causes domain crash */
 } p2m_type_t;
 
 typedef enum {
@@ -138,6 +139,7 @@ typedef enum {
  * reinit the type correctly after fault */
 #define P2M_SHARABLE_TYPES (p2m_to_mask(p2m_ram_rw))
 #define P2M_SHARED_TYPES   (p2m_to_mask(p2m_ram_shared))
+#define P2M_BROKEN_TYPES   (p2m_to_mask(p2m_ram_broken))
 
 /* Useful predicates */
 #define p2m_is_ram(_t) (p2m_to_mask(_t) & P2M_RAM_TYPES)
@@ -155,7 +157,7 @@ typedef enum {
 #define p2m_is_paged(_t)    (p2m_to_mask(_t) & P2M_PAGED_TYPES)
 #define p2m_is_sharable(_t) (p2m_to_mask(_t) & P2M_SHARABLE_TYPES)
 #define p2m_is_shared(_t)   (p2m_to_mask(_t) & P2M_SHARED_TYPES)
-
+#define p2m_is_broken(_t)   (p2m_to_mask(_t) & P2M_BROKEN_TYPES)
 
 /* Populate-on-demand */
 #define POPULATE_ON_DEMAND_MFN  (1<<9)
@@ -306,17 +308,31 @@ static inline mfn_t _gfn_to_mfn_type(str
                                      unsigned long gfn, p2m_type_t *t,
                                      p2m_query_t q)
 {
+    mfn_t mfn;
+
     if ( !p2m || !paging_mode_translate(p2m->domain) )
     {
         /* Not necessarily true, but for non-translated guests, we claim
          * it's the most generic kind of memory */
         *t = p2m_ram_rw;
-        return _mfn(gfn);
+        mfn = _mfn(gfn);
     }
-    if ( likely(current->domain == p2m->domain) )
-        return gfn_to_mfn_type_current(p2m, gfn, t, q);
+    else if ( likely(current->domain == p2m->domain) )
+        mfn = gfn_to_mfn_type_current(p2m, gfn, t, q);
     else
-        return gfn_to_mfn_type_p2m(p2m, gfn, t, q);
+        mfn = gfn_to_mfn_type_p2m(p2m, gfn, t, q);
+
+#ifdef __x86_64__
+    if ( unlikely(p2m_is_broken(*t)) )
+    {
+        /* Return INVALID_MFN so that callers cannot access the page */
+        mfn = _mfn(INVALID_MFN);
+        if ( q == p2m_guest )
+            domain_crash(p2m->domain);
+    }
+#endif
+
+    return mfn;
 }
 
 #define gfn_to_mfn(p2m, g, t) _gfn_to_mfn_type((p2m), (g), (t), p2m_alloc)


Attachment: p2mt.patch
Description: p2mt.patch
