[Xen-devel] [RFC]vmx: Enable direct hardware n/p fault injection

To: "Tim Deegan" <Tim.Deegan@xxxxxxxxxx>
Subject: [Xen-devel] [RFC]vmx: Enable direct hardware n/p fault injection
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Mon, 4 Feb 2008 15:24:17 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx

Hi, Tim,

As stated in the subject, I'd like to request comments on:
    * Is this feature worth having?
    * If so, are there any corner cases I may have missed?

Two known issues remain to be solved:
1. Since this feature replaces the existing fast n/p path, it's
also guarded by SHOPT_FAST_FAULT_PATH. However, that option is
not exported to other components, so how can vmcs.c decide
whether to enable the feature based on the shadow settings? We
may need an hvm hook, invoked when shadow mode is initialized,
that reports the shadow options (see the sketch below).
Currently the hw feature is forced on from vmcs.c.
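
One possible shape for such a hook (a rough sketch only: neither
the hook nor its call site exists today, and the names are made up):

    /* Shadow code, which can see the SHOPT_* mask, reports the
     * decision once, when shadow mode is set up for the domain. */
    void paging_set_hw_np_inject(struct domain *d, int enable)
    {
        d->arch.paging.hw_np_inject = is_hvm_domain(d) && enable;
    }

    /* e.g. called from shadow_domain_init(): */
    paging_set_hw_np_inject(d,
        !!(SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH));

    /* construct_vmcs() could then test the flag rather than forcing
     * the feature on: */
    if ( paging_domain_hw_np_injection(v->domain) )
    {
        __vmwrite(PAGE_FAULT_ERROR_CODE_MASK, 1);
        __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, 1);
    }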

2. The key point of this patch is to clear the shadow present
bit only when the guest does, which means all unsynced shadow
entries are set to a reserved-bit magic value, to avoid false
injection into the guest. One remaining issue is the 3-on-3 PAE
guest case where the guest doesn't set the present bit in all
4 L3 entries (normally it wouldn't). If that happens, we still
have to allocate dummy pages filled with reserved-magic L2es,
instead of filling the L3 entry directly with the reserved
magic, which would fail the checks at VM entry. That's not done
yet; roughly it would look like the sketch below.
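
(A rough sketch of what I have in mind, not part of the patch
below: sh_install_dummy_l2() is a made-up name, and refcounting,
hashing and teardown are all ignored.)

    /* Back a non-present guest PAE L3 slot with a dummy L2 page whose
     * entries all carry the reserved-bit magic.  The L3e itself then
     * stays architecturally valid at VM entry, while any access through
     * it still faults with RSVD set and exits to Xen for handling. */
    static void sh_install_dummy_l2(struct vcpu *v, shadow_l3e_t *sl3e)
    {
        mfn_t dummy_mfn = shadow_alloc(v->domain, SH_type_l2_shadow, 0);
        shadow_l2e_t *l2t = sh_map_domain_page(dummy_mfn);
        int i;

        /* shadow_alloc() already fills such shadows with the magic
         * pattern for hw-n/p domains; the loop just makes it explicit. */
        for ( i = 0; i < SHADOW_L2_PAGETABLE_ENTRIES; i++ )
            l2t[i] = shadow_l2e_oos(v);
        sh_unmap_domain_page(l2t);

        *sl3e = shadow_l3e_from_mfn(dummy_mfn, _PAGE_PRESENT);
    }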

Thanks,
Kevin
----
vmx: Enable direct hardware n/p fault injection

VT-x can reflect some types of page fault directly into the
guest without triggering a VM exit. At the least we can use
this feature to accelerate guest n/p faults. The key is to
clear the shadow present bit only when the guest entry really
is not present. All other cases are considered out-of-sync
(oos) with the guest, and are therefore set to a special magic
value with a reserved bit set.
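
For reference, the VM-exit routing this relies on, as I read the
SDM (illustrative code only, not part of the patch):

    #include <stdint.h>

    /* Whether a guest page fault causes a VM exit when bit 14 of the
     * exception bitmap is set; 'pfec' is the page-fault error code. */
    static int pf_causes_vmexit(uint32_t pfec, uint32_t mask,
                                uint32_t match)
    {
        return (pfec & mask) == match;
    }

    /* With MASK = MATCH = 1, as programmed in vmcs.c below:
     *   pf_causes_vmexit(0x0, 1, 1) == 0  true n/p fault, delivered
     *                                     directly to the guest;
     *   pf_causes_vmexit(0x9, 1, 1) == 1  P=1 + RSVD=1 (the oos
     *                                     magic), still exits to Xen. */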

Effectively this just replaces fast n/p injection in the same
places, and is likewise controlled by SHOPT_FAST_FAULT_PATH.
The gain is straightforward: the vmexit/vmentry overhead of the
existing fast n/p path is removed (even though that path is
already 'fast'), at negligible cost on the other paths that now
check the shadow flags.

Our tests show no obvious increase in some benchmark scores,
but a 0.9% improvement for PAE KB and 0.7% for PAE windows
sysbench. The gain depends on the frequency of guest n/p faults.

Signed-off-by: Kevin Tian <kevin.tian@xxxxxxxxx>

diff -r ad0f20f5590a xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c       Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/arch/x86/hvm/vmx/vmcs.c       Wed Jan 02 09:26:53 2008 +0800
@@ -512,8 +512,16 @@ static int construct_vmcs(struct vcpu *v
     __vmwrite(CR0_GUEST_HOST_MASK, ~0UL);
     __vmwrite(CR4_GUEST_HOST_MASK, ~0UL);
 
+#if CONFIG_PAGING_LEVELS > 2
+    /* Let guest n/p faults be delivered directly, without a VM exit */
+    __vmwrite(PAGE_FAULT_ERROR_CODE_MASK, 1);
+    __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, 1);
+    if ( !v->vcpu_id )
+        v->domain->arch.paging.hw_np_inject = 1;
+#else
     __vmwrite(PAGE_FAULT_ERROR_CODE_MASK, 0);
     __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, 0);
+#endif
 
     __vmwrite(CR3_TARGET_COUNT, 0);
 
diff -r ad0f20f5590a xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c        Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/arch/x86/hvm/vmx/vmx.c        Wed Jan 02 13:49:20 2008 +0800
@@ -2734,6 +2734,13 @@ asmlinkage void vmx_vmexit_handler(struc
 
     perfc_incra(vmexits, exit_reason);
 
+    /* If hardware plays a partial role in injecting guest page faults
+     * directly, cr2 has to be saved early in case it is clobbered by
+     * xen activity.
+     */
+    if ( paging_domain_hw_np_injection(v->domain) )
+        v->arch.hvm_vcpu.guest_cr[2] = read_cr2();
+
     if ( exit_reason != EXIT_REASON_EXTERNAL_INTERRUPT )
         local_irq_enable();
 
diff -r ad0f20f5590a xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/arch/x86/mm/shadow/common.c   Tue Jan 08 14:36:13 2008 +0800
@@ -982,7 +982,18 @@ mfn_t shadow_alloc(struct domain *d,
         /* Now safe to clear the page for reuse */
         p = sh_map_domain_page(shadow_page_to_mfn(sp+i));
         ASSERT(p != NULL);
-        clear_page(p);
+#if CONFIG_PAGING_LEVELS > 2
+        /* For guests with hardware support for direct n/p injection, the
+         * shadow page should be initialized to a special pattern to ensure
+         * that only validated guest n/p entries cause direct injection.
+         */
+        if ( paging_domain_hw_np_injection(d)
+             && (shadow_type >= SH_type_l1_32_shadow)
+             && (shadow_type <= SH_type_max_shadow) )
+            memset(p, -1, PAGE_SIZE);
+        else
+#endif
+            clear_page(p);
         sh_unmap_domain_page(p);
         INIT_LIST_HEAD(&sp[i].list);
         sp[i].type = shadow_type;
diff -r ad0f20f5590a xen/arch/x86/mm/shadow/multi.c
--- a/xen/arch/x86/mm/shadow/multi.c    Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/arch/x86/mm/shadow/multi.c    Thu Jan 10 15:09:04 2008 +0800
@@ -691,7 +691,7 @@ _sh_propagate(struct vcpu *v,
     /* Check there's something for the shadows to map to */
     if ( !p2m_is_valid(p2mt) )
     {
-        *sp = shadow_l1e_empty();
+        *sp = shadow_l1e_oos(v);
         goto done;
     }
 
@@ -699,12 +699,22 @@ _sh_propagate(struct vcpu *v,
 
     if ( unlikely(!(gflags & _PAGE_PRESENT)) )
     {
-        /* If a guest l1 entry is not present, shadow with the magic 
-         * guest-not-present entry. */
-        if ( level == 1 )
-            *sp = sh_l1e_gnp();
-        else 
-            *sp = shadow_l1e_empty();
+        if ( !paging_domain_hw_np_injection(d) )
+        {
+            /* If a guest l1 entry is not present, shadow with the magic
+             * guest-not-present entry. */
+            if ( level == 1 )
+                *sp = sh_l1e_gnp();
+            else 
+                *sp = shadow_l1e_empty();
+        }
+        /* For HVM with direct n/p injection hw support, just set the
+         * entry empty to activate the effect. But p2m n/p should be
+         * separately handled here
+         */
+        else
+            *sp = shadow_l1e_oos(v);
+
         goto done;
     }
 
@@ -727,7 +737,17 @@ _sh_propagate(struct vcpu *v,
                              || p2mt == p2m_mmio_direct)) )
     {
         ASSERT((ft == ft_prefetch));
-        *sp = shadow_l1e_empty();
+        *sp = shadow_l1e_oos(v);
+        goto done;
+    }
+
+    // If the A or D bit has not yet been set in the guest, then we must
+    // prevent the corresponding kind of access.  No further checks are
+    // needed; just exit early.
+    //
+    if ( unlikely(!(gflags & _PAGE_ACCESSED)) )
+    {
+        *sp = shadow_l1e_oos(v);
         goto done;
     }
 
@@ -774,12 +794,6 @@ _sh_propagate(struct vcpu *v,
     if ( (level > 1) && !((SHADOW_PAGING_LEVELS == 3) && (level == 3)) )
         sflags |= _PAGE_ACCESSED | _PAGE_DIRTY;
 
-    // If the A or D bit has not yet been set in the guest, then we must
-    // prevent the corresponding kind of access.
-    //
-    if ( unlikely(!(gflags & _PAGE_ACCESSED)) )
-        sflags &= ~_PAGE_PRESENT;
-
     /* D bits exist in L1es and PSE L2es */
     if ( unlikely(((level == 1) ||
                    ((level == 2) &&
@@ -818,7 +832,8 @@ _sh_propagate(struct vcpu *v,
             // if we are trapping both reads & writes, then mark this page
             // as not present...
             //
-            sflags &= ~_PAGE_PRESENT;
+            *sp = shadow_l1e_oos(v);
+            goto done;
         }
         else
         {
@@ -1195,13 +1210,12 @@ static int shadow_set_l1e(struct vcpu *v
     struct domain *d = v->domain;
     shadow_l1e_t old_sl1e;
     ASSERT(sl1e != NULL);
-    
+
     old_sl1e = *sl1e;
 
     if ( old_sl1e.l1 == new_sl1e.l1 ) return 0; /* Nothing to do */
     
-    if ( (shadow_l1e_get_flags(new_sl1e) & _PAGE_PRESENT)
-         && !sh_l1e_is_magic(new_sl1e) ) 
+    if ( shadow_l1e_get_flags(new_sl1e) & _PAGE_PRESENT )
     {
         /* About to install a new reference */        
         if ( shadow_mode_refcounts(d) ) {
@@ -1209,7 +1223,7 @@ static int shadow_set_l1e(struct vcpu *v
             {
                 /* Doesn't look like a pagetable. */
                 flags |= SHADOW_SET_ERROR;
-                new_sl1e = shadow_l1e_empty();
+                new_sl1e = shadow_l1e_oos(v);
             }
         }
     } 
@@ -1218,8 +1232,7 @@ static int shadow_set_l1e(struct vcpu *v
     shadow_write_entries(sl1e, &new_sl1e, 1, sl1mfn);
     flags |= SHADOW_SET_CHANGED;
 
-    if ( (shadow_l1e_get_flags(old_sl1e) & _PAGE_PRESENT) 
-         && !sh_l1e_is_magic(old_sl1e) )
+    if ( shadow_l1e_get_flags(old_sl1e) & _PAGE_PRESENT ) 
     {
         /* We lost a reference to an old mfn. */
         /* N.B. Unlike higher-level sets, never need an extra flush 
@@ -2164,8 +2177,7 @@ void sh_destroy_l1_shadow(struct vcpu *v
         /* Decrement refcounts of all the old entries */
         mfn_t sl1mfn = smfn; 
         SHADOW_FOREACH_L1E(sl1mfn, sl1e, 0, 0, {
-            if ( (shadow_l1e_get_flags(*sl1e) & _PAGE_PRESENT)
-                 && !sh_l1e_is_magic(*sl1e) )
+            if ( shadow_l1e_get_flags(*sl1e) & _PAGE_PRESENT )
                 shadow_put_page_from_l1e(*sl1e, d);
         });
     }
@@ -2227,7 +2239,7 @@ void sh_unhook_32b_mappings(struct vcpu 
 {    
     shadow_l2e_t *sl2e;
     SHADOW_FOREACH_L2E(sl2mfn, sl2e, 0, 0, v->domain, {
-        (void) shadow_set_l2e(v, sl2e, shadow_l2e_empty(), sl2mfn);
+        (void) shadow_set_l2e(v, sl2e, shadow_l2e_oos(v), sl2mfn);
     });
 }
 
@@ -2238,7 +2250,7 @@ void sh_unhook_pae_mappings(struct vcpu 
 {
     shadow_l2e_t *sl2e;
     SHADOW_FOREACH_L2E(sl2mfn, sl2e, 0, 0, v->domain, {
-        (void) shadow_set_l2e(v, sl2e, shadow_l2e_empty(), sl2mfn);
+        (void) shadow_set_l2e(v, sl2e, shadow_l2e_oos(v), sl2mfn);
     });
 }
 
@@ -2248,7 +2260,7 @@ void sh_unhook_64b_mappings(struct vcpu 
 {
     shadow_l4e_t *sl4e;
     SHADOW_FOREACH_L4E(sl4mfn, sl4e, 0, 0, v->domain, {
-        (void) shadow_set_l4e(v, sl4e, shadow_l4e_empty(), sl4mfn);
+        (void) shadow_set_l4e(v, sl4e, shadow_l4e_oos(v), sl4mfn);
     });
 }
 
@@ -2654,7 +2666,7 @@ static void sh_prefetch(struct vcpu *v, 
     for ( i = 1; i < dist ; i++ ) 
     {
         /* No point in prefetching if there's already a shadow */
-        if ( ptr_sl1e[i].l1 != 0 )
+        if ( (ptr_sl1e[i].l1 != 0) && !sh_l1e_is_oos(v, ptr_sl1e[i]) )
             break;
 
         if ( mfn_valid(gw->l1mfn) )
@@ -2738,7 +2750,7 @@ static int sh_page_fault(struct vcpu *v,
                                       (sh_linear_l1_table(v) 
                                        + shadow_l1_linear_offset(va)),
                                       sizeof(sl1e)) == 0)
-                    && sh_l1e_is_magic(sl1e)) )
+                    && !sh_l1e_is_oos(v, sl1e) && sh_l1e_is_magic(sl1e)) )
         {
             if ( sh_l1e_is_gnp(sl1e) )
             {
@@ -2765,6 +2777,14 @@ static int sh_page_fault(struct vcpu *v,
             handle_mmio(gpa);
             return EXCRET_fault_fixed;
         }
+        else if ( paging_domain_hw_np_injection(d) )
+        {
+            /* For guests with hardware support for direct n/p fault injection,
+             * the reserved bit may be set due to a higher-level out-of-sync
+             * entry, and further handling is required.
+             */
+            regs->error_code ^= PFEC_reserved_bit | PFEC_page_present;
+        }
         else
         {
             /* This should be exceptionally rare: another vcpu has fixed
@@ -2793,7 +2813,7 @@ static int sh_page_fault(struct vcpu *v,
     
     shadow_audit_tables(v);
     
-    if ( guest_walk_tables(v, va, &gw, regs->error_code, 1) != 0 )
+    if ( guest_walk_tables(v, va, &gw, regs->error_code, 1) != 0)
     {
         perfc_incr(shadow_fault_bail_real_fault);
         goto not_a_shadow_fault;
@@ -3878,7 +3898,7 @@ int sh_rm_mappings_from_l1(struct vcpu *
         if ( (flags & _PAGE_PRESENT) 
              && (mfn_x(shadow_l1e_get_mfn(*sl1e)) == mfn_x(target_mfn)) )
         {
-            (void) shadow_set_l1e(v, sl1e, shadow_l1e_empty(), sl1mfn);
+            (void) shadow_set_l1e(v, sl1e, shadow_l1e_oos(v), sl1mfn);
             if ( (mfn_to_page(target_mfn)->count_info & PGC_count_mask) == 0 )
                 /* This breaks us cleanly out of the FOREACH macro */
                 done = 1;
@@ -3896,17 +3916,17 @@ void sh_clear_shadow_entry(struct vcpu *
     switch ( mfn_to_shadow_page(smfn)->type )
     {
     case SH_type_l1_shadow:
-        (void) shadow_set_l1e(v, ep, shadow_l1e_empty(), smfn); break;
+        (void) shadow_set_l1e(v, ep, shadow_l1e_oos(v), smfn); break;
     case SH_type_l2_shadow:
 #if GUEST_PAGING_LEVELS >= 3
     case SH_type_l2h_shadow:
 #endif
-        (void) shadow_set_l2e(v, ep, shadow_l2e_empty(), smfn); break;
+        (void) shadow_set_l2e(v, ep, shadow_l2e_oos(v), smfn); break;
 #if GUEST_PAGING_LEVELS >= 4
     case SH_type_l3_shadow:
-        (void) shadow_set_l3e(v, ep, shadow_l3e_empty(), smfn); break;
+        (void) shadow_set_l3e(v, ep, shadow_l3e_oos(v), smfn); break;
     case SH_type_l4_shadow:
-        (void) shadow_set_l4e(v, ep, shadow_l4e_empty(), smfn); break;
+        (void) shadow_set_l4e(v, ep, shadow_l4e_oos(v), smfn); break;
 #endif
     default: BUG(); /* Called with the wrong kind of shadow. */
     }
@@ -3925,7 +3945,7 @@ int sh_remove_l1_shadow(struct vcpu *v, 
         if ( (flags & _PAGE_PRESENT) 
              && (mfn_x(shadow_l2e_get_mfn(*sl2e)) == mfn_x(sl1mfn)) )
         {
-            (void) shadow_set_l2e(v, sl2e, shadow_l2e_empty(), sl2mfn);
+            (void) shadow_set_l2e(v, sl2e, shadow_l2e_oos(v), sl2mfn);
             if ( mfn_to_shadow_page(sl1mfn)->type == 0 )
                 /* This breaks us cleanly out of the FOREACH macro */
                 done = 1;
@@ -3948,7 +3968,7 @@ int sh_remove_l2_shadow(struct vcpu *v, 
         if ( (flags & _PAGE_PRESENT) 
              && (mfn_x(shadow_l3e_get_mfn(*sl3e)) == mfn_x(sl2mfn)) )
         {
-            (void) shadow_set_l3e(v, sl3e, shadow_l3e_empty(), sl3mfn);
+            (void) shadow_set_l3e(v, sl3e, shadow_l3e_oos(v), sl3mfn);
             if ( mfn_to_shadow_page(sl2mfn)->type == 0 )
                 /* This breaks us cleanly out of the FOREACH macro */
                 done = 1;
@@ -3970,7 +3990,7 @@ int sh_remove_l3_shadow(struct vcpu *v, 
         if ( (flags & _PAGE_PRESENT) 
              && (mfn_x(shadow_l4e_get_mfn(*sl4e)) == mfn_x(sl3mfn)) )
         {
-            (void) shadow_set_l4e(v, sl4e, shadow_l4e_empty(), sl4mfn);
+            (void) shadow_set_l4e(v, sl4e, shadow_l4e_oos(v), sl4mfn);
             if ( mfn_to_shadow_page(sl3mfn)->type == 0 )
                 /* This breaks us cleanly out of the FOREACH macro */
                 done = 1;
@@ -4418,8 +4438,7 @@ int sh_audit_fl1_table(struct vcpu *v, m
         if ( !(f == 0 
                || f == (_PAGE_PRESENT|_PAGE_USER|_PAGE_RW|
                         _PAGE_ACCESSED|_PAGE_DIRTY) 
-               || f == (_PAGE_PRESENT|_PAGE_USER|_PAGE_ACCESSED|_PAGE_DIRTY)
-               || sh_l1e_is_magic(*sl1e)) )
+               || f == (_PAGE_PRESENT|_PAGE_USER|_PAGE_ACCESSED|_PAGE_DIRTY) )
             AUDIT_FAIL(1, "fl1e has bad flags");
     });
     return 0;
diff -r ad0f20f5590a xen/arch/x86/mm/shadow/types.h
--- a/xen/arch/x86/mm/shadow/types.h    Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/arch/x86/mm/shadow/types.h    Wed Jan 02 09:32:13 2008 +0800
@@ -119,19 +119,6 @@ static inline mfn_t shadow_l3e_get_mfn(s
 #if SHADOW_PAGING_LEVELS >= 4
 static inline mfn_t shadow_l4e_get_mfn(shadow_l4e_t sl4e)
 { return _mfn(l4e_get_pfn(sl4e)); }
-#endif
-#endif
-
-static inline u32 shadow_l1e_get_flags(shadow_l1e_t sl1e)
-{ return l1e_get_flags(sl1e); }
-static inline u32 shadow_l2e_get_flags(shadow_l2e_t sl2e)
-{ return l2e_get_flags(sl2e); }
-#if SHADOW_PAGING_LEVELS >= 3
-static inline u32 shadow_l3e_get_flags(shadow_l3e_t sl3e)
-{ return l3e_get_flags(sl3e); }
-#if SHADOW_PAGING_LEVELS >= 4
-static inline u32 shadow_l4e_get_flags(shadow_l4e_t sl4e)
-{ return l4e_get_flags(sl4e); }
 #endif
 #endif
 
@@ -537,6 +524,10 @@ struct shadow_walk_t
  * them without needing to hold the shadow lock or walk the guest
  * pagetables.
  *
+ * For guests with hardware support for direct n/p fault injection, no need
+ * to record a fast magic.  Instead we steal this pattern for out-of-sync
+ * shadow entries, to avoid false injection by hardware.
+ *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
  * have reserved bits that we can use for this.
  */
@@ -545,6 +536,57 @@ static inline int sh_l1e_is_magic(shadow
 static inline int sh_l1e_is_magic(shadow_l1e_t sl1e)
 {
     return ((sl1e.l1 & SH_L1E_MAGIC) == SH_L1E_MAGIC);
+}
+
+/* Magic number can be put in all levels of out-of-sync shadow entries */
+static inline int sh_l2e_is_magic(shadow_l2e_t sl2e)
+{
+    return ((sl2e.l2 & SH_L1E_MAGIC) == SH_L1E_MAGIC);
+}
+
+static inline int sh_l3e_is_magic(shadow_l3e_t sl3e)
+{
+    return ((sl3e.l3 & SH_L1E_MAGIC) == SH_L1E_MAGIC);
+}
+
+#if SHADOW_PAGING_LEVELS >= 4
+static inline int sh_l4e_is_magic(shadow_l4e_t sl4e)
+{
+    return ((sl4e.l4 & SH_L1E_MAGIC) == SH_L1E_MAGIC);
+}
+#endif
+
+/* Special interface for out-of-sync entry, same as fast gnp magic */
+static inline shadow_l1e_t shadow_l1e_oos(struct vcpu *v) 
+{
+    return paging_domain_hw_np_injection(v->domain)
+           ? (l1_pgentry_t) { -1ULL } : shadow_l1e_empty();
+}
+
+static inline shadow_l2e_t shadow_l2e_oos(struct vcpu *v) 
+{
+    return paging_domain_hw_np_injection(v->domain)
+           ? (l2_pgentry_t) { -1ULL } : shadow_l2e_empty();
+}
+
+static inline shadow_l3e_t shadow_l3e_oos(struct vcpu *v) 
+{
+    return paging_domain_hw_np_injection(v->domain)
+           ? (l3_pgentry_t) { -1ULL } : shadow_l3e_empty();
+}
+
+#if SHADOW_PAGING_LEVELS >= 4
+static inline shadow_l4e_t shadow_l4e_oos(struct vcpu *v)
+{
+    return paging_domain_hw_np_injection(v->domain)
+           ? (l4_pgentry_t) { -1ULL } : shadow_l4e_empty();
+}
+#endif
+
+static inline int sh_l1e_is_oos(struct vcpu *v, shadow_l1e_t sl1e)
+{
+    return paging_domain_hw_np_injection(v->domain)
+           ? (sl1e.l1 == -1ULL) : 0;
 }
 
 /* Guest not present: a single magic value */
@@ -594,9 +636,39 @@ static inline u32 sh_l1e_mmio_get_flags(
 #define sh_l1e_gnp() shadow_l1e_empty()
 #define sh_l1e_mmio(_gfn, _flags) shadow_l1e_empty()
 #define sh_l1e_is_magic(_e) (0)
+#define sh_l2e_is_magic(_e) (0)
+#if SHADOW_PAGING_LEVELS >= 3
+#define sh_l3e_is_magic(_e) (0)
+#if SHADOW_PAGING_LEVELS >= 4
+#define sh_l4e_is_magic(_e) (0)
+#endif
+#endif
+
+#define shadow_l1e_oos(v) shadow_l1e_empty()
+#define shadow_l2e_oos(v) shadow_l2e_empty()
+#if SHADOW_PAGING_LEVELS >= 3
+#define shadow_l3e_oos(v) shadow_l3e_empty()
+#if SHADOW_PAGING_LEVELS >= 4
+#define shadow_l4e_oos(v) shadow_l4e_empty()
+#endif
+#endif
+#define sh_l1e_is_oos(v, _e) (0)
 
 #endif /* SHOPT_FAST_FAULT_PATH */
 
+/* Return empty flags for entries that carry a magic pattern */
+static inline u32 shadow_l1e_get_flags(shadow_l1e_t sl1e)
+{ return sh_l1e_is_magic(sl1e) ? 0 : l1e_get_flags(sl1e); }
+static inline u32 shadow_l2e_get_flags(shadow_l2e_t sl2e)
+{ return sh_l2e_is_magic(sl2e) ? 0 : l2e_get_flags(sl2e); }
+#if SHADOW_PAGING_LEVELS >= 3
+static inline u32 shadow_l3e_get_flags(shadow_l3e_t sl3e)
+{ return sh_l3e_is_magic(sl3e) ? 0 : l3e_get_flags(sl3e); }
+#if SHADOW_PAGING_LEVELS >= 4
+static inline u32 shadow_l4e_get_flags(shadow_l4e_t sl4e)
+{ return sh_l4e_is_magic(sl4e) ? 0 : l4e_get_flags(sl4e); }
+#endif
+#endif
 
 #endif /* _XEN_SHADOW_TYPES_H */
 
diff -r ad0f20f5590a xen/include/asm-x86/domain.h
--- a/xen/include/asm-x86/domain.h      Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/include/asm-x86/domain.h      Thu Jan 10 15:07:47 2008 +0800
@@ -176,6 +176,8 @@ struct paging_domain {
 struct paging_domain {
     /* flags to control paging operation */
     u32                     mode;
+    /* HVM guest: enable direct hardware n/p fault injection */
+    unsigned int hw_np_inject:1;
     /* extension for shadow paging support */
     struct shadow_domain    shadow;
     /* extension for hardware-assited paging */
diff -r ad0f20f5590a xen/include/asm-x86/paging.h
--- a/xen/include/asm-x86/paging.h      Fri Dec 28 15:44:51 2007 +0000
+++ b/xen/include/asm-x86/paging.h      Wed Jan 02 13:36:30 2008 +0800
@@ -65,6 +65,17 @@
 
 /* flags used for paging debug */
 #define PAGING_DEBUG_LOGDIRTY 0
+
+#if CONFIG_PAGING_LEVELS > 2
+/* The domain is enabled with direct guest n/p fault reflection, without
+ * triggering a VM exit.  In this mode, unsynced shadow entries need to be
+ * filled with a special pattern, to avoid unexpected guest n/p fault
+ * injection by the shadow code.
+ */
+#define paging_domain_hw_np_injection(_d) ((_d)->arch.paging.hw_np_inject)
+#else
+#define paging_domain_hw_np_injection(_d) (0)
+#endif
 
 
 /*****************************************************************************
  * Mode-specific entry points into the shadow code.  

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
