
[Xen-devel] [PATCH RFC 11/44] x86/pt-shadow: Always set _PAGE_ACCESSED on L4e updates



Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 xen/arch/x86/pv/mm.h | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/pv/mm.h b/xen/arch/x86/pv/mm.h
index 7502d53..a10b09a 100644
--- a/xen/arch/x86/pv/mm.h
+++ b/xen/arch/x86/pv/mm.h
@@ -144,9 +144,22 @@ static inline l3_pgentry_t unadjust_guest_l3e(l3_pgentry_t l3e,
 static inline l4_pgentry_t adjust_guest_l4e(l4_pgentry_t l4e,
                                             const struct domain *d)
 {
-    if ( likely(l4e_get_flags(l4e) & _PAGE_PRESENT) &&
-         likely(!is_pv_32bit_domain(d)) )
-        l4e_add_flags(l4e, _PAGE_USER);
+    /*
+     * When shadowing an L4 for per-pcpu purposes, we cannot efficiently sync
+     * access bit updates from hardware (on the shadow tables) back into the
+     * guest view.  We therefore always set _PAGE_ACCESSED in the guest's
+     * view as well.
+     *
+     * This will appear to the guest as a CPU which proactively pulls all
+     * valid L4es into its TLB, which is compatible with the x86 architecture.
+     *
+     * Furthermore, at the time of writing, all PV guests I can locate set
+     * the access bit themselves anyway, so this is not an actual change in
+     * their behaviour.
+     */
+    if ( likely(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+        l4e_add_flags(l4e, (_PAGE_ACCESSED |
+                            (is_pv_32bit_domain(d) ? 0 : _PAGE_USER)));
 
     return l4e;
 }
-- 
2.1.4
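
For readers without the Xen tree to hand, a minimal standalone sketch of the
adjusted logic follows.  The simplified l4_pgentry_t, accessors and main()
below are assumptions standing in for Xen's real definitions (the flag values
do match the architectural x86 PTE bits):

/*
 * Minimal standalone sketch, for illustration only.  The simplified
 * l4_pgentry_t type and accessors are assumptions; the real definitions
 * live in Xen's page handling headers.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT  0x001u  /* Architectural x86 PTE bit 0 */
#define _PAGE_USER     0x004u  /* Architectural x86 PTE bit 2 */
#define _PAGE_ACCESSED 0x020u  /* Architectural x86 PTE bit 5 */

typedef struct { uint64_t l4; } l4_pgentry_t;

static uint64_t l4e_get_flags(l4_pgentry_t e) { return e.l4 & 0xfffu; }
static void l4e_add_flags(l4_pgentry_t *e, uint64_t flags) { e->l4 |= flags; }

/*
 * Mirrors the patched adjust_guest_l4e(): every present L4e gains
 * _PAGE_ACCESSED, and 64-bit PV guests additionally gain _PAGE_USER.
 */
static l4_pgentry_t adjust_guest_l4e(l4_pgentry_t l4e, bool is_pv_32bit)
{
    if ( l4e_get_flags(l4e) & _PAGE_PRESENT )
        l4e_add_flags(&l4e, _PAGE_ACCESSED | (is_pv_32bit ? 0 : _PAGE_USER));

    return l4e;
}

int main(void)
{
    l4_pgentry_t e = { _PAGE_PRESENT };

    e = adjust_guest_l4e(e, /* is_pv_32bit */ false);
    printf("adjusted flags: %#" PRIx64 "\n", l4e_get_flags(e));

    return 0;
}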

