
To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] x86 shadow: Fix lock-less race between resync and fast path.
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 06 Jul 2009 05:45:37 -0700
Delivery-date: Mon, 06 Jul 2009 05:46:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1246877396 -3600
# Node ID 3a5d8601293c95f7d783fef3a5f8282fd68603f2
# Parent  d33a665b2c05b0785637330074c285034571faf1
x86 shadow: Fix lock-less race between resync and fast path.

Signed-off-by: Gianluca Guida <gianluca.guida@xxxxxxxxxxxxx>
---
 xen/arch/x86/mm/shadow/multi.c |   48 ++++++++++++++++++++---------------------
 1 files changed, 24 insertions(+), 24 deletions(-)

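For readers skimming the patch: the change hoists the out-of-sync (OOS) check above the magic-entry check in the reserved-bit fast path, so an l1 that a lock-less resync might be rewriting is routed to the slow path before its shadow entry is trusted. The sketch below is a minimal standalone illustration of that control-flow reordering only, not the Xen source; l1_is_out_of_sync() and sl1e_is_magic() are hypothetical stand-ins for the helpers used in the hunks that follow.

#include <stdbool.h>
#include <stdio.h>

static bool l1_is_out_of_sync(void) { return true; }  /* pretend a resync raced us */
static bool sl1e_is_magic(void)     { return true; }  /* entry still looks "magic" */

/* Old shape: the OOS check sat inside the magic-entry branch, so the
 * lock-less fast path examined sl1e before asking whether the l1
 * could be resynced under its feet. */
static void fast_path_old(void)
{
    if ( sl1e_is_magic() )
    {
        if ( l1_is_out_of_sync() )
        {
            puts("old: slow path (OOS checked too late)");
            return;
        }
        puts("old: handled on fast path");
    }
}

/* New shape: any potentially out-of-sync l1 falls back to the slow
 * path before the magic entry is examined at all. */
static void fast_path_new(void)
{
    if ( l1_is_out_of_sync() )
    {
        puts("new: slow path (OOS checked first)");
        return;
    }
    if ( sl1e_is_magic() )
        puts("new: handled on fast path");
}

int main(void)
{
    fast_path_old();
    fast_path_new();
    return 0;
}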
diff -r d33a665b2c05 -r 3a5d8601293c xen/arch/x86/mm/shadow/multi.c
--- a/xen/arch/x86/mm/shadow/multi.c    Mon Jul 06 11:48:44 2009 +0100
+++ b/xen/arch/x86/mm/shadow/multi.c    Mon Jul 06 11:49:56 2009 +0100
@@ -2975,6 +2975,30 @@ static int sh_page_fault(struct vcpu *v,
 #if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
     if ( (regs->error_code & PFEC_reserved_bit) )
     {
+#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) 
+        /* First, need to check that this isn't an out-of-sync
+         * shadow l1e.  If it is, we fall back to the slow path, which
+         * will sync it up again. */
+        {
+            shadow_l2e_t sl2e;
+            mfn_t gl1mfn;
+            if ( (__copy_from_user(&sl2e,
+                                   (sh_linear_l2_table(v)
+                                    + shadow_l2_linear_offset(va)),
+                                   sizeof(sl2e)) != 0)
+                 || !(shadow_l2e_get_flags(sl2e) & _PAGE_PRESENT)
+                 || !mfn_valid(gl1mfn = _mfn(mfn_to_page(
+                                  shadow_l2e_get_mfn(sl2e))->v.sh.back))
+                 || unlikely(mfn_is_out_of_sync(gl1mfn)) )
+            {
+                /* Hit the slow path as if there had been no 
+                 * shadow entry at all, and let it tidy up */
+                ASSERT(regs->error_code & PFEC_page_present);
+                regs->error_code ^= (PFEC_reserved_bit|PFEC_page_present);
+                goto page_fault_slow_path;
+            }
+        }
+#endif /* SHOPT_OUT_OF_SYNC */
         /* The only reasons for reserved bits to be set in shadow entries 
          * are the two "magic" shadow_l1e entries. */
         if ( likely((__copy_from_user(&sl1e, 
@@ -2983,30 +3007,6 @@ static int sh_page_fault(struct vcpu *v,
                                       sizeof(sl1e)) == 0)
                     && sh_l1e_is_magic(sl1e)) )
         {
-#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) 
-             /* First, need to check that this isn't an out-of-sync
-              * shadow l1e.  If it is, we fall back to the slow path, which
-              * will sync it up again. */
-            {
-                shadow_l2e_t sl2e;
-                mfn_t gl1mfn;
-               if ( (__copy_from_user(&sl2e,
-                                       (sh_linear_l2_table(v)
-                                        + shadow_l2_linear_offset(va)),
-                                       sizeof(sl2e)) != 0)
-                     || !(shadow_l2e_get_flags(sl2e) & _PAGE_PRESENT)
-                     || !mfn_valid(gl1mfn = _mfn(mfn_to_page(
-                                      shadow_l2e_get_mfn(sl2e))->v.sh.back))
-                     || unlikely(mfn_is_out_of_sync(gl1mfn)) )
-               {
-                   /* Hit the slow path as if there had been no 
-                    * shadow entry at all, and let it tidy up */
-                   ASSERT(regs->error_code & PFEC_page_present);
-                   regs->error_code ^= (PFEC_reserved_bit|PFEC_page_present);
-                   goto page_fault_slow_path;
-               }
-            }
-#endif /* SHOPT_OUT_OF_SYNC */
 
             if ( sh_l1e_is_gnp(sl1e) )
             {

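One detail worth calling out from the moved hunk: the slow-path handoff rewrites regs->error_code with a single XOR. The branch is only entered with PFEC_reserved_bit set, and the ASSERT pins PFEC_page_present, so XORing with both masks clears both bits and the slow path sees what looks like an ordinary not-present fault. A minimal standalone sketch of just that bit manipulation, using the architectural x86 page-fault error-code bit positions (present = bit 0, reserved-bit = bit 3); other PFEC bits such as write/user would pass through the XOR untouched:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Architectural x86 page-fault error-code bits. */
#define PFEC_page_present  (1u << 0)
#define PFEC_reserved_bit  (1u << 3)

int main(void)
{
    /* State on entry to the fast-path branch: both bits known set. */
    uint32_t error_code = PFEC_page_present | PFEC_reserved_bit;

    assert(error_code & PFEC_page_present);   /* mirrors the ASSERT() above */
    error_code ^= (PFEC_reserved_bit | PFEC_page_present);

    /* XOR of known-set bits clears them: the fault now presents as a
     * plain not-present fault to the slow path. */
    printf("error_code = %#x\n", error_code);  /* prints 0 */
    return 0;
}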
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
