WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-changelog

[Xen-changelog] [xen-unstable] x86: rmb() can be weakened according to new Intel spec.

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] x86: rmb() can be weakened according to new Intel spec.
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Thu, 22 Nov 2007 12:00:46 -0800
Delivery-date: Thu, 22 Nov 2007 12:03:37 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1195655767 0
# Node ID 05cbf512b82b2665d407395bac73b9cca0c396b4
# Parent  7ccf7d373d0e98014525eeaed8c0bf3623646ae8
x86: rmb() can be weakened according to new Intel spec.

Both Intel and AMD agree that, from a programmer's viewpoint:
 Loads cannot be reordered relative to other loads.
 Stores cannot be reordered relative to other stores.

Intel64 Architecture Memory Ordering White Paper
<http://developer.intel.com/products/processor/manuals/318147.pdf>

AMD64 Architecture Programmer's Manual, Volume 2: System Programming
<http://www.amd.com/us-en/assets/content_type/\
 white_papers_and_tech_docs/24593.pdf>

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
---
 xen/include/asm-x86/system.h        |   15 +++++++++++++++
 xen/include/asm-x86/x86_32/system.h |    5 ++---
 xen/include/asm-x86/x86_64/system.h |    5 ++---
 3 files changed, 19 insertions(+), 6 deletions(-)

diff -r 7ccf7d373d0e -r 05cbf512b82b xen/include/asm-x86/system.h
--- a/xen/include/asm-x86/system.h      Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/system.h      Wed Nov 21 14:36:07 2007 +0000
@@ -135,6 +135,21 @@ static always_inline unsigned long __cmp
 
 #define __HAVE_ARCH_CMPXCHG
 
+/*
+ * Both Intel and AMD agree that, from a programmer's viewpoint:
+ *  Loads cannot be reordered relative to other loads.
+ *  Stores cannot be reordered relative to other stores.
+ * 
+ * Intel64 Architecture Memory Ordering White Paper
+ * <http://developer.intel.com/products/processor/manuals/318147.pdf>
+ * 
+ * AMD64 Architecture Programmer's Manual, Volume 2: System Programming
+ * <http://www.amd.com/us-en/assets/content_type/\
+ *  white_papers_and_tech_docs/24593.pdf>
+ */
+#define rmb()           barrier()
+#define wmb()           barrier()
+
 #ifdef CONFIG_SMP
 #define smp_mb()        mb()
 #define smp_rmb()       rmb()
diff -r 7ccf7d373d0e -r 05cbf512b82b xen/include/asm-x86/x86_32/system.h
--- a/xen/include/asm-x86/x86_32/system.h       Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/x86_32/system.h       Wed Nov 21 14:36:07 2007 +0000
@@ -98,9 +98,8 @@ static inline void atomic_write64(uint64
         w = x;
 }
 
-#define mb()    asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
-#define rmb()   asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
-#define wmb()   asm volatile ( "" : : : "memory" )
+#define mb()                    \
+    asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
 
 #define __save_flags(x)         \
     asm volatile ( "pushfl ; popl %0" : "=g" (x) : )
diff -r 7ccf7d373d0e -r 05cbf512b82b xen/include/asm-x86/x86_64/system.h
--- a/xen/include/asm-x86/x86_64/system.h       Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/x86_64/system.h       Wed Nov 21 14:36:07 2007 +0000
@@ -52,9 +52,8 @@ static inline void atomic_write64(uint64
     *p = v;
 }
 
-#define mb()    asm volatile ( "mfence" : : : "memory" )
-#define rmb()   asm volatile ( "lfence" : : : "memory" )
-#define wmb()   asm volatile ( "" : : : "memory" )
+#define mb()                    \
+    asm volatile ( "mfence" : : : "memory" )
 
 #define __save_flags(x)         \
     asm volatile ( "pushfq ; popq %q0" : "=g" (x) : :"memory" )

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
