
Re: [Xen-devel] [PATCH 13/45] xen: arm64: dcache flush



On Thu, 2013-02-07 at 16:33 +0000, Ian Campbell wrote:
> On Thu, 2013-02-07 at 16:30 +0000, Tim Deegan wrote:
> > At 16:22 +0000 on 07 Feb (1360254170), Ian Campbell wrote:
> > > On Thu, 2013-02-07 at 16:14 +0000, Tim Deegan wrote:
> > > > At 15:56 +0000 on 23 Jan (1358956579), Ian Campbell wrote:
> > > > > This drops the
> > > > >     "m" (*_p)
> > > > > asm contraint. I can't figure out what this was for.
> > > > > 
> > > > 
> > > > It's to stop the compiler hoisting writes to the wrong side of the
> > > > flush.  It's the lighter-weight equivalent to the "memory" clobber in
> > > > the range version.  By moving from inline "dsb;" to dsb() you've
> > > > reintroduced the full memory clobber here anyway. :(
> > > 
> > > Ah, I forgot about the "memory" on the inline asm of the dsb macro.
> > > 
> > > How about #define _dsb() __asm__ __volatile__ ("dsb" : : :)
> > > for cases such as this?
> > 
> > Maybe.  The profusion of underscore-prefixed operations is starting to
> > confuse me, though.
> 
> Ack.
> 
> > > I think on 32-bit "dsb" == "dsb sy", while on 64-bit only the "dsb sy"
> > > form is valid. The reason I didn't err on the side of just adding the
> > > "sy" was that often there is something else which differs between the
> > > two in the same asm inline block and so using the macro throughout
> > > seemed cleaner. Perhaps I should rethink this.
> > 
> > I see.  So maybe we could just switch to using 'dsb sy' in
> > flush_xen_dcache() and just macro up the one line of inline asm that
> > expands to STORE_CP32(0, DCCMVAC) ?
> 
> Actually, yes, subsequent to making this change I invented
> READ/WRITE_SYSREG to abstract this stuff away -- I should extend that to
> inline asm too.

Actually, sysreg is not a useful concept here because cache flushes
under arm64 are actual instructions (at least mnemonically in the
assembler). So "STORE_CP32(0, DCCMVAC)" on 32-bit becomes "dc cvac, %0;"
on 64-bit.
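
(For reference, this is roughly what the single-line flush expands to on each
architecture -- the CP15 encoding is from memory, so treat it as a sketch
rather than the exact Xen definitions:)

    /* arm32: DCCMVAC is a CP15 operation (clean D-cache line by MVA
     * to PoC), so STORE_CP32 expands to an mcr */
    asm volatile ("mcr p15, 0, %0, c7, c10, 1" : : "r" (p));

    /* arm64: the same operation is an actual instruction */
    asm volatile ("dc cvac, %0" : : "r" (p));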

I came up with the below (which I think is basically what you suggested).

Ian.

8<---------------------------------

From 7ea401f56956261ac6af22d70f61b2c059280a4e Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date: Mon, 10 Dec 2012 16:08:59 +0000
Subject: [PATCH] xen: arm64: dcache flush

Use "dsb sy" instead of bare "dsb", they mean the same on 32-bit but only the
former is valid on 64-bit.

Abstract the actual flush operation into a macro.

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
---
v2: revert to inline asm
---
 xen/include/asm-arm/arm32/page.h |    3 +++
 xen/include/asm-arm/arm64/page.h |    3 +++
 xen/include/asm-arm/page.h       |    8 ++++----
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 96b5798..45d25ee 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -23,6 +23,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
         : : "r" (pte.bits), "r" (p) : "memory");
 }
 
+/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
+#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+
 /*
  * Flush all hypervisor mappings from the TLB and branch predictor.
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index ce5be63..1d7c70e 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -18,6 +18,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
         : : "r" (pte.bits), "r" (p) : "memory");
 }
 
+/* Inline ASM to flush dcache on register R (may be an inline asm operand) */
+#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+
 /*
  * Flush all hypervisor mappings from the TLB
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 4e245a9..b89238b 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -251,7 +251,7 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        WRITE_CP32((uint32_t) p, DCCMVAC);
+        asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }
 
@@ -264,9 +264,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
         flush_xen_dcache_va_range(_p, sizeof(x));                       \
     else                                                                \
         asm volatile (                                                  \
-            "dsb;"   /* Finish all earlier writes */                    \
-            STORE_CP32(0, DCCMVAC)                                      \
-            "dsb;"   /* Finish flush before continuing */               \
+            "dsb sy;"   /* Finish all earlier writes */                 \
+            __flush_xen_dcache_one(0)                                   \
+            "dsb sy;"   /* Finish flush before continuing */            \
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
-- 
1.7.2.5
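
For completeness, this is roughly how the macro ends up being used on the
write_pte() path (the surrounding code is illustrative rather than lifted
from the tree):

    lpae_t pte = /* build the new entry */;

    write_pte(entry, pte);      /* store the PTE */
    flush_xen_dcache(*entry);   /* the "m" (*entry) constraint stops the
                                 * compiler hoisting the store past the
                                 * flush, as Tim described above */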



