Re: [Xen-devel] [PATCH 03/10] xen: arm: reduce instruction cache and tlb flushes to inner-shareable.
At 17:10 +0100 on 28 Jun (1372439449), Ian Campbell wrote:
> Now that Xen maps memory and performs pagetable walks as inner shareable, we
> don't need to push updates down so far when modifying page tables etc.
>
> Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -39,8 +39,8 @@ static inline void flush_xen_text_tlb(void)
> asm volatile (
> "isb;" /* Ensure synchronization with
> previous changes to text */
> STORE_CP32(0, TLBIALLH) /* Flush hypervisor TLB */
> - STORE_CP32(0, ICIALLU) /* Flush I-cache */
> - STORE_CP32(0, BPIALL) /* Flush branch predictor */
> + STORE_CP32(0, ICIALLUIS) /* Flush I-cache */
> + STORE_CP32(0, BPIALLIS) /* Flush branch predictor */
> "dsb;" /* Ensure completion of TLB+BP flush */
> "isb;"
> : : "r" (r0) /*dummy*/ : "memory");
> @@ -54,7 +54,7 @@ static inline void flush_xen_data_tlb(void)
> {
> register unsigned long r0 asm ("r0");
> asm volatile("dsb;" /* Ensure preceding are visible */
> - STORE_CP32(0, TLBIALLH)
> + STORE_CP32(0, TLBIALLHIS)
> "dsb;" /* Ensure completion of the TLB flush */
> "isb;"
> : : "r" (r0) /* dummy */: "memory");
> @@ -69,7 +69,7 @@ static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long s
> unsigned long end = va + size;
> dsb(); /* Ensure preceding are visible */
> while ( va < end ) {
> - asm volatile(STORE_CP32(0, TLBIMVAH)
> + asm volatile(STORE_CP32(0, TLBIMVAHIS)
> : : "r" (va) : "memory");
> va += PAGE_SIZE;
> }
That's OK for actual Xen data mappings, map_domain_page() &c., but now
set_fixmap() and clear_fixmap() need to use a stronger flush whenever
they map device memory. The same goes for create_xen_entries() when
ai != WRITEALLOC.
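Something like the following might be enough for those callers -- a
sketch for discussion only, reusing the STORE_CP32/TLBIMVAHIS macros
from the hunk above; the name flush_xen_data_tlb_range_va_sy and the
exact barrier placement are illustrative, not a concrete proposal:

    /* Sketch: flush a VA range but keep full-system ("sy") barriers,
     * for callers changing mappings of device or non-WRITEALLOC
     * memory (set_fixmap(), clear_fixmap(), create_xen_entries()
     * with ai != WRITEALLOC).  Hypothetical name. */
    static inline void flush_xen_data_tlb_range_va_sy(unsigned long va,
                                                      unsigned long size)
    {
        unsigned long end = va + size;
        asm volatile("dsb sy;" : : : "memory"); /* preceding PT writes
                                                   visible system-wide */
        while ( va < end )
        {
            asm volatile(STORE_CP32(0, TLBIMVAHIS)
                         : : "r" (va) : "memory");
            va += PAGE_SIZE;
        }
        asm volatile("dsb sy;" /* completion visible system-wide */
                     "isb;"
                     : : : "memory");
    }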
> diff --git a/xen/include/asm-arm/arm64/flushtlb.h
> b/xen/include/asm-arm/arm64/flushtlb.h
> index d0535a0..3a6d2cb 100644
> --- a/xen/include/asm-arm/arm64/flushtlb.h
> +++ b/xen/include/asm-arm/arm64/flushtlb.h
> @@ -6,7 +6,7 @@ static inline void flush_tlb_local(void)
> {
> asm volatile(
> "dsb sy;"
> - "tlbi vmalle1;"
> + "tlbi vmalle1is;"
> "dsb sy;"
> "isb;"
> : : : "memory");
> @@ -17,7 +17,7 @@ static inline void flush_tlb_all_local(void)
> {
> asm volatile(
> "dsb sy;"
> - "tlbi alle1;"
> + "tlbi alle1is;"
> "dsb sy;"
> "isb;"
> : : : "memory");
Might these need to be stronger if we're using them on context switch
and guests have MMIO/outer-shareable mappings?
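FWIW, the scopes of the two forms as I understand them (again a sketch
for discussion only; the function name is illustrative):

    /* tlbi vmalle1   - stage-1 entries for the current VMID, this
     *                  PE only.
     * tlbi vmalle1is - the same, broadcast to every PE in this PE's
     *                  Inner Shareable domain.
     * Neither form is scoped by the shareability attributes of the
     * memory that was mapped; the IS suffix only selects which PEs'
     * TLBs are invalidated. */
    static inline void flush_guest_tlb_is(void) /* hypothetical name */
    {
        asm volatile(
            "dsb sy;"         /* page-table updates visible system-wide */
            "tlbi vmalle1is;"
            "dsb sy;"         /* wait for the invalidate to complete */
            "isb;"
            : : : "memory");
    }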
Tim.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel