
Re: [Xen-devel] [PATCH] x86: don't use VA for cache flush when also flushing TLB



On 26/05/2014 11:17, Jan Beulich wrote:
Doing both flushes at once is a strong indication that the address
mapping either got dropped (in which case the cache flush, when done
via INVLPG, would fault) or had its physical address changed (in which
case the cache flush would end up being done on the wrong address
range). There is no adverse effect (other than the obvious performance
one) of using WBINVD in this case regardless of the range's size; only
map_pages_to_xen() uses combined flushes at present.

This problem was observed with the 2nd try backport of d6cb14b3 ("VT-d:
suppress UR signaling for desktop chipsets") to 4.2 (where ioremap()
needs to be replaced with set_fixmap_nocache(); the now commented out
__set_fixmap(, 0, 0) there to undo the mapping resulted in the first of
the above two scenarios).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Overall, I agree with your final assessment, and it might indeed be the easiest way, so on that basis, Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

It would be nice to avoid the performance hit if possible.

The fault case can be fixed by reordering the cache flush with respect to the TLB flush inside flush_area_local().  For callers performing both a TLB shootdown and a cache flush, it is far more likely that the VAs wanting flushing are still under the old TLB mappings rather than the new ones.

This does come with a risk that the old TLB entry has already fallen out of the TLB before the flush happens, at which point we are back to the risk of a fault or of flushing the wrong physical area.

The only way I can see to safely flush both the caches and the TLB for a given change is to flush the cache by VA, then update the PTE, then shoot down the TLB entry.  Despite this being awkward, I think it is preferable to falling back to wbinvd() to cover up bad behaviour in the callers.

~Andrew


--- a/xen/arch/x86/flushtlb.c
+++ b/xen/arch/x86/flushtlb.c
@@ -152,7 +152,8 @@ void flush_area_local(const void *va, un
         if ( order < (BITS_PER_LONG - PAGE_SHIFT) )
             sz = 1UL << (order + PAGE_SHIFT);
 
-        if ( c->x86_clflush_size && c->x86_cache_size && sz &&
+        if ( !(flags & (FLUSH_TLB|FLUSH_TLB_GLOBAL)) &&
+             c->x86_clflush_size && c->x86_cache_size && sz &&
              ((sz >> 10) < c->x86_cache_size) )
         {
             va = (const void *)((unsigned long)va & ~(sz - 1));





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

