
Re: [Xen-devel] [PATCH v6 6/6] x86/tlb: use Xen L0 assisted TLB flush when available



On 03.03.2020 18:20, Roger Pau Monné wrote:
> Use Xen's L0 HVMOP_flush_tlbs hypercall to perform TLB flushes.
> This greatly increases the performance of TLB flushes when running
> with a large number of vCPUs as a Xen guest, and is especially
> important when running in shim mode.
> 
> The following figures are from a PV guest running `make -j32 xen` in
> shim mode with 32 vCPUs and HAP.
> 
> Using x2APIC and ALLBUT shorthand:
> real    4m35.973s
> user    4m35.110s
> sys     36m24.117s
> 
> Using L0 assisted flush:
> real    1m2.596s
> user    4m34.818s
> sys     5m16.374s
> 
> The implementation adds a new hook to hypervisor_ops so that other
> enlightenments can also provide such an assisted flush simply by
> filling in the hook.
> 
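For context, a minimal sketch of what the hook addition could look
like (the field name, parameters, and surrounding members here are
assumptions for illustration, not necessarily what the patch uses):

    /* Sketch only: names and signature are assumed. */
    struct hypervisor_ops {
        /* Name of the detected hypervisor. */
        const char *name;
        /* Main setup routine. */
        void (*setup)(void);
        /* ... other existing hooks ... */
        /*
         * L0-assisted TLB flush: return 0 if the flush was carried
         * out by the underlying hypervisor, an error code if the
         * caller must fall back to the IPI-based flush path.
         */
        int (*flush_tlb)(const cpumask_t *mask, const void *va,
                         unsigned int flags);
    };

An enlightenment offering no such assistance would simply leave the
hook NULL, and callers would keep using the existing IPI-based flush.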
> Note that the Xen implementation completely ignores the dirty CPU mask
> and the linear address passed in, and always performs a global TLB
> flush on all vCPUs. This is a limitation of the hypercall provided by
> Xen. Also note that local TLB flushes are not performed via the
> assisted TLB flush; only remote ones are.

As to this last sentence: isn't this wasteful, at least when a full
address space flush is being processed anyway?
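
A hypothetical sketch of the Xen-guest side of the hook (the
hypercall wrapper name is assumed) illustrates why the mask and
linear address are necessarily ignored: HVMOP_flush_tlbs takes no
argument describing either, so every assisted flush is global.

    /* Sketch only: wrapper name assumed. */
    static int flush_tlb(const cpumask_t *mask, const void *va,
                         unsigned int flags)
    {
        /*
         * The hypercall carries no cpumask and no address, so it
         * flushes the TLBs of all vCPUs unconditionally.
         */
        return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
    }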

> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Reviewed-by: Wei Liu <wl@xxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
