Re: [Xen-devel] [PATCH v3 7/7] x86/tlb: use Xen L0 assisted TLB flush when available
On Mon, Jan 27, 2020 at 07:11:15PM +0100, Roger Pau Monne wrote:
> Use Xen's L0 HVMOP_flush_tlbs hypercall in order to perform flushes.
> This greatly increases the performance of TLB flushes when running
> with a high number of vCPUs as a Xen guest, and is especially important
> when running in shim mode.
>
> The following figures are from a PV guest running `make -j32 xen` in
> shim mode with 32 vCPUs and HAP.
>
> Using x2APIC and ALLBUT shorthand:
> real    4m35.973s
> user    4m35.110s
> sys     36m24.117s
>
> Using L0 assisted flush:
> real    1m2.596s
> user    4m34.818s
> sys     5m16.374s
>
> The implementation adds a new hook to hypervisor_ops so other
> enlightenments can also implement such assisted flush just by filling
> the hook. Note that the Xen implementation completely ignores the
> dirty CPU mask and the linear address passed in, and always performs a
> global TLB flush on all vCPUs.
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Changes since v1:
>  - Add a L0 assisted hook to hypervisor ops.
> ---
>  xen/arch/x86/guest/hypervisor.c        | 10 ++++++++++
>  xen/arch/x86/guest/xen/xen.c           |  6 ++++++
>  xen/arch/x86/smp.c                     | 11 +++++++++++
>  xen/include/asm-x86/guest/hypervisor.h | 17 +++++++++++++++++
>  4 files changed, 44 insertions(+)
>
> diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
> index 4f27b98740..4085b19734 100644
> --- a/xen/arch/x86/guest/hypervisor.c
> +++ b/xen/arch/x86/guest/hypervisor.c
> @@ -18,6 +18,7 @@
>   *
>   * Copyright (c) 2019 Microsoft.
>   */
> +#include <xen/cpumask.h>
>  #include <xen/init.h>
>  #include <xen/types.h>
>
> @@ -64,6 +65,15 @@ void hypervisor_resume(void)
>          ops->resume();
>  }
>
> +int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
> +                         unsigned int order)
> +{
> +    if ( ops && ops->flush_tlb )
> +        return ops->flush_tlb(mask, va, order);
> +

Is there a way to make this an alternative call? I consider TLB flush a
frequent operation which could benefit from some optimisation.

This can be done as a later improvement if it is too difficult though.
This patch is already a substantial improvement on its own.

> +    return -ENOSYS;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
> index 6dbc5f953f..639a2a4b32 100644
> --- a/xen/arch/x86/guest/xen/xen.c
> +++ b/xen/arch/x86/guest/xen/xen.c
> @@ -310,11 +310,17 @@ static void resume(void)
>      pv_console_init();
>  }
>
> +static int flush_tlb(const cpumask_t *mask, const void *va,
> +                     unsigned int order)
> +{
> +    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
> +}
> +
>  static const struct hypervisor_ops ops = {
>      .name = "Xen",
>      .setup = setup,
>      .ap_setup = ap_setup,
>      .resume = resume,
> +    .flush_tlb = flush_tlb,
>  };
>
>  const struct hypervisor_ops *__init xg_probe(void)
> diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> index 65eb7cbda8..9bc925616a 100644
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -15,6 +15,7 @@
>  #include <xen/perfc.h>
>  #include <xen/spinlock.h>
>  #include <asm/current.h>
> +#include <asm/guest.h>
>  #include <asm/smp.h>
>  #include <asm/mc146818rtc.h>
>  #include <asm/flushtlb.h>
> @@ -256,6 +257,16 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
>      if ( (flags & ~FLUSH_ORDER_MASK) &&
>           !cpumask_subset(mask, cpumask_of(cpu)) )
>      {
> +        if ( cpu_has_hypervisor &&
> +             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
> +                         FLUSH_ORDER_MASK)) &&
> +             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
> +        {
> +            if ( tlb_clk_enabled )
> +                tlb_clk_enabled = false;
> +            return;
> +        }
> +

Per my understanding, not turning tlb_clk_enabled back to true after an
assisted flush fails is okay, because the effect of tlb_clk_enabled being
false is to make NEED_FLUSH always return true. That causes excessive
flushing, but it is okay in terms of correctness. Do I understand it
correctly?

Wei.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel