Re: [Xen-devel] [PATCH v5 2/7] x86/paging: add TLB flush hooks
On Fri, Feb 28, 2020 at 03:50:31PM +0100, Jan Beulich wrote:
> On 19.02.2020 18:43, Roger Pau Monne wrote:
> > Add shadow and hap implementation specific helpers to perform guest
> > TLB flushes. Note that the code for both is exactly the same at the
> > moment, and is copied from hvm_flush_vcpu_tlb. This will be changed by
> > further patches that will add implementation specific optimizations to
> > them.
> >
> > No functional change intended.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > Reviewed-by: Wei Liu <wl@xxxxxxx>
> > Acked-by: Tim Deegan <tim@xxxxxxx>
>
> This looks good in principle, with one possible anomaly:
>
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -3990,55 +3990,10 @@ static void hvm_s3_resume(struct domain *d)
> > bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
> > void *ctxt)
> > {
> > - static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
> > - cpumask_t *mask = &this_cpu(flush_cpumask);
> > - struct domain *d = current->domain;
> > - struct vcpu *v;
> > -
> > - /* Avoid deadlock if more than one vcpu tries this at the same time. */
> > - if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
> > - return false;
> > -
> > - /* Pause all other vcpus. */
> > - for_each_vcpu ( d, v )
> > - if ( v != current && flush_vcpu(ctxt, v) )
> > - vcpu_pause_nosync(v);
> > -
> > - /* Now that all VCPUs are signalled to deschedule, we wait... */
> > - for_each_vcpu ( d, v )
> > - if ( v != current && flush_vcpu(ctxt, v) )
> > - while ( !vcpu_runnable(v) && v->is_running )
> > - cpu_relax();
> > -
> > - /* All other vcpus are paused, safe to unlock now. */
> > - spin_unlock(&d->hypercall_deadlock_mutex);
> > -
> > - cpumask_clear(mask);
> > -
> > - /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
> > - for_each_vcpu ( d, v )
> > - {
> > - unsigned int cpu;
> > -
> > - if ( !flush_vcpu(ctxt, v) )
> > - continue;
> > -
> > - paging_update_cr3(v, false);
> > + struct domain *currd = current->domain;
> >
> > - cpu = read_atomic(&v->dirty_cpu);
> > - if ( is_vcpu_dirty_cpu(cpu) )
> > - __cpumask_set_cpu(cpu, mask);
> > - }
> > -
> > - /* Flush TLBs on all CPUs with dirty vcpu state. */
> > - flush_tlb_mask(mask);
> > -
> > - /* Done. */
> > - for_each_vcpu ( d, v )
> > - if ( v != current && flush_vcpu(ctxt, v) )
> > - vcpu_unpause(v);
> > -
> > - return true;
> > + return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
> > + : hap_flush_tlb(flush_vcpu, ctxt);
> > }
>
> Following our current model I think this should be a new pointer
> in struct paging_mode (then truly fitting "hooks" in the title).
I tried doing it that way, but there was something awkward about it:
the paging mode is per-vCPU, and hence I needed to do something like:

paging_get_hostmode(current)->flush(current->domain, ...)

I can try to move it to being a paging_mode hook if you prefer.
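
[For illustration, a minimal self-contained sketch of the hook variant
being discussed. The flush_tlb member name, the wrapper, and the
current_vcpu stand-in are assumptions for this sketch, not part of the
posted patch:

    #include <stdbool.h>

    struct vcpu;

    struct paging_mode {
        /* ... existing hooks (page_fault, update_cr3, ...) elided ... */
        /* Hypothetical per-mode guest TLB flush hook. */
        bool (*flush_tlb)(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt);
    };

    /* Stand-ins for Xen's current vCPU and per-vCPU mode lookup. */
    extern struct vcpu *current_vcpu;
    const struct paging_mode *paging_get_hostmode(struct vcpu *v);

    /* The dispatch has to go through the per-vCPU hostmode -- the
     * awkwardness described above, since the operation is really
     * per-domain. */
    bool paging_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt)
    {
        return paging_get_hostmode(current_vcpu)->flush_tlb(flush_vcpu,
                                                            ctxt);
    }
]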
> I can see the desire to avoid the indirect call though, but I
> also think that if we were to go that route, we should settle on
> switching around others as well which are paging mode dependent.
> (FAOD this is nothing I ask you to do here.) Andrew, thoughts?
I think it's already quite a mixed bag; see track_dirty_vram, for
example, which uses a similar model.
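
[For comparison, the direct-dispatch model the patch (and
track_dirty_vram) follows boils down to a visible branch on the
domain's paging mode instead of an indirect call through struct
paging_mode. A minimal sketch, with shadow_mode_enabled() and the two
helpers reduced to mock declarations:

    #include <stdbool.h>

    struct vcpu;
    /* Mock domain; the real shadow_mode_enabled() checks paging flags. */
    struct domain { bool shadow_enabled; };

    static inline bool shadow_mode_enabled(const struct domain *d)
    {
        return d->shadow_enabled;
    }

    /* Implementation-specific helpers; in the patch these live in the
     * shadow and hap code and both start out as copies of the old
     * hvm_flush_vcpu_tlb() body. */
    bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                          void *ctxt);
    bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                       void *ctxt);

    /* Direct dispatch on the paging mode, as in the hunk quoted above;
     * the function name here is illustrative. */
    bool flush_guest_tlb(struct domain *currd,
                         bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt)
    {
        return shadow_mode_enabled(currd)
               ? shadow_flush_tlb(flush_vcpu, ctxt)
               : hap_flush_tlb(flush_vcpu, ctxt);
    }
]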
Thanks, Roger.