
Re: [Xen-devel] [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall



> -----Original Message-----
> From: Paul Durrant
> Sent: 20 November 2015 13:41
> To: 'Jan Beulich'
> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
> xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: RE: [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
> 
> > -----Original Message-----
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: 20 November 2015 12:27
> > To: Paul Durrant
> > Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
> > xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> > Subject: RE: [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
> >
> > >>> On 20.11.15 at 10:15, <Paul.Durrant@xxxxxxxxxx> wrote:
> > >> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> > >> Sent: 19 November 2015 17:09
> > >> On 19/11/15 16:57, Paul Durrant wrote:
> > >> >> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> > >> >> Sent: 19 November 2015 16:07
> > >> >> On 19/11/15 13:19, Paul Durrant wrote:
> > >> >>> +        /*
> > >> >>> +         * Since ASIDs have now been flushed it just remains to
> > >> >>> +         * force any CPUs currently running target vCPUs out of non-
> > >> >>> +         * root mode. It's possible that re-scheduling has taken place
> > >> >>> +         * so we may unnecessarily IPI some CPUs.
> > >> >>> +         */
> > >> >>> +        if ( !cpumask_empty(pcpu_mask) )
> > >> >>> +            flush_tlb_mask(pcpu_mask);
> > >> >> Wouldn't it be easier to simply and input_params.vcpu_mask with
> > >> >> d->vcpu_dirty_mask ?
> > >> >>
> > >
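
For context, the scratch pcpu_mask consumed by the hunk above is presumably
populated just before it, along these lines (a sketch reconstructed from this
thread rather than quoted from the patch; currd and v are assumed to be the
usual domain/vCPU loop variables):

        for_each_vcpu ( currd, v )
        {
            if ( v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) )
                break;

            if ( !((input_params.vcpu_mask >> v->vcpu_id) & 1) )
                continue;

            hvm_asid_flush_vcpu(v);    /* invalidate this vCPU's ASID */

            if ( v->is_running )
                cpumask_set_cpu(v->processor, pcpu_mask);
        }

Note the translation step: the bits of vcpu_mask index virtual CPUs, while
pcpu_mask collects physical CPU ids via v->processor.
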
> > > Actually I realise your original statement makes no sense anyway. There
> > > is no such mask as d->vcpu_dirty_mask. There is d->domain_dirty_cpumask,
> > > which is a mask of *physical* CPUs, but since (as the name implies)
> > > input_params.vcpu_mask is a mask of *virtual* CPUs, ANDing the two
> > > together would just yield garbage.
> > >
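
(For reference, input_params above is the guest-supplied input block of the
HvFlushVirtualAddressSpace/List hypercall. A rough sketch of its shape, with
the third field named vcpu_mask as in this thread; the other field names
follow the Hyper-V TLFS layout and are my paraphrase, not quoted from the
patch:)

        struct {
            uint64_t address_space;  /* guest address space to flush          */
            uint64_t flags;          /* e.g. flush-all-processors/spaces bits */
            uint64_t vcpu_mask;      /* bit N set => target virtual CPU N     */
        } input_params;

Being a single 64-bit word, it can only name vCPUs 0-63, hence the
v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) checks in the loops in
this thread.
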
> > >> > No, that may yield much too big a mask. All we need here is a mask of
> > >> > where the vcpus are running *now*, not everywhere they've been.
> > >>
> > >> The dirty mask is a "currently scheduled on" mask.
> > >
> > > No it's not. The comment in sched.h clearly states that
> > > domain_dirty_cpumask is a "Bitmask of CPUs which are holding onto this
> > > domain's state", which is, as I said before, essentially everywhere the
> > > domain's vCPUs have been scheduled since the last time state was
> > > flushed. Since, in this case, I have already invalidated ASIDs for all
> > > targeted virtual CPUs I don't need to IPI that many physical CPUs; I
> > > only need the mask of where the virtual CPUs are *currently* running.
> > > If one of them gets descheduled before the IPI then the IPI was
> > > unnecessary (but there is no low-cost way of determining or preventing
> > > that).
> >
> > While you can't "and" that mask into input_params.vcpu_mask,
> > wouldn't using it allow you to avoid the scratch pCPU mask
> > variable?
> >
> 
> I'm not sure. After flushing the ASIDs I guess I could start with the
> domain_dirty_cpumask, remove non-running vCPUs from it, and then use it
> as the flush mask. If I do that, I suppose I ought to reset the
> vcpu_dirty_vcpumask values too. Something like...
> 
>         for_each_vcpu ( currd, v )
>         {
>             if ( v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) )
>                 break;
> 
>             if ( !((input_params.vcpu_mask >> v->vcpu_id) & 1) )
>                 continue;
> 
>             hvm_asid_flush_vcpu(v);
>             cpumask_clear(v->vcpu_dirty_vcpumask);
> 
>             if ( v->is_running )
>                 cpumask_set_cpu(v->processor, v->vcpu_dirty_vcpumask);
>             else
>                 cpumask_clear_cpu(v->processor, d->domain_dirty_vcpumask);
>         }
> 
>         if ( !cpumask_empty(d->domain_dirty_vcpumask) )
>             flush_tlb_mask(d->domain_dirty_vcpumask);
> 

s/vcpumask/cpumask in the above. You hopefully got what I meant though.
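
With that substitution applied, and using currd consistently for the domain
(the sketch above mixes currd and d; treating them as the same domain is my
reading), the proposed loop would read roughly:

        for_each_vcpu ( currd, v )
        {
            if ( v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) )
                break;

            if ( !((input_params.vcpu_mask >> v->vcpu_id) & 1) )
                continue;

            hvm_asid_flush_vcpu(v);
            cpumask_clear(v->vcpu_dirty_cpumask);

            if ( v->is_running )
                cpumask_set_cpu(v->processor, v->vcpu_dirty_cpumask);
            else
                cpumask_clear_cpu(v->processor, currd->domain_dirty_cpumask);
        }

        if ( !cpumask_empty(currd->domain_dirty_cpumask) )
            flush_tlb_mask(currd->domain_dirty_cpumask);

The idea being to prune domain_dirty_cpumask and then use it directly as the
flush mask, avoiding the scratch pCPU mask Jan referred to.
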

> Maybe?
> 
>   Paul
> 
> > Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

