
Re: [Xen-devel] [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall



>>> On 20.11.15 at 10:15, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
>> Sent: 19 November 2015 17:09
>> On 19/11/15 16:57, Paul Durrant wrote:
>> >> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
>> >> Sent: 19 November 2015 16:07
>> >> On 19/11/15 13:19, Paul Durrant wrote:
>> >>> +        /*
>> >>> +         * Since ASIDs have now been flushed it just remains to
>> >>> +         * force any CPUs currently running target vCPUs out of non-
>> >>> +         * root mode. It's possible that re-scheduling has taken place
>> >>> +         * so we may unnecessarily IPI some CPUs.
>> >>> +         */
>> >>> +        if ( !cpumask_empty(pcpu_mask) )
>> >>> +            flush_tlb_mask(pcpu_mask);
>> >> Wouldn't it be easier to simply AND input_params.vcpu_mask with
>> >> d->vcpu_dirty_mask?
>> >>
> 
> Actually I realise your original statement makes no sense anyway. There is 
> no such mask as d->vcpu_dirty_mask. There is d->domain_dirty_cpumask, which 
> is a mask of *physical* CPUs, but since (as the name implies) 
> input_params.vcpu_mask is a mask of *virtual* CPUs, ANDing the two together 
> would just yield garbage. 
> 
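For illustration, the two masks index entirely different spaces, so a
bitwise AND of them is meaningless. A sketch (names as used in the
thread):

    /* vCPU-indexed: bit n refers to the guest's vCPU n. */
    uint64_t vcpu_mask = input_params.vcpu_mask;

    /* pCPU-indexed: bit n refers to host physical CPU n. */
    const cpumask_t *dirty = d->domain_dirty_cpumask;

    /* "vcpu_mask AND dirty" would relate vCPU n to pCPU n, which
     * have no connection to each other; a vCPU-to-pCPU translation
     * (e.g. via v->processor) has to happen first. */
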
>> > No, that may yield much too big a mask. All we need here is a mask of
>> > where the vCPUs are running *now*, not everywhere they've been.
>> 
>> The dirty mask is a "currently scheduled on" mask.
> 
> No, it's not. The comment in sched.h clearly states that domain_dirty_cpumask 
> is a "Bitmask of CPUs which are holding onto this domain's state", which is, 
> as I said before, essentially everywhere the domain's vCPUs have been 
> scheduled since the last time state was flushed. Since, in this case, I have 
> already invalidated ASIDs for all targeted virtual CPUs, I don't need to IPI 
> that many physical CPUs; I only need the mask of where the virtual CPUs are 
> *currently* running. If one of them gets descheduled before the IPI, then 
> the IPI was unnecessary (but there is no low-cost way of determining or 
> preventing that).
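
A minimal sketch of the flow being described (hypothetical shape, not
the exact patch; hvm_asid_flush_vcpu(), for_each_vcpu() and the vcpu
fields are standard Xen, and pcpu_mask is the scratch per-pCPU mask
from the hunk above):

    struct vcpu *v;

    cpumask_clear(pcpu_mask);
    for_each_vcpu ( d, v )
    {
        /* Skip vCPUs not targeted by the hypercall's 64-bit mask. */
        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
            continue;

        /* Invalidate this vCPU's ASID so its next VM entry
         * implicitly flushes its TLB. */
        hvm_asid_flush_vcpu(v);

        /* Only note pCPUs where a targeted vCPU is running *now*. */
        if ( v->is_running )
            cpumask_set_cpu(v->processor, pcpu_mask);
    }

    /* Force those pCPUs out of non-root mode; a vCPU descheduled
     * in the meantime makes its IPI unnecessary but harmless. */
    if ( !cpumask_empty(pcpu_mask) )
        flush_tlb_mask(pcpu_mask);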

While you can't "and" that mask into input_params.vcpu_mask,
wouldn't using it allow you to avoid the scratch pCPU mask
variable?
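
For comparison, the suggestion amounts to something like the following
sketch (trading extra IPIs, to every pCPU still holding the domain's
state, for not needing the scratch mask at all):

    /* ASIDs were already flushed per-vCPU above, so IPIing the
     * whole dirty mask is wasteful but safe. */
    flush_tlb_mask(d->domain_dirty_cpumask);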

Jan




 

