
Re: [Xen-devel] [PATCH v3 6/9] x86/np2m: send flush IPIs only when a vcpu is actively using an np2m



On 10/03/2017 04:21 PM, Sergey Dyasli wrote:
> Flush IPIs are sent to all cpus in an np2m's dirty_cpumask when the
> np2m is updated.  This mask, however, is far too broad.  A pcpu's bit
> is set in the cpumask when a vcpu runs on that pcpu, but is only
> cleared when a flush happens.  This means that the IPI includes the
> current pcpu of vcpus that are not currently running, and also
> includes any pcpu that has ever had a vcpu use this p2m since the
> last flush (which in turn will cause spurious invalidations if a
> different vcpu is using an np2m).
> 
> Avoid these IPIs by keeping closer track of where an np2m is being used,
> and when a vcpu needs to be flushed:
> 
> - On schedule-out, clear v->processor in p2m->dirty_cpumask
> - Add a 'generation' counter to the p2m and nestedvcpu structs to
>   detect changes that would require re-loads on re-entry
> - On schedule-in or p2m change:
>   - Set v->processor in p2m->dirty_cpumask
>   - flush the vcpu's nested p2m pointer (and update nv->generation) if
>     the generation changed
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> ---
> v2 --> v3:
> - current pointer is now calculated only once in np2m_schedule()
> - Replaced "shadow p2m" with "np2m" for consistency in commit message

Looks good, thanks!
 -George
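
[Editorial note: below is a minimal, self-contained C sketch of the
generation-based tracking the changelog above describes.  Every
structure, field, and function name in it (struct p2m, struct
nestedvcpu, np2m_generation, nvcpu_flush, np2m_schedule_out/in, and so
on) is an assumption made for illustration, not the actual Xen code or
the names used in the patch.]

    #include <stdint.h>

    typedef uint64_t cpumask_t;  /* stand-in for Xen's cpumask_t */

    static void cpumask_set_cpu(unsigned int cpu, cpumask_t *mask)
    {
        *mask |= (cpumask_t)1 << cpu;
    }

    static void cpumask_clear_cpu(unsigned int cpu, cpumask_t *mask)
    {
        *mask &= ~((cpumask_t)1 << cpu);
    }

    struct p2m {
        cpumask_t dirty_cpumask;   /* pcpus that must get flush IPIs */
        uint64_t generation;       /* bumped whenever the np2m changes */
    };

    struct nestedvcpu {
        struct p2m *np2m;          /* np2m this vcpu last loaded */
        uint64_t np2m_generation;  /* np2m->generation at last load */
    };

    struct vcpu {
        unsigned int processor;    /* pcpu this vcpu runs on */
        struct nestedvcpu nv;
    };

    /* Hypothetical helper: drop the vcpu's cached nested p2m state. */
    static void nvcpu_flush(struct vcpu *v)
    {
        (void)v;
    }

    /* Schedule-out: this pcpu no longer needs flush IPIs for the np2m. */
    static void np2m_schedule_out(struct vcpu *v)
    {
        if ( v->nv.np2m )
            cpumask_clear_cpu(v->processor, &v->nv.np2m->dirty_cpumask);
    }

    /*
     * Schedule-in (or p2m change): rejoin the dirty mask, and reload
     * the nested p2m state if its generation moved while the vcpu was
     * descheduled.
     */
    static void np2m_schedule_in(struct vcpu *v)
    {
        struct p2m *p2m = v->nv.np2m;

        if ( !p2m )
            return;

        cpumask_set_cpu(v->processor, &p2m->dirty_cpumask);
        if ( v->nv.np2m_generation != p2m->generation )
        {
            nvcpu_flush(v);
            v->nv.np2m_generation = p2m->generation;
        }
    }

The point of the generation check is that a descheduled vcpu costs
nothing: an np2m update only bumps the counter, and the reload is
deferred to the vcpu's next schedule-in instead of being forced by an
IPI to a pcpu that may not even be running the vcpu anymore.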
