
Re: [Xen-devel] [PATCH v3] x86/VPMU: Clear last_vcpu when destroying VPMU



On Wed, Dec 17, 2014 at 10:35:47AM -0500, Boris Ostrovsky wrote:
> We need to make sure that last_vcpu is not pointing to the VCPU whose
> VPMU is being destroyed. Otherwise we may try to dereference it in
> the future, when the VCPU is gone.
> 
> We have to do this atomically since otherwise there is a (somewhat
> theoretical) chance that between test and subsequent clearing
> of last_vcpu the remote processor (i.e. vpmu->last_pcpu) might do
> both vpmu_load() and then vpmu_save() for another VCPU. The former
> will clear last_vcpu and the latter will set it to something else.
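
The pattern relied on here is a compare-and-exchange that clears the shared
pointer only if it still holds the expected value; a concurrent update by the
remote pCPU simply makes the exchange fail. A minimal standalone sketch of
that idea, using C11 atomics and a plain static variable instead of Xen's
cmpxchg() and per_cpu() (illustration only, not the Xen code):

    #include <stdatomic.h>
    #include <stddef.h>

    struct vcpu;                               /* opaque for this sketch */

    /* Stands in for per_cpu(last_vcpu, vpmu->last_pcpu). */
    static _Atomic(struct vcpu *) last_vcpu;

    static void clear_last_vcpu_if_ours(struct vcpu *v)
    {
        struct vcpu *expected = v;

        /*
         * Store NULL only if last_vcpu still equals v.  If the remote pCPU
         * ran vpmu_load()/vpmu_save() in between and installed another vCPU,
         * the exchange fails and that newer value is left untouched.
         */
        atomic_compare_exchange_strong(&last_vcpu, &expected,
                                       (struct vcpu *)NULL);
    }
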
> 
> We should also check for VPMU_CONTEXT_ALLOCATED in vpmu_destroy() to
> avoid unnecessary cmpxchg() and arch-specific destroy ops. Thus
> checks in AMD and Intel routines are no longer needed.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/svm/vpmu.c       |    3 ---
>  xen/arch/x86/hvm/vmx/vpmu_core2.c |    2 --
>  xen/arch/x86/hvm/vpmu.c           |    7 +++++++
>  3 files changed, 7 insertions(+), 5 deletions(-)
> 
> Changes in v3:
>  * Use cmpxchg instead of IPI
>  * Use correct routine names in commit message
>  * Remove duplicate test for VPMU_CONTEXT_ALLOCATED in arch-specific
>    destroy ops
> 
> Changes in v2:
>  * Test last_vcpu locally before IPI
>  * Don't handle local pcpu as a special case --- on_selected_cpus
>    will take care of that
>  * Don't cast variables unnecessarily
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 8e07a98..4c448bb 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -403,9 +403,6 @@ static void amd_vpmu_destroy(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>  
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
> -        return;
> -
>      if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
>          amd_vpmu_unset_msr_bitmap(v);
>  
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 68b6272..590c2a9 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -818,8 +818,6 @@ static void core2_vpmu_destroy(struct vcpu *v)
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
>  
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
> -        return;
>      xfree(core2_vpmu_cxt->pmu_enable);
>      xfree(vpmu->context);
>      if ( cpu_has_vmx_msr_bitmap )
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 1df74c2..7cc95ae 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -250,6 +250,13 @@ void vpmu_initialise(struct vcpu *v)
>  void vpmu_destroy(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> +    struct vcpu **last = &per_cpu(last_vcpu, vpmu->last_pcpu);
> +
> +    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
> +        return;
> +

Could this just be:

        (void)cmpxchg(&per_cpu(last_vcpu, vpmu->last_pcpu), v, NULL);

so that we don't do the 'per_cpu' access at all when we return early
because VPMU_CONTEXT_ALLOCATED is not set?
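
i.e. something along these lines (just a sketch of the reordering being
suggested, folding the per_cpu() access into the cmpxchg() call so it only
happens once we know a context was allocated; not compile-tested):

    void vpmu_destroy(struct vcpu *v)
    {
        struct vpmu_struct *vpmu = vcpu_vpmu(v);

        if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
            return;

        /* Need to clear last_vcpu in case it points to v. */
        (void)cmpxchg(&per_cpu(last_vcpu, vpmu->last_pcpu), v, NULL);

        if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
            vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
    }
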

Either way,

Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

Thank you for spotting this.
> +    /* Need to clear last_vcpu in case it points to v */
> +    (void)cmpxchg(last, v, NULL);
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
>          vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
> -- 
> 1.7.1
> 
