
Re: [Xen-devel] [PATCH 1/3] VMX: Properly adjust the status of pi descriptor



>>> On 20.05.16 at 10:53, <feng.wu@xxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -107,12 +107,22 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>  static void vmx_vcpu_block(struct vcpu *v)
>  {
>      unsigned long flags;
> -    unsigned int dest;
> +    unsigned int dest = cpu_physical_id(v->processor);
>      spinlock_t *old_lock;
>      spinlock_t *pi_blocking_list_lock =
>               &per_cpu(vmx_pi_blocking, v->processor).lock;
>      struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>  
> +    if (v->arch.hvm_vmx.pi_back_from_hotplug == 1)

Coding style (missing blanks inside the if() parentheses). Also the flag
is of boolean nature, so it should be declared bool_t, and you should
drop the "== 1" here.

> +    {
> +        write_atomic(&pi_desc->ndst,
> +                     x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
> +        write_atomic(&pi_desc->nv, posted_intr_vector);
> +        pi_clear_sn(pi_desc);
> +
> +        v->arch.hvm_vmx.pi_back_from_hotplug = 0;
> +    }
> +
>      spin_lock_irqsave(pi_blocking_list_lock, flags);
>      old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
>                         pi_blocking_list_lock);
> @@ -130,8 +140,6 @@ static void vmx_vcpu_block(struct vcpu *v)
>  
>      ASSERT(!pi_test_sn(pi_desc));
>  
> -    dest = cpu_physical_id(v->processor);
> -
>      ASSERT(pi_desc->ndst ==
>             (x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK)));
>  
> @@ -164,6 +172,16 @@ static void vmx_pi_do_resume(struct vcpu *v)
>      unsigned long flags;
>      spinlock_t *pi_blocking_list_lock;
>      struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> +    unsigned int dest = cpu_physical_id(v->processor);
> +
> +    if (v->arch.hvm_vmx.pi_back_from_hotplug == 1)
> +    {
> +        write_atomic(&pi_desc->ndst,
> +                     x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
> +        pi_clear_sn(pi_desc);
> +
> +        v->arch.hvm_vmx.pi_back_from_hotplug = 0;
> +    }

Compared with the adjustment above you don't write ->nv here, as
that happens ...

>      ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));

after this ASSERT() (which suggests that your code addition would be
better placed here). That raises the question of whether the entire
body of the first hunk's if() should move into a helper function,
which could then also be used here. That would also address my
question about ordering: shouldn't pi_clear_sn() always happen last?

Furthermore, are both of these additions really eliminating the
issue, rather than just reducing the likelihood of it occurring? I.e.
what if the new flag gets set immediately after either of the if()-s
above?

> @@ -202,9 +220,14 @@ static void vmx_pi_do_resume(struct vcpu *v)
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_assign(struct domain *d)
>  {
> +    struct vcpu *v;
> +
>      if ( !iommu_intpost || !has_hvm_container_domain(d) )
>          return;
>  
> +    for_each_vcpu ( d, v )
> +        v->arch.hvm_vmx.pi_back_from_hotplug = 1;

Thinking of possible alternatives: What about doing the necessary
adjustments right here, instead of setting the new flag?

Of course it may still be the case that the whole issue means we
indeed shouldn't zap those hooks upon device removal.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel