Re: [Xen-devel] [PATCH v4 2/6] VMX: Properly handle pi when all the assigned devices are removed
 >>> On 21.09.16 at 04:37, <feng.wu@xxxxxxxxx> wrote:
> +static void vmx_pi_list_cleanup(struct vcpu *v)
> +{
> +    vmx_pi_list_remove(v);
> +}
Please avoid such a no-op wrapper - the caller can easily call
vmx_pi_list_remove() directly.
> @@ -215,13 +225,28 @@ void vmx_pi_hooks_assign(struct domain *d)
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_deassign(struct domain *d)
>  {
> +    struct vcpu *v;
> +
>      if ( !iommu_intpost || !has_hvm_container_domain(d) )
>          return;
>  
>      ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
>  
> +    /*
> +     * Pause the domain to ensure that no vCPU is running, and
> +     * hence that no hook can be invoked concurrently, while the
> +     * PI hooks are deassigned and the vCPUs are removed from the
> +     * blocking list.
> +     */
> +    domain_pause(d);
> +
>      d->arch.hvm_domain.vmx.vcpu_block = NULL;
>      d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> +
> +    for_each_vcpu ( d, v )
> +        vmx_pi_list_cleanup(v);
> +
> +    domain_unpause(d);
>  }
So you continue using pausing, and I continue to miss the argumentation
of why you can't do without it (the earlier discussion was on patch 4,
but it obviously applies here as well).
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel