
Re: [Xen-devel] [PATCHv1 2/2] passthrough: improve locking when iterating over interrupts bound to VMs



On Fri, Oct 23, 2015 at 12:05:22PM +0100, David Vrabel wrote:
> radix_tree_gang_lookup() only requires a RCU read lock, not the
> per-domain event_lock.

Don't you need to make some form of 'spin_lock_init' call?
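
If it does, a minimal sketch of where that initialisation would go,
next to the existing pirq_tree setup in domain_create()
(xen/common/domain.c) - the rcu_lock_init() helper here is
hypothetical:

    radix_tree_init(&d->pirq_tree);      /* already there today */
    rcu_lock_init(&d->pirq_rcu_lock);    /* hypothetical; only needed if
                                          * rcu_read_lock_t carries
                                          * runtime state - static users
                                          * such as domlist_read_lock get
                                          * by with DEFINE_RCU_READ_LOCK
                                          * and no runtime init. */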

> 
> Introduce a new RCU read lock and take the per-interrupt lock before
> calling the callback instead.
> 
> This eliminates all contention on the event_lock when injecting
> interrupts from passthrough devices.

This pt_pirq_iterate() is called from:

hvm_migrate_pirqs (which only uses the irq_desc lock) - so you could
also drop the event_lock there (see the sketch after this list).

pt_irq_time_out (which is only called for legacy interrupts, and
only on the 8ms timeout). Uses event_lock.

pci_clean_dpci_irqs (during domain shutdown, uses event_lock).
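
For the first of these, a sketch of what dropping the event_lock would
look like, assuming hvm_migrate_pirqs() in xen/arch/x86/hvm/hvm.c
otherwise keeps its current shape:

    void hvm_migrate_pirqs(struct vcpu *v)
    {
        struct domain *d = v->domain;

        if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
            return;

        /* No event_lock here any more: pt_pirq_iterate() now takes the
         * RCU read lock and the per-pirq lock itself. */
        pt_pirq_iterate(d, migrate_pirq, v);
    }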

Also, should this RCU lock be taken in 'pirq_guest_unmask' (which,
oddly enough, holds no lock at all?!)?
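
Something like this, wrapping its gang-lookup loop the same way your
patch wraps pt_pirq_iterate()'s (a sketch - the existing per-pirq
unmask/EOI logic from xen/arch/x86/irq.c is elided):

    rcu_read_lock(&d->pirq_rcu_lock);
    do {
        n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
                                   ARRAY_SIZE(pirqs));
        /* ... existing per-pirq unmask/EOI logic ... */
    } while ( ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
    rcu_read_unlock(&d->pirq_rcu_lock);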

Reading radix-tree.h, it says: "
..
The notable exceptions to this rule are the following functions:
 ...
  radix_tree_gang_lookup

The first 7 functions are able to be called locklessly, using RCU. The
caller must ensure calls to these functions are made within rcu_read_lock()
regions."

Great, so that is what your patch does. Though that would imply that
everybody using radix_tree_gang_lookup on pirq_tree needs to use RCU?

> 
> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
> ---
>  xen/drivers/passthrough/io.c | 10 +++++++---
>  xen/include/xen/sched.h      |  1 +
>  2 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> index 7c86c20..9b51ef0 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -601,7 +601,7 @@ int pt_pirq_iterate(struct domain *d,
>      unsigned int pirq = 0, n, i;
>      struct pirq *pirqs[8];
>  
> -    ASSERT(spin_is_locked(&d->event_lock));
> +    rcu_read_lock(&d->pirq_rcu_lock);
>  
>      do {
>          n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
> @@ -610,12 +610,18 @@ int pt_pirq_iterate(struct domain *d,
>          {
>              struct hvm_pirq_dpci *pirq_dpci = pirq_dpci(pirqs[i]);
>  
> +            spin_lock(&pirq_dpci->lock);
> +
>              pirq = pirqs[i]->pirq;
>              if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
>                  rc = cb(d, pirq_dpci, arg);
> +
> +            spin_unlock(&pirq_dpci->lock);
>          }
>      } while ( !rc && ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
>  
> +    rcu_read_unlock(&d->pirq_rcu_lock);
> +
>      return rc;
>  }
>  
> @@ -678,9 +684,7 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
>      if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
>         return;
>  
> -    spin_lock(&d->event_lock);
>      pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
> -    spin_unlock(&d->event_lock);
>  }
>  
>  static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 3729b0f..ae98c1e 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -355,6 +355,7 @@ struct domain
>       */
>      struct radix_tree_root pirq_tree;
>      unsigned int     nr_pirqs;
> +    rcu_read_lock_t pirq_rcu_lock;
>  
>      enum guest_type guest_type;
>  
> -- 
> 2.1.4
