
Re: [Xen-devel] [PATCH] x86: fix ordering of operations in destroy_irq()



On 29/05/2013 07:58, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> The fix for XSA-36, switching the default of vector map management to
> be per-device, exposed more readily a problem with the cleanup of these
> vector maps: dynamic_irq_cleanup() clearing desc->arch.used_vectors
> keeps the subsequently invoked clear_irq_vector() from clearing the
> bits for both the in-use and a possibly still outstanding old vector.
> 
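A standalone model of the hazard described above may help; the structures
below are simplified stand-ins (not the Xen code), with names mirroring the
patch. With the pre-patch ordering the vector-map pointer is gone before
clear_irq_vector() runs, so the bits for the in-use and the old vector stay
set in the per-device map:

    #include <stdio.h>

    /* Stand-ins for the per-device vector map and the relevant irq_desc
     * fields; names mirror the patch, the real Xen structures differ. */
    struct vector_map { unsigned long bits; };
    struct desc_model {
        int vector, old_vector;
        struct vector_map *used_vectors;
    };

    /* Roughly what clear_irq_vector() needs used_vectors for: dropping
     * the bits of both the in-use and the outstanding old vector. */
    static void clear_vector_bits(struct desc_model *desc)
    {
        if ( desc->used_vectors )
        {
            desc->used_vectors->bits &= ~(1UL << desc->vector);
            if ( desc->old_vector > 0 )
                desc->used_vectors->bits &= ~(1UL << desc->old_vector);
        }
    }

    int main(void)
    {
        struct vector_map map = { (1UL << 5) | (1UL << 9) };
        struct desc_model desc = { 9, 5, &map };

        desc.used_vectors = NULL;   /* old ordering: dynamic_irq_cleanup() ... */
        clear_vector_bits(&desc);   /* ... then clear_irq_vector(): too late   */

        printf("vector bits still set: %#lx\n", map.bits);  /* both bits leaked */
        return 0;
    }
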
> Fix this by folding dynamic_irq_cleanup() into destroy_irq(), which was
> its only caller, deferring the clearing of the vector map pointer until
> after clear_irq_vector().
> 
> Once at it, also defer resetting of desc->handler until after the loop
> around smp_mb() checking for IRQ_INPROGRESS to be clear, fixing a
> (mostly theoretical) issue with the interaction with do_IRQ(): If we
> don't defer the pointer reset, do_IRQ() could, for non-guest IRQs, call
> ->ack() and ->end() with different ->handler pointers, potentially
> leading to an IRQ remaining un-acked. The issue is mostly theoretical
> because non-guest IRQs are subject to destroy_irq() only on (boot time)
> error paths.
> 
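The do_IRQ() scenario, modelled in isolation (the irq_type structure below is
a simplified stand-in, not Xen's hw_interrupt_type): if the handler pointer is
reset before the IRQ_INPROGRESS wait, the two desc->handler dereferences in
do_IRQ() can resolve to different handlers:

    #include <stdio.h>

    /* Simplified stand-in; only ->ack()/->end() matter here. */
    struct irq_type_model {
        void (*ack)(void);
        void (*end)(void);
    };

    static void real_ack(void) { puts("ack via the device's handler"); }
    static void real_end(void) { puts("end via the device's handler"); }
    static void noop(void)     { /* no_irq_type: ack/end do nothing */ }

    static struct irq_type_model real_type = { real_ack, real_end };
    static struct irq_type_model none_type = { noop, noop };  /* ~ no_irq_type */

    int main(void)
    {
        struct irq_type_model *handler = &real_type;   /* ~ desc->handler */

        handler->ack();          /* do_IRQ(): ->ack() through the old handler */

        handler = &none_type;    /* destroy_irq() on another CPU resets the
                                  * pointer before the IRQ_INPROGRESS wait
                                  * (the pre-patch ordering)                  */

        handler->end();          /* do_IRQ(): ->end() now hits the no-op type,
                                  * so the line is left un-acked/un-ended     */
        return 0;
    }
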
> As to the changed locking: Invoking clear_irq_vector() with desc->lock
> held is okay because vector_lock already nests inside desc->lock (proven
> by set_desc_affinity(), which takes vector_lock and gets called from
> various desc->handler->ack implementations, getting invoked with
> desc->lock held).
> 
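The nesting order being relied on, sketched with pthread mutexes standing in
for the spinlocks (illustration only, not the Xen locking code): desc->lock
stays the outer lock and vector_lock the inner one, so taking vector_lock
under desc->lock is consistent with the set_desc_affinity() paths:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t desc_lock   = PTHREAD_MUTEX_INITIALIZER;  /* outer */
    static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;  /* inner */

    static void clear_irq_vector_model(void)
    {
        pthread_mutex_lock(&vector_lock);    /* inner, as clear_irq_vector() does */
        /* ... vector bookkeeping ... */
        pthread_mutex_unlock(&vector_lock);
    }

    int main(void)
    {
        pthread_mutex_lock(&desc_lock);      /* destroy_irq() holds desc->lock ... */
        clear_irq_vector_model();            /* ... and takes vector_lock inside,  */
        pthread_mutex_unlock(&desc_lock);    /* matching the established order     */
        puts("nesting: desc->lock (outer) -> vector_lock (inner)");
        return 0;
    }
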
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Keir Fraser <keir@xxxxxxx>

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -197,12 +197,24 @@ int create_irq(int node)
>      return irq;
>  }
>  
> -static void dynamic_irq_cleanup(unsigned int irq)
> +void destroy_irq(unsigned int irq)
>  {
>      struct irq_desc *desc = irq_to_desc(irq);
>      unsigned long flags;
>      struct irqaction *action;
>  
> +    BUG_ON(!MSI_IRQ(irq));
> +
> +    if ( dom0 )
> +    {
> +        int err = irq_deny_access(dom0, irq);
> +
> +        if ( err )
> +            printk(XENLOG_G_ERR
> +                   "Could not revoke Dom0 access to IRQ%u (error %d)\n",
> +                   irq, err);
> +    }
> +
>      spin_lock_irqsave(&desc->lock, flags);
>      desc->status  |= IRQ_DISABLED;
>      desc->status  &= ~IRQ_GUEST;
> @@ -210,16 +222,19 @@ static void dynamic_irq_cleanup(unsigned
>      action = desc->action;
>      desc->action  = NULL;
>      desc->msi_desc = NULL;
> -    desc->handler = &no_irq_type;
> -    desc->arch.used_vectors = NULL;
>      cpumask_setall(desc->affinity);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  
>      /* Wait to make sure it's not being used on another CPU */
>      do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS );
>  
> -    if (action)
> -        xfree(action);
> +    spin_lock_irqsave(&desc->lock, flags);
> +    desc->handler = &no_irq_type;
> +    clear_irq_vector(irq);
> +    desc->arch.used_vectors = NULL;
> +    spin_unlock_irqrestore(&desc->lock, flags);
> +
> +    xfree(action);
>  }
>  
>  static void __clear_irq_vector(int irq)
> @@ -286,24 +301,6 @@ void clear_irq_vector(int irq)
>      spin_unlock_irqrestore(&vector_lock, flags);
>  }
>  
> -void destroy_irq(unsigned int irq)
> -{
> -    BUG_ON(!MSI_IRQ(irq));
> -
> -    if ( dom0 )
> -    {
> -        int err = irq_deny_access(dom0, irq);
> -
> -        if ( err )
> -            printk(XENLOG_G_ERR
> -                   "Could not revoke Dom0 access to IRQ%u (error %d)\n",
> -                   irq, err);
> -    }
> -
> -    dynamic_irq_cleanup(irq);
> -    clear_irq_vector(irq);
> -}
> -
>  int irq_to_vector(int irq)
>  {
>      int vector = -1;
> 
> 
> 


