
[Xen-devel] Ping: [PATCH] x86/IO-APIC: fix guest RTE write corner cases



>>> On 08.05.13 at 14:51, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> This fixes two regressions from c/s 20143:a7de5bd776ca ("x86: Make the
> hypercall PHYSDEVOP_alloc_irq_vector hypercall dummy"):
> 
> For one, IRQs that had their vector set up by Xen internally without a
> handler ever having been set (e.g. via "com<n>=..." without a matching
> consumer option like "console=com<n>") would wrongly call
> add_pin_to_irq() here, triggering the BUG_ON() in that function.
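
To illustrate the first regression: assuming (as the report implies)
that the BUG_ON() in add_pin_to_irq() is the one rejecting an (apic,
pin) pair that is already on the IRQ's list, the fatal sequence boils
down to the toy below. All names here are illustrative stand-ins, not
the Xen code:

    #include <assert.h>
    #include <stddef.h>

    #define BUG_ON(cond) assert(!(cond))  /* stand-in for Xen's BUG_ON() */

    struct pin_entry { int apic, pin; struct pin_entry *next; };

    /* Illustrative duplicate check: refuse to re-add a listed pin. */
    static void add_pin(struct pin_entry *head, int apic, int pin)
    {
        for ( struct pin_entry *e = head; e; e = e->next )
            BUG_ON(e->apic == apic && e->pin == pin);
        /* actual insertion elided */
    }

    int main(void)
    {
        /* Suppose boot-time IO-APIC setup already linked (apic 0, pin 4). */
        struct pin_entry head = { .apic = 0, .pin = 4, .next = NULL };

        add_pin(&head, 0, 5);  /* different pin: fine */
        add_pin(&head, 0, 4);  /* Dom0's later RTE write re-adds the same
                                  pin -> trips the BUG_ON() */
        return 0;
    }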
> 
> Second, when assign_irq_vector() fails, this addition to irq_2_pin[]
> needs to be undone.
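
The second fix, then, is the usual rollback-on-failure shape. A minimal
runnable sketch of that shape, with toy stand-ins (pin_linked,
assign_vector() and rte_write() are made up here, not the Xen
functions):

    #include <stdio.h>

    static int pin_linked;              /* toy stand-in for irq_2_pin[] state */

    static int assign_vector(int fail)  /* stand-in for assign_irq_vector() */
    {
        return fail ? -22 : 0x30;       /* an error code, or a vector */
    }

    static int rte_write(int fail)
    {
        pin_linked = 1;                 /* add_pin_to_irq() */
        int rc = assign_vector(fail);
        if ( rc < 0 )
            pin_linked = 0;             /* remove_pin_from_irq(): undo */
        return rc < 0 ? rc : 0;
    }

    int main(void)
    {
        rte_write(1);                   /* force the failure path */
        printf("pin linked after failed write: %d\n", pin_linked);  /* 0 */
        return 0;
    }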
> 
> In this context I'm also surprised that the irq_2_pin[]
> manipulations here occur without any lock, i.e. they rely on Dom0 to do
> some sort of serialization.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Does no-one have any opinion? I'm hesitant to commit changes like
this without anyone having said a word...

Jan

> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -133,6 +133,37 @@ static void add_pin_to_irq(unsigned int 
>      share_vector_maps(irq_2_pin[irq].apic, apic);
>  }
>  
> +static void remove_pin_from_irq(unsigned int irq, int apic, int pin)
> +{
> +    struct irq_pin_list *entry, *prev;
> +
> +    for (entry = &irq_2_pin[irq]; ; entry = &irq_2_pin[entry->next]) {
> +        if ((entry->apic == apic) && (entry->pin == pin))
> +            break;
> +        BUG_ON(!entry->next);
> +    }
> +
> +    entry->pin = entry->apic = -1;
> +
> +    if (entry != &irq_2_pin[irq]) {
> +        /* Removed entry is not at head of list. */
> +        prev = &irq_2_pin[irq];
> +        while (&irq_2_pin[prev->next] != entry)
> +            prev = &irq_2_pin[prev->next];
> +        prev->next = entry->next;
> +    } else if (entry->next) {
> +        /* Removed entry is at head of multi-item list. */
> +        prev  = entry;
> +        entry = &irq_2_pin[entry->next];
> +        *prev = *entry;
> +        entry->pin = entry->apic = -1;
> +    } else
> +        return;
> +
> +    entry->next = irq_2_pin_free_entry;
> +    irq_2_pin_free_entry = entry - irq_2_pin;
> +}
> +
>  /*
>   * Reroute an IRQ to a different pin.
>   */
> @@ -2280,7 +2311,7 @@ int ioapic_guest_read(unsigned long phys
>  
>  int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
>  {
> -    int apic, pin, irq, ret, vector, pirq;
> +    int apic, pin, irq, ret, pirq;
>      struct IO_APIC_route_entry rte = { 0 };
>      unsigned long flags;
>      struct irq_desc *desc;
> @@ -2348,13 +2379,25 @@ int ioapic_guest_write(unsigned long phy
>          return 0;
>      }
>  
> -    if ( desc->arch.vector <= 0 || desc->arch.vector > LAST_DYNAMIC_VECTOR ) {
> -        add_pin_to_irq(irq, apic, pin);
> -        vector = assign_irq_vector(irq, NULL);
> -        if ( vector < 0 )
> -            return vector;
> +    if ( desc->arch.vector <= 0 || desc->arch.vector > LAST_DYNAMIC_VECTOR )
> +    {
> +        int vector = desc->arch.vector;
> +
> +        if ( vector < FIRST_HIPRIORITY_VECTOR )
> +            add_pin_to_irq(irq, apic, pin);
> +        else
> +            desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
> +        ret = assign_irq_vector(irq, NULL);
> +        if ( ret < 0 )
> +        {
> +            if ( vector < FIRST_HIPRIORITY_VECTOR )
> +                remove_pin_from_irq(irq, apic, pin);
> +            else
> +                desc->arch.vector = vector;
> +            return ret;
> +        }
>  
> -        printk(XENLOG_INFO "allocated vector %02x for irq %d\n", vector, irq);
> +        printk(XENLOG_INFO "allocated vector %02x for irq %d\n", ret, irq);
>      }
>      spin_lock(&dom0->event_lock);
>      ret = map_domain_pirq(dom0, pirq, irq,
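
For readers not familiar with irq_2_pin[]: it is an array-backed singly
linked list, with list links being array indices and spare slots forming
a free list threaded through the same array (as the patch's
irq_2_pin_free_entry handling shows). Below is a self-contained toy
model of remove_pin_from_irq()'s removal logic plus a matching add
(simplified names, made-up sizes, no exhaustion or duplicate checks); a
sketch, not the Xen implementation:

    #include <stdio.h>

    #define NR_IRQS      4                /* made-up sizes for the toy */
    #define PIN_MAP_SIZE 8

    struct pin_entry {
        int apic, pin;
        unsigned int next;                /* array index; 0 terminates */
    };

    static struct pin_entry pins[PIN_MAP_SIZE];
    static unsigned int free_entry = NR_IRQS;   /* head of the free list */

    static void add_pin(unsigned int irq, int apic, int pin)
    {
        struct pin_entry *entry = &pins[irq];

        while ( entry->next )
            entry = &pins[entry->next];
        if ( entry->pin != -1 )           /* head in use: take a spare slot */
        {
            entry->next = free_entry;
            entry = &pins[free_entry];
            free_entry = entry->next;
            entry->next = 0;
        }
        entry->apic = apic;
        entry->pin = pin;
    }

    static void remove_pin(unsigned int irq, int apic, int pin)
    {
        struct pin_entry *entry, *prev;

        for ( entry = &pins[irq]; ; entry = &pins[entry->next] )
        {
            if ( entry->apic == apic && entry->pin == pin )
                break;
            if ( !entry->next )
                return;                   /* not found (the patch BUG_ON()s) */
        }
        entry->pin = entry->apic = -1;

        if ( entry != &pins[irq] )        /* interior/tail: unlink via prev */
        {
            prev = &pins[irq];
            while ( &pins[prev->next] != entry )
                prev = &pins[prev->next];
            prev->next = entry->next;
        }
        else if ( entry->next )           /* head of multi-entry list: copy */
        {                                 /* the second entry over the head */
            prev = entry;
            entry = &pins[entry->next];
            *prev = *entry;
            entry->pin = entry->apic = -1;
        }
        else
            return;                       /* single-entry list: nothing to free */

        entry->next = free_entry;         /* put the freed slot back */
        free_entry = (unsigned int)(entry - pins);
    }

    int main(void)
    {
        for ( unsigned int i = 0; i < PIN_MAP_SIZE; i++ )
        {
            pins[i].apic = pins[i].pin = -1;
            pins[i].next = (i >= NR_IRQS && i + 1 < PIN_MAP_SIZE) ? i + 1 : 0;
        }
        add_pin(1, 0, 3);
        add_pin(1, 0, 5);                 /* same IRQ: takes a spare slot */
        remove_pin(1, 0, 3);              /* exercises the head-removal case */
        printf("irq 1 head: apic %d pin %d\n", pins[1].apic, pins[1].pin);
        return 0;
    }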



