
Re: [Xen-devel] [PATCH 4/4] xen: events: allocate GSIs and dynamic IRQs from separate IRQ ranges.



On Tue, 2011-01-11 at 19:14 +0000, Konrad Rzeszutek Wilk wrote: 
> On Tue, Jan 11, 2011 at 05:20:16PM +0000, Ian Campbell wrote:

> >  retry:
> > -   /* This loop starts from the top of IRQ space and goes down.
> > -    * We need this b/c if we have a PCI device in a Xen PV guest
> > -    * we do not have an IO-APIC (though the backend might have them)
> > -    * mapped in. To not have a collision of physical IRQs with the Xen
> > -    * event channels start at the top of the IRQ space for virtual IRQs.
> > -    */
> > -   for (irq = top; irq > bottom; irq--) {
> > -           data = irq_get_irq_data(irq);
> > -           /* only 15->0 have init'd desc; handle irq > 16 */
> > -           if (!data)
> > -                   break;
> > -           if (data->chip == &no_irq_chip)
> > -                   break;
> > -           if (data->chip != &xen_dynamic_chip)
> > -                   continue;
> > -           if (irq_info[irq].type == IRQT_UNBOUND)
> > -                   return irq;
> > -   }
> > +   irq = irq_alloc_desc_from(first, -1);
> >  
> > -   if (irq == bottom)
> > -           goto no_irqs;
> > -
> > -   res = irq_alloc_desc_at(irq, -1);
> > -   if (res == -EEXIST) {
> > -           top--;
> > -           if (bottom > top)
> > -                   printk(KERN_ERR "Eating in GSI/MSI space (%d)!" \
> > -                           " Your PCI device might not work!\n", top);
> > -           if (top > NR_IRQS_LEGACY)
> > -                   goto retry;
> > +   if (irq == -ENOMEM && first > NR_IRQS_LEGACY) {
> > +           printk(KERN_ERR "Out of dynamic IRQ space and eating into GSI space. You should increase nr_irqs\n");
> > +           first = max(NR_IRQS_LEGACY, first - NR_IRQS_LEGACY);
> > +           goto retry;
> 
> You don't check for irq == -EEXIST. So if the specific IRQ (first) is already
> occupied you panic. Would it be better to check for that too in this goto and
> retry with that value?

It's not obvious due to the way diff has chosen to represent this change, but
this is checking the return value of irq_alloc_desc_from() and not
irq_alloc_desc_at(). The former cannot return -EEXIST.
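
For reference, the whole function ends up looking roughly like this once the
hunk above is applied (a sketch pieced together from the hunks quoted in this
mail; the initialisation of "first" via get_nr_irqs_gsi() under
CONFIG_X86_IO_APIC is from the earlier part of the patch, quoted here from
memory rather than verbatim):

static int xen_allocate_irq_dynamic(void)
{
	int first = 0;
	int irq;

#ifdef CONFIG_X86_IO_APIC
	/*
	 * Dom0 (and HVM guests) see real (respectively emulated) GSIs,
	 * so start the dynamic range above them to avoid colliding
	 * with the GSI range.
	 */
	first = get_nr_irqs_gsi();
#endif

retry:
	/*
	 * irq_alloc_desc_from() either returns the first free
	 * descriptor at or above "first" or a negative errno
	 * (-ENOMEM once the IRQ space is exhausted); it never
	 * returns -EEXIST.
	 */
	irq = irq_alloc_desc_from(first, -1);

	if (irq == -ENOMEM && first > NR_IRQS_LEGACY) {
		printk(KERN_ERR "Out of dynamic IRQ space and eating into GSI space. You should increase nr_irqs\n");
		first = max(NR_IRQS_LEGACY, first - NR_IRQS_LEGACY);
		goto retry;
	}

	if (irq < 0)
		panic("No available IRQ to bind to: increase nr_irqs!\n");

	return irq;
}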

> >     }
> >  
> > -   if (WARN_ON(res != irq))
> > -           return -1;
> > +   if (irq < 0)
> > +           panic("No available IRQ to bind to: increase nr_irqs!\n");
> >  
> >     return irq;
> > -
> > -no_irqs:
> > -   panic("No available IRQ to bind to: increase nr_irqs!\n");
> > -}
> > -
> > -static bool identity_mapped_irq(unsigned irq)
> > -{
> > -   /* identity map all the hardware irqs */
> > -   return irq < get_nr_hw_irqs();
> >  }
> >  
> >  static int xen_allocate_irq_gsi(unsigned gsi)
> >  {
> >     int irq;
> >  
> > -   if (!identity_mapped_irq(gsi) &&
> > -       (xen_initial_domain() || !xen_pv_domain()))
> > +   /*
> > +    * A PV guest has no concept of a GSI (since it has no ACPI
> > +    * nor access to/knowledge of the physical APICs). Therefore
> > +    * all IRQs are dynamically allocated from the entire IRQ
> > +    * space.
> > +    */
> > +   if (xen_pv_domain() && !xen_initial_domain())
> >             return xen_allocate_irq_dynamic();
> 
> OK. So with an IDE disk at IRQ 14 it can end up with irq = 5 (say the first
> five IRQs are used by xen-spinlock, xen-timer, xen-debug, xen-console, and
> something else). The gsi that gets stored via mk_pirq_info() in irq_info[5]
> ends up being 14, and pirq = 14. When you do EVTCHNOP_bind_pirq you end up
> passing in pirq=14 and binding that to Linux irq 5, right?

Actually, because the core registers the first NR_IRQS_LEGACY interrupts, the
PV interrupts start at 16, so 5 is a bad example, but the gist is otherwise
correct, yes.
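
To put corrected numbers on your example, the sequence looks roughly like the
fragment below. This is purely illustrative (the variable names are mine, and
the actual EVTCHNOP_bind_pirq call happens later, from the pirq startup path,
not at allocation time):

	struct evtchn_bind_pirq bind_pirq;
	int irq, rc;

	/*
	 * PV domU with a passed-through IDE disk on GSI 14:
	 * xen_allocate_irq_gsi(14) falls through to
	 * xen_allocate_irq_dynamic(), and since IRQs 0..15 are the
	 * pre-registered legacy descriptors the first dynamic
	 * allocation returns 16.
	 */
	irq = xen_allocate_irq_gsi(14);		/* irq == 16, not 5 */

	/*
	 * When the interrupt is later started up, the pirq recorded
	 * against irq 16 (i.e. 14) is handed to Xen and bound to a
	 * fresh event channel, which is what actually delivers the
	 * interrupt to Linux irq 16.
	 */
	bind_pirq.pirq  = 14;
	bind_pirq.flags = 0;	/* or BIND_PIRQ__WILL_SHARE if shareable */
	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind_pirq);
	/* on success, bind_pirq.port is the event channel now backing irq 16 */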

Ian.





 

