
Re: [Xen-devel] NR_PIRQS vs. NR_IRQS


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Fri, 14 Nov 2008 08:00:37 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 14 Nov 2008 00:01:01 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclGLjird2KETbIhEd2xqwAWy6hiGQAANtbc
  • Thread-topic: [Xen-devel] NR_PIRQS vs. NR_IRQS

On 14/11/08 07:54, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>>> I agree with keeping this naming distinction of course, although I think
>>> allowing NR_IRQS > NR_VECTORS right now is not very useful. But maybe you
>>> have a box in mind that needs it?
>> 
>> I sent a mail on this a few days ago: IBM was testing 96-CPU
>> support (a 4-node system), and it crashed because a PIRQ ended up in
>> DYNIRQ space (from the kernel's perspective), since the system has
>> 300+ IO-APIC pins. While the crash itself ought to be fixed by the
>> subsequent patch, it's clear that none of the devices with an
>> accumulated pin number greater than 255 will actually work on that
>> system.
> 
> Oh dear. :-D

Is fixing this actually any harder than just bumping NR_IRQS/NR_PIRQS in Xen
and NR_PIRQS in Linux? Have IRQs and vectors somehow become accidentally tied
together in Xen?

These parameters should probably be build-time configurable.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel