
Re: [Xen-devel] Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC



"Jan Beulich" <JBeulich@xxxxxxxxxx> writes:

>>>> Yinghai Lu <yhlu.kernel@xxxxxxxxx> 19.06.09 07:32 >>>
>>Doesn't Xen support per-CPU IRQ vectors?
>
> No.
>
>>Got the following from Xen 3.3 / SLES 11:
>>
>>igb 0000:81:00.0: PCI INT A -> GSI 95 (level, low) -> IRQ 95
>>igb 0000:81:00.0: setting latency timer to 64
>>igb 0000:81:00.0: Intel(R) Gigabit Ethernet Network Connection
>>igb 0000:81:00.0: eth9: (PCIe:2.5Gb/s:Width x4) 00:21:28:3a:d8:0e
>>igb 0000:81:00.0: eth9: PBA No: ffffff-0ff
>>igb 0000:81:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
>>vendor=8086 device=3420
>>(XEN) irq.c:847: dom0: invalid pirq 94 or vector -28
>>igb 0000:81:00.1: PCI INT B -> GSI 94 (level, low) -> IRQ 94
>>igb 0000:81:00.1: setting latency timer to 64
>>(XEN) physdev.c:87: dom0: map irq with wrong vector -28
>>map irq failed
>>(XEN) physdev.c:87: dom0: map irq with wrong vector -28
>>map irq failed
>>
>>The system normally needs a lot of MSI-X interrupts; with the current
>>mainline kernel it will need about 360 IRQs.
>
> Do you mean 360 connected devices, or just 360 IO-APIC pins (most of
> which are usually unused)? In the latter case, devices using MSI (i.e. not
> using high-numbered IO-APIC pins) should work, while devices connected
> to IO-APIC pins numbered 256 and higher won't work in SLE11 as-is.
> That limitation was recently fixed in the 3.5-unstable tree. The limit of
> 256 active vectors, however, continues to exist, so the former case
> would still not be supported by Xen.

Good question.  I know YH had a system a few years ago that exceeded 256
vectors.  But in this case it really could be either.
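
To make the limit concrete, here is a toy sketch -- my own illustration, not
actual Xen or Linux code, and the reserved-vector boundaries are just
placeholders -- of why a single system-wide vector table tops out well under
360 IRQs.  As a side note, -ENOSPC happens to be -28, which at least matches
the "wrong vector -28" lines in the log above, though I haven't checked that
this is really where the value comes from.

/*
 * Toy illustration only -- NOT Xen or Linux source.  It sketches why a
 * single, system-wide x86 vector table caps the number of active IRQs,
 * and why a request for ~360 IRQs cannot all be satisfied from it.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_VECTORS     256   /* the x86 IDT has 256 entries; a vector is one byte */
#define FIRST_DYNAMIC  0x20  /* 0x00-0x1f are CPU exceptions */
#define LAST_DYNAMIC   0xef  /* assume the top vectors are reserved for system use */

static bool vector_in_use[NR_VECTORS];

/* Grab any free vector from the single global table, or fail with -ENOSPC. */
static int assign_global_vector(void)
{
    for (int v = FIRST_DYNAMIC; v <= LAST_DYNAMIC; v++) {
        if (!vector_in_use[v]) {
            vector_in_use[v] = true;
            return v;
        }
    }
    return -ENOSPC;   /* -ENOSPC == -28, the value seen in the log above */
}

int main(void)
{
    int wanted = 360;   /* roughly what this box needs with a mainline kernel */

    for (int irq = 0; irq < wanted; irq++) {
        int vec = assign_global_vector();
        if (vec < 0) {
            printf("irq %d: map irq with wrong vector %d (allocation failed)\n",
                   irq, vec);
            return 1;
        }
    }
    printf("all %d irqs got vectors\n", wanted);
    return 0;
}

With per-CPU vector allocation the table above becomes one bitmap per CPU, so
the ceiling scales roughly with the number of CPUs instead of sticking at a
single 256-entry space -- which is why the per-CPU question matters for a box
like this one.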

Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

