
Re: [Xen-devel] another regression from IRQ handling changes



On 22/09/2009 09:53, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:

>> If it wasn't broken before, it was probably broken by Xiantao's follow-up
>> patch to fix NetBSD dom0 (at least as much as possible, to avoid a nasty
>> regression with NetBSD). What we probably need to do is have a 256-entry
>> dom0_vector_to_dom0_irq[] array, and allocate an entry from that for every
>> fresh irq we see at PHYSDEVOP_alloc_irq_vector. Then when the vector gets
>> passed back in on ioapic writes, we index into that array rather than using
>> naked rte.vector.
> 
> Yeah, that's what I would view as the solution to get the old functionality
> back. But my question also extended to possible ways of getting beyond 256
> here, especially ones that are also acceptable to the pv-ops Dom0, which
> I'm much less certain about.
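
For reference, a minimal sketch of the lookup-table idea described in the
quote above might look like the following. This is purely illustrative: the
names, the -1 sentinel and the hook points are assumptions for the example,
not the actual Xen code.

    /* Illustrative sketch only: dom0_vector_to_dom0_irq[] maps the
     * 'vector' handed out at PHYSDEVOP_alloc_irq_vector back to the
     * dom0 irq, so ioapic writes need not trust the naked rte.vector. */
    #define NR_VECTORS 256

    static int dom0_vector_to_dom0_irq[NR_VECTORS];

    static void dom0_vector_map_init(void)
    {
        int vec;
        for ( vec = 0; vec < NR_VECTORS; vec++ )
            dom0_vector_to_dom0_irq[vec] = -1;      /* -1 == unused slot */
    }

    /* Called from the PHYSDEVOP_alloc_irq_vector handler (assumed hook). */
    static int dom0_irq_to_vector(int irq)
    {
        int vec;

        /* Reuse the mapping if this irq has been seen before. */
        for ( vec = 0; vec < NR_VECTORS; vec++ )
            if ( dom0_vector_to_dom0_irq[vec] == irq )
                return vec;

        /* Otherwise allocate an entry for every fresh irq we see. */
        for ( vec = 0; vec < NR_VECTORS; vec++ )
            if ( dom0_vector_to_dom0_irq[vec] == -1 )
            {
                dom0_vector_to_dom0_irq[vec] = irq;
                return vec;
            }

        return -1;                                  /* table full */
    }

    /* Called from the ioapic write handler: index into the array
     * rather than using the naked rte.vector. */
    static int rte_vector_to_dom0_irq(unsigned int rte_vector)
    {
        return (rte_vector < NR_VECTORS)
               ? dom0_vector_to_dom0_irq[rte_vector] : -1;
    }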

Well, we could assume that if we see an irq greater than 256 at
PHYSDEVOP_alloc_irq_vector then the dom0 is dealing in GSIs, and in that
case the 'vector' we return, and which then gets passed to ioapic_write, is
rather redundant. We can work out the GSI from the ioapic rte that is being
modified, after all. So perhaps we could just flip into a non-legacy mode of
operation in that case: reserve a single magic 'vector' value to return from
PHYSDEVOP_alloc_irq_vector, and if we see that value in the ioapic write
handler then we know to assume that guest pirq == gsi.
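
A rough sketch of what that non-legacy mode could look like, again only as
an assumption-laden illustration (the magic value, the ioapic_gsi_base()
helper and the legacy fallbacks are made up for the example):

    /* Illustrative sketch only: reserve one magic 'vector' to signal
     * "guest pirq == gsi" mode; every name here is an assumption. */
    #define MAGIC_GSI_VECTOR 0xff                    /* reserved, never allocated */

    int legacy_alloc_vector(int irq);                /* table-based path, as above */
    int legacy_vector_to_irq(unsigned int vector);
    unsigned int ioapic_gsi_base(unsigned int apic); /* assumed helper */

    /* PHYSDEVOP_alloc_irq_vector handler (sketch). */
    static int alloc_irq_vector(int irq)
    {
        if ( irq > 256 )
            return MAGIC_GSI_VECTOR;                 /* dom0 is dealing in GSIs */

        return legacy_alloc_vector(irq);             /* old behaviour for NetBSD */
    }

    /* ioapic RTE write handler (sketch): recover the pirq. */
    static int rte_write_to_pirq(unsigned int apic, unsigned int pin,
                                 unsigned int rte_vector)
    {
        if ( rte_vector == MAGIC_GSI_VECTOR )
            return ioapic_gsi_base(apic) + pin;      /* guest pirq == gsi */

        return legacy_vector_to_irq(rte_vector);     /* legacy lookup */
    }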

The legacy case is just to handle NetBSD, which throws non-GSIs at the
PHYSDEVOP_alloc_irq_vector interface, and I doubt it has ever worked on
those mega-big systems anyway. So I don't think we're dealing with a
regression there.

 -- Keir


