
Re: [Xen-devel] [PATCH] x86/IRQ: make internally used IRQs also honor the pending EOI stack



On 28.11.2019 12:39, Roger Pau Monné wrote:
> On Thu, Nov 28, 2019 at 12:03:47PM +0100, Jan Beulich wrote:
>> At the time the pending EOI stack was introduced there were no
>> internally used IRQs which would have the LAPIC EOI issued from the
>> ->end() hook. This had then changed with the introduction of IOMMUs,
>> but the interaction issue was presumably masked by
>> irq_guest_eoi_timer_fn() frequently EOI-ing interrupts way too early
>> (which got fixed by 359cf6f8a0ec ["x86/IRQ: don't keep EOI timer
>> running without need"]).
>>
>> The problem is that with us re-enabling interrupts across handler
>> invocation, a higher priority (guest) interrupt may trigger while
>> handling a lower priority (internal) one. The EOI issued from
>> ->end() (for ACKTYPE_EOI kind interrupts) would then mistakenly
>> EOI the higher priority (guest) interrupt, breaking (among other
>> things) pending EOI stack logic's assumptions.
> 
> Maybe there's something that I'm missing, but shouldn't hypervisor
> vectors always be higher priority than guest ones?

Depends - IOMMU ones imo aren't something that needs dealing with
urgently, so a little bit of delay won't hurt. There would only be
a problem if such interrupts could be deferred indefinitely.
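
For illustration, a self-contained toy model of the LAPIC behaviour
at issue (not Xen's code; the vector numbers and helper names are
made up). The EOI register takes no vector argument - it simply
retires the highest-priority in-service vector - so the EOI issued
from an internal IRQ's ->end() can retire a guest vector that got
stacked on top of it:

/* Toy model, not Xen code: a LAPIC EOI always retires the
 * highest-priority in-service vector, whichever that is. */
#include <stdio.h>

static unsigned char isr[8]; /* toy in-service register, top of stack last */
static int depth;

static void lapic_ack(unsigned char vec) { isr[depth++] = vec; }

static void lapic_eoi(void)
{
    /* No vector argument: the topmost in-service entry is retired. */
    printf("EOI retires vector %#x\n", isr[--depth]);
}

int main(void)
{
    lapic_ack(0x30); /* low-prio internal (e.g. IOMMU) vector acked */
    /* Interrupts are re-enabled across the handler, so: */
    lapic_ack(0x80); /* higher-prio guest vector acked; its LAPIC EOI
                        is deferred via the pending EOI stack */
    /* The internal IRQ's ->end() now issues its EOI ... */
    lapic_eoi();     /* ... which retires 0x80, the guest's vector */
    return 0;
}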

> I see there's already a range reserved for high priority vectors
> ({FIRST/LAST}_HIPRIORITY_VECTOR), what's the reason for hypervisor
> interrupts not using this range?

We'd quickly run out of high priority vectors on systems with
multiple (and perhaps indeed many) IOMMUs.
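
As a back-of-envelope illustration (the numbers below are assumptions
for the sake of the example, not Xen's actual constants):

/* Hypothetical arithmetic sketch; the range bounds and IOMMU count
 * are illustrative, not taken from Xen's headers. */
#include <stdio.h>

int main(void)
{
    unsigned int first = 0xf1, last = 0xf7; /* assumed hi-prio range */
    unsigned int total = last - first + 1;  /* 7 vectors, also needed
                                               for IPIs and the like */
    unsigned int iommus = 8;                /* e.g. 4 sockets x 2 IOMMUs */

    printf("hi-prio vectors: %u, IOMMUs wanting one each: %u\n",
           total, iommus);
    return 0;
}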

> IMO it seems troublesome that pending guests vectors can delay the
> injection of hypervisor interrupt vectors.

As per above - depends.

Jan
