
Re: [Xen-devel] [PATCH] x86/IRQ: make internally used IRQs also honor the pending EOI stack



On 28.11.2019 15:30, Roger Pau Monné wrote:
> On Thu, Nov 28, 2019 at 03:19:50PM +0100, Jan Beulich wrote:
>> On 28.11.2019 15:13, Roger Pau Monné wrote:
>>> On Thu, Nov 28, 2019 at 02:33:08PM +0100, Jan Beulich wrote:
>>>> On 28.11.2019 12:39, Roger Pau Monné wrote:
>>>>> On Thu, Nov 28, 2019 at 12:03:47PM +0100, Jan Beulich wrote:
>>>>>> At the time the pending EOI stack was introduced there were no
>>>>>> internally used IRQs which would have the LAPIC EOI issued from the
>>>>>> ->end() hook. This had then changed with the introduction of IOMMUs,
>>>>>> but the interaction issue was presumably masked by
>>>>>> irq_guest_eoi_timer_fn() frequently EOI-ing interrupts way too early
>>>>>> (which got fixed by 359cf6f8a0ec ["x86/IRQ: don't keep EOI timer
>>>>>> running without need"]).
>>>>>>
>>>>>> The problem is that with us re-enabling interrupts across handler
>>>>>> invocation, a higher priority (guest) interrupt may trigger while
>>>>>> handling a lower priority (internal) one. The EOI issued from
>>>>>> ->end() (for ACKTYPE_EOI kind interrupts) would then mistakenly
>>>>>> EOI the higher priority (guest) interrupt, breaking (among other
>>>>>> things) pending EOI stack logic's assumptions.
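
For context on the failure mode described above: the local APIC's EOI
register takes no vector argument - writing it always retires the
highest-priority interrupt currently in service. A minimal standalone
model of the scenario (illustrative C only, not Xen code; the vector
numbers are made up):

    #include <stdio.h>

    #define NR_VECTORS 256

    static int isr[NR_VECTORS]; /* in-service register, one flag per vector */

    static void lapic_deliver(int vec)
    {
        isr[vec] = 1; /* interrupt accepted: ISR bit gets set */
    }

    static void lapic_eoi(void)
    {
        /* EOI names no vector: it retires the highest-priority one. */
        for (int v = NR_VECTORS - 1; v >= 0; v--)
            if (isr[v]) {
                isr[v] = 0;
                printf("EOI retired vector %#x\n", v);
                return;
            }
    }

    int main(void)
    {
        lapic_deliver(0x28); /* low priority internal (e.g. IOMMU) IRQ */
        /* handler runs with interrupts re-enabled ... */
        lapic_deliver(0x58); /* higher priority guest IRQ nests on top */
        lapic_eoi();         /* ->end() of the 0x28 handler: retires 0x58 */
        lapic_eoi();         /* 0x28 is only retired by the next EOI */
        return 0;
    }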
>>>>>
>>>>> Maybe there's something that I'm missing, but shouldn't hypervisor
>>>>> vectors always be higher priority than guest ones?
>>>>
>>>> Depends - IOMMU ones imo aren't something that needs to be dealt
>>>> with urgently, so a little bit of delay won't hurt. There would
>>>> only be a problem if such interrupts could be deferred
>>>> indefinitely.
>>>>
>>>>> I see there's already a range reserved for high priority vectors
>>>>> ({FIRST/LAST}_HIPRIORITY_VECTOR), what's the reason for hypervisor
>>>>> interrupts not using this range?
>>>>
>>>> We'd quickly run out of high priority vectors on systems with
>>>> multiple (and perhaps indeed many) IOMMUs.
>>>
>>> Well, there's no limit on the number of high priority vectors, since
>>> this is all a software abstraction. It only matters that such vectors
>>> are higher priority than guest-owned ones.
>>>
>>> I have to take a look, but I would think that Xen-used vectors are
>>> the first ones to be allocated, and hence could start from
>>> FIRST_HIPRIORITY_VECTOR - 1 and go down from there.
>>
>> If this were the case, then we wouldn't have observed the issue this
>> patch tries to address (despite it being there). The IOMMUs for both
>> Andrew and me ended up using vector 0x28, below everything that e.g.
>> the IO-APIC RTEs got assigned.
> 
> I know it's not like that ATM, and hence I wonder whether it would be
> possible to make it so: Xen vectors get allocated down from
> FIRST_HIPRIORITY_VECTOR - 1 and then we won't have this issue.
> 
>> Also don't forget that we don't allocate
>> vectors continuously, but such that they'd get spread across the
>> different priority levels. (Whether that's an awfully good idea is a
>> separate question.)
> 
> Well, vectors used by Xen would be allocated downwards continuously
> from FIRST_HIPRIORITY_VECTOR - 1, and hence wouldn't be spread.
> 
> Guest-used vectors could continue to use the same allocation
> mechanism, since that's a different issue.
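
A sketch of the scheme being proposed here - Xen-internal vectors handed
out downwards from just below the high-priority range, so they always
sit above guest ones. The constant names mirror Xen's, but the values
and the allocator itself are purely illustrative:

    #define FIRST_HIPRIORITY_VECTOR 0xf0 /* illustrative value */
    #define FIRST_DYNAMIC_VECTOR    0x20 /* 0x00-0x1f are exceptions */

    static unsigned int next_xen_vector = FIRST_HIPRIORITY_VECTOR - 1;

    /* Hypothetical allocator for Xen-internal (e.g. IOMMU) vectors. */
    static int alloc_xen_vector(void)
    {
        if (next_xen_vector < FIRST_DYNAMIC_VECTOR)
            return -1; /* exhausted */
        return next_xen_vector--;
    }

    /* Guest vectors would then be constrained to stay below
     * next_xen_vector, keeping them at (weakly) lower priority. */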

The issue would go away only if guest vectors are at strictly
lower priority than Xen ones. I.e. we'd need to go in steps of
16. And there aren't that many vectors ... (I'm happy to see
changes here, but it'll need to be very careful ones.)
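
The "steps of 16" follow from how the APIC derives priority: only bits
7:4 of a vector number matter, so two vectors in the same 16-vector
block have equal priority. A small helper showing the arithmetic
(standard x86 behaviour, not Xen code):

    #include <stdbool.h>

    /* x86 APIC priority class = vector >> 4 (bits 7:4). */
    static inline unsigned int prio_class(unsigned int vector)
    {
        return vector >> 4;
    }

    static inline bool strictly_higher_prio(unsigned int a, unsigned int b)
    {
        return prio_class(a) > prio_class(b);
    }

    /* Vectors 0x00-0x1f being reserved for exceptions leaves classes
     * 2-15: just 14 classes to partition between Xen-internal and
     * guest vectors, hence "there aren't that many vectors". */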

Jan
