
Re: [Xen-devel] irq_guest_eoi_timer interaction with MSI



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 13.11.08 17:50 >>>
>On 13/11/08 16:43, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> Up to now, MSI didn't require an EOI, and devices that support masking (in
>> particular all MSI-X ones) wouldn't generally require an EOI even with the
>> patch sent earlier. What you propose would make them all require an EOI
>> all of a sudden, despite them needing hypervisor assistance only when
>> the interrupt got masked.
>> 
>>> Also I'll add we currently do a hypercall for every level-triggered IO-APIC
>>> IRQ, which was really all we supported until recently. Seemed to work well
>>> enough performance-wise in that case.
>
>So we'd add a pirq-indexed bitmap to mitigate that. Whether we use
>PHYSDEVOP_irq_eoi or EVTCHNOP_unmask, we need a new shared-memory bitmap,
>right? Might as well use irq_eoi and index by pirq, I'd say.

Hmm, I'm still not convinced: with what you propose, it's unclear to me who
would clear the bit in that bitmap, and when, for the 'temporarily masked'
case. Anyway, unless you get to implement your version first (and thus
convince me that things will work out correctly), I'll implement what I
think is appropriate here once I find the time to do so.
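To make the concern concrete, here is a rough, self-contained sketch (plain C;
all names, i.e. pirq_needs_eoi, notify_eoi, NR_PIRQS, are made up for
illustration and are not the actual guest or Xen code) of the end-of-interrupt
path such a pirq-indexed bitmap would imply on the guest side; the comment
marks where the 'who clears the bit, and when' question sits:

/*
 * Rough sketch only: models the guest-side EOI decision implied by a
 * pirq-indexed "needs EOI" bitmap. The real bitmap would live in shared
 * memory and the notification would be the PHYSDEVOP_irq_eoi hypercall;
 * here both are stubbed so the example stands alone.
 */
#include <limits.h>
#include <stdio.h>

#define NR_PIRQS      256                   /* illustrative bound only */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Stand-in for the shared-memory bitmap: one bit per pirq. */
static unsigned long pirq_needs_eoi[NR_PIRQS / BITS_PER_LONG];

static int test_bit(unsigned int nr, const unsigned long *map)
{
    return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1UL;
}

static void set_bit(unsigned int nr, unsigned long *map)
{
    map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* Stand-in for the EOI notification hypercall. */
static void notify_eoi(unsigned int pirq)
{
    printf("EOI notification for pirq %u\n", pirq);
}

static void end_pirq(unsigned int pirq)
{
    if (!test_bit(pirq, pirq_needs_eoi))
        return;             /* e.g. unmasked MSI-X: no assistance needed */

    notify_eoi(pirq);
    /*
     * Open question for the 'temporarily masked' case: who clears the
     * bit again, and when? Until someone does, every further interrupt
     * on this pirq keeps taking the notification path.
     */
}

int main(void)
{
    set_bit(42, pirq_needs_eoi);    /* as if the source had been masked */
    end_pirq(42);                   /* takes the notification path */
    end_pirq(43);                   /* bit clear: returns immediately */
    return 0;
}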

The other concern I have (as a consequence of the NR_IRQS related
discussion) is that adding an NR_IRQS- (or NR_PIRQS-) indexed bitmap
to shared_info seems problematic wrt forward compatibility: you hinted at
making the value build-time configurable, and even if it remained a
manifest constant, any value chosen (you suggested 1024) would inevitably
turn out to be too small at some future point in time (as will unavoidably
be the case for the number of event channels).
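To illustrate the compatibility point, a hypothetical layout (not the actual
shared_info definition; the field names and sizes are made up) showing how a
bitmap sized by a manifest constant becomes part of the guest/hypervisor ABI:

/* Hypothetical layout only, not the real shared_info. */
#include <stdint.h>

#define NR_PIRQS 1024   /* the suggested value; fixed into the layout below */

struct shared_info_sketch {
    /*
     * ... existing fields, including the event channel bitmaps, whose
     * size is likewise bounded by a build-time constant ...
     */

    /* Proposed addition: one 'needs EOI' bit per pirq. */
    uint64_t pirq_needs_eoi[NR_PIRQS / 64];

    /*
     * Any field placed after the bitmap shifts if NR_PIRQS is ever
     * raised, so an old guest and a newer hypervisor would disagree
     * about the structure's layout.
     */
    uint32_t fields_added_later;
};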

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

