
[Xen-devel] Using handle_fasteoi_irq for pirqs



 On 08/25/2010 12:52 AM, Jan Beulich wrote:
> I do however agree that using handle_level_irq() is problematic
> (see 
> http://lists.xensource.com/archives/html/xen-devel/2010-04/msg01178.html),
> but as said there I think using the fasteoi logic is preferable.

I've been looking at this again.

For non-pirq interrupts, fasteoi seems like a solid win.  It looks like
an overall simplification and I haven't seen any problems.

However, I've had more trouble extending this to pirq.  My first attempt
appeared to work in emulation, but when I run it on real hardware, msi
interrupts are not getting through.  If I boot with "pci=nomsi" then it
sometimes works, but it often crashes Xen (see separate mail).

Part of the problem is that I'm not really sure what the various
irq_chip functions are really supposed to do, and the documentation is
awful.

.startup and .shutdown I understand, and I think they're being called
when we expect them to be (ie, when a driver registers an irq for the
first time).

Using .startup/.shutdown for enable/disable seems very heavyweight.  Do
we really want to be rebinding the pirq each time?  Isn't masking/unmasking
the event channel sufficient?


At the moment my xen_evtchn_do_upcall() is masking and clearing the
event channel before calling into generic_handle_irq_desc(), which will
call handle_fasteoi_irq fairly directly.  That runs straight through, and
the pirq_chip's eoi just does an EOI on the pirq if Xen says it needs one.

But apparently this isn't enough.  Is there anything else I should be doing?

(I just implemented the PHYSDEVOP_pirq_eoi_gmfn method of getting the
pirq eoi flags, but I haven't tested it yet.  I'm also not really sure
what the advantage of it is.)

Thanks,
    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
