
Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again



On 6/9/08 08:48, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>> You could do a bunch of that just by distributing them from the single
>> callback IRQ. But I suppose it would be nice to move to a
>> one-IRQ-per-evtchn model. You'd need to keep the ABI of course, so you'd
>> need a feature flag or something.
> 
> Yes, it should work as follows: use one IRQ per evtchn and turn off the
> usual PV per-VCPU selector word and evtchn_pending master flag (as we have
> already disabled the evtchn_mask master flag). Then all evtchn re-routing
> would be handled through the IO-APIC, like all other emulated IRQs.

Oh, another way I like (on the Xen side of the interface, at least) is to
keep the PV evtchn VCPU binding mechanisms and per-VCPU selector word and
evtchn_pending master flag. But then do 'direct FSB injection' to specific
VCPU LAPICs. That is, each VCPU would receive its event-channel
notifications on a pre-arranged vector via a simulated message to its LAPIC.
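On the guest side, the handler for that pre-arranged vector would demux much
like the existing PV event-channel scan over the per-VCPU selector word and
the pending bitmap. A minimal sketch of that scan, assuming a simplified
two-level layout (the names `vcpu_info_sketch` and `demux_evtchns` are
illustrative, not the real shared_info structures):

```c
#include <stdint.h>

/* Hypothetical, simplified stand-in for the shared_info page layout;
 * field and constant names are illustrative only. */
#define BITS_PER_WORD   64
#define NR_EVTCHN_WORDS 8

struct vcpu_info_sketch {
    uint64_t evtchn_pending_sel;   /* one bit per pending word below */
};

static uint64_t evtchn_pending[NR_EVTCHN_WORDS];
static uint64_t evtchn_mask[NR_EVTCHN_WORDS];

/* Called from the pre-arranged per-VCPU vector: scan the selector word,
 * then each flagged pending word, invoking a handler per event channel. */
static void demux_evtchns(struct vcpu_info_sketch *v,
                          void (*handler)(unsigned int port))
{
    /* Atomically claim the selector word so no edge is lost. */
    uint64_t sel = __atomic_exchange_n(&v->evtchn_pending_sel, 0,
                                       __ATOMIC_ACQ_REL);
    while (sel) {
        unsigned int word = __builtin_ctzll(sel);
        sel &= sel - 1;                      /* clear lowest set bit */

        /* Claim the pending word, then skip any masked channels. */
        uint64_t pend = __atomic_exchange_n(&evtchn_pending[word], 0,
                                            __ATOMIC_ACQ_REL)
                        & ~evtchn_mask[word];
        while (pend) {
            unsigned int bit = __builtin_ctzll(pend);
            pend &= pend - 1;
            handler(word * BITS_PER_WORD + bit);
        }
    }
}
```

This nested scan is exactly the "demux" layer referred to below: the OS takes
one CPU-local interrupt and fans it out into per-event-channel handlers.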

This is certainly easier to implement on the Xen side, and I would say
neater too. However, its significant drawback is that it is unlikely to fit
well with existing OS IRQ subsystems:
 * The OS still needs to demux interrupts, so it effectively has a nested
layer of interrupt delivery (if we want the specific event channels to be
visible as distinct interrupts to the OS).
 * There is a PV-specific way of reassigning 'IRQs' to VCPUs. The usual OS
methods of tickling the IO-APIC will not apply.
 * The OS may well not even have a way of allocating an interrupt vector or
otherwise registering interest in, and receiving CPU-specific interrupts on,
a specific interrupt vector.

Obviously this could all be worked through for Linux guests by extending our
existing pv_ops implementation a little; I think it would fit well. But I
doubt it would work well for other OSes, where we can make less far-reaching
changes (Windows, for example).

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel