
[Xen-devel] Re: [GIT PULL] Xen APIC hooks (with io_apic_ops)



* Avi Kivity <avi@xxxxxxxxxx> wrote:

> Ingo Molnar wrote:
>>> IO APIC operations are not even slightly performance critical? Are  
>>> they ever used on the interrupt delivery path?
>>>     
>>
>> Since they are not performance critical, why doesn't Xen catch the 
>> IO-APIC accesses and virtualize the device?
>>
>> If you want to hook into the IO-APIC code at such a low level, why 
>> don't you hook into the _hardware_ API - i.e. catch the setup/routing 
>> modifications to the IO-APIC register space? No Linux changes would be 
>> needed in that case.
>>   
>
> When x2apic is enabled and EOI broadcast is disabled, the IO-APIC 
> does become a hot path - it needs to be written to for each 
> level-triggered interrupt EOI.  In this case I might want to 
> paravirtualize the EOI write to exit only if an interrupt is 
> pending, and otherwise communicate via shared memory.
>
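
For illustration, the paravirtualized EOI described above might look 
like this on the guest side. This is only a rough sketch: the shared 
flag, its protocol and every name here are hypothetical, not an actual 
Xen or KVM interface.

#include <stdatomic.h>

/*
 * One byte in a guest/host shared page.  Hypothetical protocol: the
 * host sets it when it injects an interrupt while nothing else is
 * pending, meaning "this EOI need not be delivered synchronously".
 */
static _Atomic unsigned char eoi_can_be_lazy;

static void pv_level_irq_eoi(volatile unsigned int *eoi_reg)
{
	/*
	 * Fast path: the host said nothing else is pending, so
	 * consuming the flag in shared memory is enough - no vmexit.
	 */
	if (atomic_exchange(&eoi_can_be_lazy, 0))
		return;

	/*
	 * Slow path: another interrupt is pending; the MMIO write
	 * traps and the hypervisor performs the real EOI.
	 */
	*eoi_reg = 0;
}
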
> We do something similar for Windows (by patching the guest) very 
> successfully; Windows likes to touch the APIC TPR ~100,000 times 
> per second, usually without triggering an interrupt.  We hijack 
> these writes, do the checks in guest context, and only exit if the 
> TPR write would actually trigger an interrupt.
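
For comparison, that hijacked TPR write reduces to roughly the 
following in guest context. Again just a sketch - the shared layout 
and names are made up, and real hardware compares priority classes 
(the high nibble), which is simplified away here.

/* Guest/host shared state; the layout is hypothetical. */
struct shared_apic_state {
	unsigned char tpr;          /* mirror of the task priority */
	unsigned char pending_prio; /* highest pending interrupt priority */
};

static struct shared_apic_state *shared;

/* Hypothetical slow path: ask the hypervisor to re-evaluate. */
extern void hypercall_sync_tpr(void);

static void pv_tpr_write(unsigned char new_tpr)
{
	shared->tpr = new_tpr;

	/*
	 * Exit only when lowering the TPR unmasks a pending
	 * interrupt; the ~100,000 writes per second that change
	 * nothing stay entirely in guest context.
	 */
	if (shared->pending_prio > new_tpr)
		hypercall_sync_tpr();
}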

I suspect you are aware that this is about the IO-APIC, not the local 
APIC. The local APIC methods are already driver-ized - and since they 
sit closer to the CPU, they matter more for performance.
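
Condensed to just the register accessors, that driver layer looks 
roughly like the struct below - illustrative only, the real struct 
apic in arch/x86 has many more callbacks and a different layout:

struct apic_driver {
	const char   *name;
	unsigned int (*read)(unsigned int reg);
	void         (*write)(unsigned int reg, unsigned int val);
};

/*
 * xapic implements these with MMIO accesses, x2apic with MSR
 * accesses - same interface, so a paravirtualized variant could
 * slot in the same way.
 */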

> (kvm will likely gain x2apic support in 2.6.32; patches have 
> already been posted)

Ok - this points in the direction of Jeremy's io-apic driver 
abstraction being the right long-term approach. We already have a few 
IO-APIC quirks that could be cleaned up by a proper driver interface.
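
Roughly the shape such an io-apic abstraction takes: the register 
window accessors become overridable ops. A simplified sketch - the 
names and details are illustrative and not taken verbatim from 
Jeremy's patch:

struct io_apic_ops {
	unsigned int (*read)(unsigned int apic, unsigned int reg);
	void (*write)(unsigned int apic, unsigned int reg,
		      unsigned int value);
	void (*modify)(unsigned int apic, unsigned int reg,
		       unsigned int value);
};

/* Hypothetical helper returning the MMIO window of IO-APIC 'apic'. */
extern volatile unsigned int *io_apic_base(unsigned int apic);

/* Native access: select the register, then use the data word. */
static unsigned int native_io_apic_read(unsigned int apic, unsigned int reg)
{
	volatile unsigned int *base = io_apic_base(apic);

	base[0] = reg;		/* index register at offset 0x00 */
	return base[4];		/* data window at offset 0x10 */
}

static struct io_apic_ops io_apic_ops = {
	.read	= native_io_apic_read,
	/* .write and .modify analogous; Xen would override all
	 * three with hypercall-based versions at boot. */
};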

        Ingo

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

