Let me add a few quick comments before the US folks call it a day.
Tian, Kevin wrote:
>> From: Magenheimer, Dan (HP Labs Fort Collins)
>> Sent: March 10, 2006 2:45
>>
>> 1) Interrupts may happen at a rate of tens of thousands
>> per second. Just like all high frequency CPU operations
>> are coded with "fast paths" (hyperprivops and hyperreflection),
>> I think interrupt reflection (some call it injection)
>> needs to be implemented with a fast path. Unlike the
>> CPU ops, there is currently no fast path for external
>> interrupt reflection, though many of the CPU ops that
>> a guest performs (e.g. ivr, eoi, tpr) DO have fast paths.
>
> Yes, interrupt handling should be implemented with high performance.
Yes, this is the benefit of the event-channel-based solution. The more
frequently IRQs occur, the more obvious the performance gain from event
channels becomes. Since TPR/IVR handling is already done inside Xen in the
event-channel-based solution, there is no (or less) ring crossing after the
guest services an IRQ.
Reflecting an IRQ to the guest with event channels is just setting a bit in
shared memory, and notifications are batched (the guest is not immediately
scheduled), similar to setting vIRR.
Meanwhile, the guest extracts this information from the shared memory in
batches and calls do_IRQ for each pending event. The mechanism is almost the
same as Linux/IA64 IRQ handling (read the IVR and handle in batches).
>
>>
>> When an external interrupt arrives (e.g. Xen is executing
>> starting at IVT+3000), the vast majority of interrupts
>> should be able to be reflected or recorded using a fast
>> path. This is much harder to do with event channels than
>> by setting a bit in a hyperregister. (Sure you could
>> rewrite all the event channel code in assembly, but
>> then what is the point of sharing the C code?)
>
> For this point, I think the two paths (event channel vs. interrupt)
> are similar. Upon receiving a device interrupt, the Xen hypervisor
> saves CPU state, reads the IVR, and then jumps to the C handler. The C
> handler (ia64_handle_irq) checks whether that interrupt is owned
> by a guest. If yes, it either:
>   - sets the pending bit in vIRR and resumes to the guest's interrupt
>     handler (current behavior), or
>   - sets the pending bit in evtchn_pending (yes, only one extra array
>     lookup) and resumes to the guest's callback.
>
> In this case, the callback looks much like the xenlinux interrupt
> handler, with the difference that one is for events and the other for
> interrupts. So I don't see any extra difficulty for event channels in
> this case.
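The two-path decision Kevin outlines could be sketched like this. The
structure and function names here (vcpu_state, handle_guest_irq,
irq_to_port) are illustrative assumptions, not the actual Xen/ia64 symbols:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_VECTORS 256
#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Hypothetical per-vcpu state; layout is illustrative only. */
struct vcpu_state {
    unsigned long virr[NR_VECTORS / BITS_PER_LONG];
    unsigned long evtchn_pending[NR_VECTORS / BITS_PER_LONG];
    bool use_event_channel;
};

static void set_bit_in(unsigned long *map, int n)
{
    map[n / BITS_PER_LONG] |= 1UL << (n % BITS_PER_LONG);
}

/* Sketch of the C handler's decision after reading the IVR: a
 * guest-owned interrupt is recorded either in vIRR (current behavior)
 * or in evtchn_pending (one extra array lookup via the vector-to-port
 * mapping), and the guest is then resumed at its handler or callback. */
static void handle_guest_irq(struct vcpu_state *v, int vector,
                             const int *irq_to_port)
{
    if (v->use_event_channel)
        set_bit_in(v->evtchn_pending, irq_to_port[vector]); /* extra lookup */
    else
        set_bit_in(v->virr, vector);
    /* ...resume guest at its interrupt handler or event callback... */
}
```

Either way the hypervisor-side work is one bit set, which supports the
point that the two paths cost about the same in the hot path.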
>
>>
>> 2) Eddie commented that all the event channel code is already
>> used in Xenlinux/ia64. Not true. There is a separate
>> file (evtchn_ia64.c) that is used instead.
>
> OK, maybe I should say that's the end result rather than the current
> status. Once we turn to an event channel mechanism like x86's, the
> common evtchn.c will be reusable by ia64 with no changes. I previously
> sent out a patch making a small cleanup to evtchn.c.
:-) Yes, the current difference is partly because P2M/VP is not in yet, and
perhaps partly because event channels are still based on pseudo-IRQs for now.
Based on the Xen summit, this is changing; after that change (say Q1), the
two will be the same (i.e., nothing IA64-specific).
>
>>
>> 3) I don't think we should be trying to support machines and
>> configurations that Linux is not even yet able to support
>> adequately. We have plenty of work to do to get Xen/ia64
>> usable. And sharing IRQs between driver domains may be
>> necessary eventually, but it doesn't seem a huge restriction
>> in the short term to not allow different driver domains to share
>> the same IRQ.
>>
>
> I'm tempted to agree that we should build the complete mechanism step
> by step. For example, we could ask Tristan to first slim his patch down
> to contain only the consensus logic, like moving the IOSAPIC from
> dom0 to Xen. However, he would have to hold back the driver-domain
> pieces such as RTE sharing, since that part is still under discussion.
> He also needs to address the previous comments on the list about
> coding style.
>
> Then if the new patch is clean enough, it may go in first with
> discussion on rest stuff on-going.
Good suggestion!
>
> Thanks,
> Kevin
Thanks, Eddie
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel