On Thursday 09 February 2006 11:05, Dong, Eddie wrote:
> Tristan Gingold wrote:
> > On Friday 03 February 2006 17:13, Alex Williamson wrote:
> >> On Fri, 2006-02-03 at 09:33 +0100, Tristan Gingold wrote: [...]
> >> I agree that we can't hit this problem right now, but it's easy to
> >> fix and would be one less thing we might miss when we do enable
> >> driver domains.  It looks like the block of code to mask the vector
> >> could be copied identically into the section to unmask the vector,
> >> with the appropriate s/mask_vec/unmask_vec and setting of the rte
> >> values.  I guess it keeps catching my eye because the mask and
> >> unmask are not symmetric.  Thanks,
> >
> > Hi,
> >
> > I have slightly modified the patch so that it looks almost symmetric.
> >
> > Thanks,
> > Tristan.
>
> Tristan:
> Great work!  Sorry I haven't found time to go through it all yet.
> A quick question: why do we need to do vcpu_wake() immediately after
> IRQ injection?  In the Xen design, this API is mainly used for VCPU
> pause/unpause and manual operations where it is reasonable to
> disturb/bypass the scheduler's decision.  This disturbance is costly:
> the scheduler, when triggered at the next time tick, will go back to
> its normal decision tree, which probably means preempting dom0's
> quantum.  What x86 does is wait for the scheduler to make the
> decision.  I know the original code also does it this way, but it is
> not an architectural requirement; rather, it is a shortcut from the
> previous implementation, and I think it is time to revise it now.
Yes, I just copied this from the original code.
However, we should also take IPIs into consideration (unless we go
directly to event channels).
So this point certainly has to be revisited.
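To make the trade-off concrete, here is a rough sketch of the two
delivery policies (vcpu_pend_interrupt and vcpu_wake are from the
patch; the defer_to_scheduler knob is hypothetical, just to show the
alternatives):

    /* Sketch only: defer_to_scheduler is a hypothetical policy flag,
     * not an existing Xen interface. */
    static void
    inject_guest_irq(struct vcpu *v, int vec, int defer_to_scheduler)
    {
        vcpu_pend_interrupt(v, vec);    /* mark the interrupt pending */
        if (!defer_to_scheduler)
            vcpu_wake(v);    /* bypass the scheduler: immediate
                                delivery, but may preempt dom0 */
        /* else: the pending interrupt is picked up when the scheduler
         * next runs the vcpu, as on x86, at the cost of latency. */
    }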
> +xen_reflect_interrupt_to_domains (ia64_vector vector)
> +{
> +	struct iosapic_intr_info *info = &iosapic_intr_info[vector];
> +	struct iosapic_rte_info *rte;
> +	int res = 1;
> +
> +	list_for_each_entry(rte, &info->rtes, rte_list) {
> +		if (rte->vcpu != NULL) {
> +			if (rte->vcpu == VCPU_XEN)
> +				res = 0;
> +			else {
> +				/* printf ("Send %d to vcpu as %d\n",
> +				   vector, rte->vec); */
> +				/* FIXME: vcpus should really be
> +				   interrupted.  This currently works
> +				   because only domain0 receives
> +				   interrupts and domain0 runs on CPU#0,
> +				   which receives all the interrupts... */
> +				vcpu_pend_interrupt(rte->vcpu,
> +						    rte->vcpu_vec);
> +				vcpu_wake(rte->vcpu);
> +			}
>
> Other minor comments:
> 1: "+#define VCPU_XEN ((struct vcpu *)1)" looks strange to me.
>    Furthermore, I'd like to put a bit in the RTE indicating ownership
>    of the IRQ; is there anything else you considered?
Ownership is indicated by the vcpu field.
At this time, VCPU_XEN is not used.  It could be used in the future to
indicate that the IRQ is owned by Xen itself.
> 2: Similar to #1, checking the IRQ vector in the following code
>    (if (vector == IA64_TIMER_VECTOR || vector == IA64_IPI_VECTOR)) is
>    too hardcoded.  Today we only have 2 IRQs in the hypervisor, but we
>    will actually need more, such as the platform management interrupt
>    Alex mentioned previously for hotplug, or a thermal sensor IRQ.  So
>    we don't want to see a long list of checks here.  My suggestion is
>    to adopt a mechanism similar to x86's, i.e. like __do_IRQ_guest in
>    arch/x86/irq.c.  The detailed implementation can be
>    architecture-dependent: x86 uses "desc->status & IRQ_GUEST", but we
>    may not.
Ok.
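Something along those lines, sketched here for ia64 (IRQ_GUEST,
irq_desc and __do_IRQ_guest are the x86 names from arch/x86/irq.c;
xen_handle_irq is a hypothetical placeholder for whatever we keep in
the hypervisor):

    /* Sketch of a flag-based dispatch, loosely following x86's
     * __do_IRQ_guest; the ia64 equivalent of IRQ_GUEST may differ. */
    static void
    ia64_do_IRQ(unsigned int vector)
    {
        irq_desc_t *desc = &irq_desc[vector];

        if (desc->status & IRQ_GUEST) {
            /* Guest-owned: reflect it, with no per-vector if/else. */
            __do_IRQ_guest(vector);
            return;
        }
        /* Otherwise Xen handles it itself (timer, IPI, PMI, ...). */
        xen_handle_irq(vector);
    }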
> Anyway, keeping the capability for a machine IRQ to be bound to
> multiple guests, as x86 does today, is better, and it is not so
> difficult.  You may also be able to reuse some code there :-)
Added to my TODO list, since we can't trigger or test such a case now.
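For reference, the x86 side binds several guests to one machine IRQ
roughly like this (a sketch after irq_guest_action_t and
send_guest_pirq in arch/x86/irq.c; the fields and constant shown here
are illustrative, not a copy of the x86 code):

    #define IRQ_MAX_GUESTS 7

    typedef struct {
        u8 nr_guests;                         /* guests sharing this IRQ */
        struct domain *guest[IRQ_MAX_GUESTS]; /* the bound guests */
    } irq_guest_action_t;

    /* One machine IRQ fans out to every bound guest. */
    static void
    reflect_to_all_guests(irq_guest_action_t *action, int irq)
    {
        int i;
        for (i = 0; i < action->nr_guests; i++)
            send_guest_pirq(action->guest[i], irq);
    }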
Thank you for your comments,
Tristan.
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel