Re: [Xen-ia64-devel] vIOSAPIC and IRQs delivery
On Wednesday 08 March 2006 at 14:44, Dong, Eddie wrote:
> Tristan:
> Although I believe the event channel based vIRQ has better
> performance than the previous patch, this is not the critical point. The
> key things in my mind are correctness and how to support driver domains
> with IRQ sharing amongst domains.
> Let me describe how current Xen handles a physical IRQ; I may need to
> consult Keir to double-check :-)
> When a PIRQ happens -> do_IRQ() in arch/x86/irq.c.
[PIRQ description. I agree with it]
> Here only non-edge-triggered IRQs need the notify.
> When the hypervisor receives the pirq_unmask_notify() hypercall:
> if ( --desc->action->in_flight == 0 ) desc->handler->end();
> i.e. unmask_IO_APIC_irq()
> // here the real IOAPIC.EOI is sent.
>
> From the above, the whole flow is clear: the mechanism to handle shared
> IRQs is managed through irq_desc->action->in_flight. The guest only deals
> with event channels.
> The flow supports IRQ line sharing quite elegantly and efficiently, IMO.
Yes, IRQ line sharing is correctly handled. My patch didn't allow this
between domains.
However I think IRQ handling and IRQ delivery are two different things:
i.e. we can keep this scheme while using the current IRQ delivery.
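To make sure we talk about the same mechanism, here is a minimal sketch of
that in_flight bookkeeping as I read it (simplified; irq_desc_t, struct
domain and send_guest_pirq are the Xen-internal names I assume from
arch/x86/irq.c, and the helpers below are abbreviations, not the exact code):

    /* Sketch only: per-IRQ guest bookkeeping on the Xen side. */
    typedef struct {
        unsigned int   nr_guests;   /* domains sharing this machine IRQ   */
        unsigned int   in_flight;   /* guests still handling the last IRQ */
        struct domain *guest[8];    /* domains bound to this IRQ          */
    } irq_guest_action_t;

    /* From do_IRQ(): a guest-bound machine IRQ fires. */
    static void deliver_guest_irq(irq_desc_t *desc, int irq)
    {
        irq_guest_action_t *action = (irq_guest_action_t *)desc->action;
        unsigned int i;

        desc->handler->ack(irq);            /* ack, but no EOI yet        */
        for ( i = 0; i < action->nr_guests; i++ )
        {
            action->in_flight++;            /* one pending notify/guest   */
            send_guest_pirq(action->guest[i], irq);  /* via event channel */
        }
    }

    /* When a guest issues PHYSDEVOP_IRQ_UNMASK_NOTIFY. */
    static void guest_irq_done(irq_desc_t *desc, int irq)
    {
        irq_guest_action_t *action = (irq_guest_action_t *)desc->action;

        if ( --action->in_flight == 0 )
            desc->handler->end(irq);   /* unmask_IO_APIC_irq(): real EOI */
    }

So the IO-APIC is only EOI'd once every sharing domain has notified, which
is exactly the part my patch lacks.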
> The event channel model in some cases will request a real IOSAPIC
> operation based on the type it is bound to. The software stack layering
> is very clear: 1: guest PIRQ (top), 2: event channel (middle),
> 3: machine IRQ (bottom).
> BTW, the event channel is a pure software design; there is no
> architecture dependency here.
I don't wholly agree. The callback entry is written in assembly and seems to
contain tricks.
> Guest physical devices are also the same between IA64 and X86. If we
> make a difference, it is purely a software approach difference.
Yes, but the Linux implementation differs.
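Concretely, on the guest side the binding of a machine IRQ to an event
channel would look roughly like this (a sketch following the x86 evtchn
interface; the exact hypercall signature and field names are from memory
and may differ on ia64):

    /* Sketch only: guest binds physical IRQ 'pirq' to an event channel.
     * HYPERVISOR_event_channel_op() comes from the arch hypercall header. */
    #include <xen/interface/event_channel.h>

    static int bind_pirq_to_evtchn(unsigned int pirq, int will_share)
    {
        evtchn_op_t op;

        op.cmd               = EVTCHNOP_bind_pirq;
        op.u.bind_pirq.pirq  = pirq;
        /* Tell Xen whether the machine line may be shared. */
        op.u.bind_pirq.flags = will_share ? BIND_PIRQ__WILL_SHARE : 0;

        if (HYPERVISOR_event_channel_op(&op) != 0)
            return -1;

        /* Xen returns the allocated port; the IRQ now arrives as events. */
        return op.u.bind_pirq.port;
    }

This part is pure software, agreed; what differs is everything around it in
the ia64 Linux interrupt code.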
> Then let us see where the previous patch needs to improve.
> 1: IRQ sharing is not supported. This feature, especially for big iron
> like Itanium, is a must.
I agree. However we won't hit this problem for now, as device drivers do not
exist yet.
> 2: Sharing the machine IOSAPIC resource among multiple guests introduces
> many dangerous situations.
> Example:
> If DeviceA in DomX and DeviceB in DomY share IRQn, and DomX handles the
> DeviceA IRQ (IRQn), take a function in the patch like mask_irq:
> s1: spin_lock_irqsave(&iosapic_lock, flags);
> s2: xen_iosapic_write();   // write the RTE to disable the IRQ
> s3: spin_unlock_irqrestore(&iosapic_lock, flags);
> If DomX is switched out at s3 and DeviceB fires an IRQ at that time,
> then, due to the disable bit in the RTE, DomY can never respond to the
> IRQ until DomX gets executed again and re-enables the RTE.
> This doesn't make sense to me.
Nor to me.
However my patch does not allow this behavior: once an IRQ is allocated by a
domain, it can't be modified by another one. Again, I agree this is far from
perfect and that using an in_flight mechanism is better.
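For clarity, what my patch does today amounts only to the kind of ownership
check sketched below (hypothetical helper and variable names, not the actual
patch code):

    /* Sketch only: one owner per IRQ; nobody else may touch its RTE. */
    static struct domain *irq_owner[NR_IRQS];

    static int iosapic_rte_write_allowed(struct domain *d, unsigned int irq)
    {
        /* Allow the write only if the IRQ is free or owned by 'd'. */
        return (irq_owner[irq] == NULL) || (irq_owner[irq] == d);
    }

The in_flight counting shown earlier is what should eventually replace it.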
> 3: Another major issue is that there is no easy way in the future to add
> IRQ sharing support based on that patch. That is why I want to let the
> hypervisor own the IOSAPIC exclusively, with guests purely based on a
> software mechanism: event channels.
I don't think IRQ sharing requires event channels. This can also be done
using the current IRQ delivery.
> 4: More new hypercalls are introduced and more calls to the hypervisor.
Only the physdev_op hypercall is added, and it is also used on x86 to set up
the IOAPIC. You can't avoid it.
The additional calls to the hypervisor are for reading or writing IVR, EOI
and TPR. I really think these are fast using hyper-privops.
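For reference, this is roughly how x86 dom0 already programs an IO-APIC
register through that same hypercall (a sketch; I am quoting the
PHYSDEVOP_APIC_WRITE path from memory, so the field names may be slightly
off):

    /* Sketch only: dom0 asks Xen to perform the real IO-APIC write. */
    #include <xen/interface/physdev.h>

    static int xen_io_apic_write(unsigned long apic_physbase,
                                 unsigned int reg, unsigned int value)
    {
        physdev_op_t op;

        op.cmd                     = PHYSDEVOP_APIC_WRITE;
        op.u.apic_op.apic_physbase = apic_physbase;
        op.u.apic_op.reg           = reg;
        op.u.apic_op.value         = value;

        return HYPERVISOR_physdev_op(&op);   /* Xen does the real write */
    }

The vIOSAPIC case is the same kind of call, just against IOSAPIC RTEs.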
> > The current ia64 model is well tested too and seems efficient too
> > (according to Dan's measurements).
>
> Yes, Xen/IA64 can be said to have undergone some level of testing,
> although domU is still not that stable.
Maybe because domU does not have pirqs :-)
> But vIOSAPIC is totally new for VMs and is not well tested.
Whatever we do, Xen will control the IOSAPICs. For sure my patch is not well
tested, but it is simple enough.
> On the other hand, the event channel based approach is well tested in
> Xen with real deployments by customers.
Correct, but it won't just drag and drop onto ia64.
> >> 3: Without the intermediate patch, can we run an SMP guest?
> >> My answer is yes, at least for Xen0. The current approach
> >> (dom0 owning the machine IOSAPIC should be OK for 1-2 months) will not
> >> block those
> >
> > ongoing efforts. The vIRQ stuff is a cleanup for future driver domain
> > support.
> > Yes, this is doable. I have to modify my current SMP-g patch because
> > it is based on my previous vIOSAPIC.
>
> I am sorry I did not give comments at the very beginning, if it causes
> you to rework something.
> I was on a long holiday at that time.
No problem.
> No matter what patch is used finally, most SMP-related IPIs should go
> through event channels.
Next debate :-)
> And the event channel code is always there even now, no matter whether
> you call it once or 100 times.
Yes, but event channels are not yet bound to IRQs.
> > My patch adds a hypercall for each interrupt handled, to do the EOI;
> > with event channels the same is required (Xen must know when the
> > domain has finished handling the IRQ; I don't think this can be
> > avoided).
>
> See previous description.
This is the PHYSDEVOP_IRQ_UNMASK_NOTIFY hypercall.
While I was working on the vIOSAPIC I really did read the x86 code, be sure
of that.
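For the record, the x86 guest side looks roughly like this at the end of a
pirq handler (a sketch; the helper names are abbreviations of what the Linux
evtchn code does, not the exact functions):

    /* Sketch only: guest finishes handling a level-triggered pirq. */
    #include <xen/interface/physdev.h>

    static void end_pirq(unsigned int irq)
    {
        physdev_op_t op;

        unmask_evtchn(evtchn_from_irq(irq));   /* re-enable the event     */

        /* Level-triggered IRQs must tell Xen we are done, so it can
         * decrement in_flight and eventually EOI the IO-APIC/IOSAPIC.    */
        if (pirq_needs_unmask_notify(irq)) {
            op.cmd = PHYSDEVOP_IRQ_UNMASK_NOTIFY;
            HYPERVISOR_physdev_op(&op);
        }
    }

So in both models the guest ends up making one call to the hypervisor per
level-triggered interrupt anyway.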
> >> I don't know the exact IA64 HW implementation, but usually a level
> >> triggered IRQ can be shared by multiple devices.
> >
> > Correct. But "can be" != "is done".
> > I'd like to know an ia64 implementation with shared IRQs.
>
> I want to leave this question to Tony as he knows more than me about the
> real platform.
> My understanding is that an IOAPIC can only support a certain number of
> IRQ lines, such as 24. A big system with more than 24 devices must share
> IRQ lines. Today's Xen already handles this; Itanium, as a much higher-end
> platform, has no way to disable this feature.
Hey, Tiger 4 has 4 IOSAPICs:
(XEN) ACPI: IOSAPIC (id[0x0] address[00000000fec00000] gsi_base[0])
(XEN) ACPI: IOSAPIC (id[0x1] address[00000000fec10000] gsi_base[24])
(XEN) ACPI: IOSAPIC (id[0x2] address[00000000fec20000] gsi_base[48])
(XEN) ACPI: IOSAPIC (id[0x3] address[00000000fec30000] gsi_base[72])
It can handle 4*24=96 IRQs without sharing them. Unless I am wrong, a PCI
slot has 4 interrupts. So you can have up to 24 slots (minus internal
devices).
Tristan.
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel