xen-ia64-devel
RE: [Xen-ia64-devel] vIOSAPIC and IRQs delivery
Tristan:
        Although I believe the event channel based vIRQ has better performance
than the previous patch, that is not the critical point. The key things in my mind
are correctness and how to support driver domains with IRQ sharing among domains.
        Let me describe how current Xen handles a physical IRQ; I may need to
consult Keir to double check :-)
    When a PIRQ happens -> do_IRQ() in arch/x86/irq.c:
        irq_desc->handler->ack();   // as in normal Linux, operates on the real resource
        If this is a guest IRQ, then for each domain bound to this IRQ:
            { send an event channel notification to the guest;
              desc->action->in_flight++; }
        Done.   // Notice: irq_desc->handler->end() is NOT called here
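    In rough C, the hypervisor side of this looks something like the sketch
below. The struct layouts are simplified stand-ins and send_guest_pirq() is an
assumed helper for the event channel notification; this illustrates the
in_flight bookkeeping, it is not the real arch/x86/irq.c code.

        struct domain;
        void send_guest_pirq(struct domain *d, int irq);  /* assumed helper: event channel notify */

        typedef struct {
            void (*ack)(int irq);            /* touches the real (IO)APIC */
            void (*end)(int irq);            /* e.g. unmask_IO_APIC_irq() */
        } hw_irq_type_sketch;

        typedef struct {
            int nr_guests;                   /* number of domains sharing this line */
            int in_flight;                   /* EOIs still owed by those guests */
            struct domain *guest[8];
        } guest_action_sketch;

        typedef struct {
            hw_irq_type_sketch  *handler;
            guest_action_sketch *action;
        } irq_desc_sketch;

        static void do_guest_irq_sketch(irq_desc_sketch *desc, int irq)
        {
            int i;

            desc->handler->ack(irq);         /* as in normal Linux */

            for ( i = 0; i < desc->action->nr_guests; i++ )
            {
                desc->action->in_flight++;                    /* one ack owed per notified guest */
                send_guest_pirq(desc->action->guest[i], irq); /* event channel to the guest */
            }
            /* Note: desc->handler->end() is deliberately NOT called here. */
        }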
    Then, when a guest receives the event channel notification -> evtchn_do_upcall():
        For each pending evtchn bound to a PIRQ, call do_IRQ().
        do_IRQ() is the normal Linux function, but the hardware interrupt type
        is pirq_type.   // see arch/xen/kernel/evtchn.c
        Within do_IRQ(), irq_desc->handler->ack() becomes ack_pirq(), which does:
            mask_evtchn() and clear_evtchn();   // no trap into Xen
        then irq_desc->action->handler() runs the driver's handler;
        irq_desc->handler->end() becomes end_pirq(), which does:
            unmask_evtchn();
            if ( unmask notify is set ) pirq_unmask_notify();
                // hypercall PHYSDEVOP_IRQ_UNMASK_NOTIFY
        Done.
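    The guest-side handlers sketched in C (modelled on the description of
arch/xen/kernel/evtchn.c above; irq_to_evtchn[] and needs_unmask_notify() are
assumed helpers standing in for the real per-IRQ bookkeeping):

        extern void mask_evtchn(int port);
        extern void clear_evtchn(int port);
        extern void unmask_evtchn(int port);
        extern void pirq_unmask_notify(int irq);   /* PHYSDEVOP_IRQ_UNMASK_NOTIFY hypercall */
        extern int  irq_to_evtchn[];               /* assumed: IRQ -> event channel map */
        extern int  needs_unmask_notify(int irq);  /* assumed: true for level-triggered PIRQs */

        static void ack_pirq_sketch(unsigned int irq)
        {
            int evtchn = irq_to_evtchn[irq];

            mask_evtchn(evtchn);      /* pure writes to the shared-info page, */
            clear_evtchn(evtchn);     /* no trap into Xen                     */
        }

        /* ... irq_desc->action->handler() then runs the driver's real handler ... */

        static void end_pirq_sketch(unsigned int irq)
        {
            int evtchn = irq_to_evtchn[irq];

            unmask_evtchn(evtchn);
            if ( needs_unmask_notify(irq) )
                pirq_unmask_notify(irq);
        }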
    Here only non-edge-triggered (i.e. level-triggered) IRQs need the notify.
    When the hypervisor receives the pirq_unmask_notify() hypercall:
        if ( --desc->action->in_flight == 0 ) desc->handler->end();
            // i.e. unmask_IO_APIC_irq() -- here the real IOAPIC EOI is sent.
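    Continuing the same sketch types as above, the hypervisor's handling of the
unmask notification amounts to (again illustrative only, not the real code):

        static void unmask_notify_sketch(irq_desc_sketch *desc, int irq)
        {
            /* Only when the LAST sharing guest has finished does the hypervisor
             * touch the real hardware: end() -> unmask_IO_APIC_irq() -> IOAPIC EOI. */
            if ( --desc->action->in_flight == 0 )
                desc->handler->end(irq);
        }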
    From the above the whole flow is clear: the mechanism that handles a shared
IRQ is managed through irq_desc->action->in_flight, and the guest only deals
with event channels.
    This flow supports IRQ line sharing quite elegantly and efficiently, IMO.
    In some cases the event channel model will request a real IOSAPIC operation,
based on the type the channel is bound to. The software stack layering is very
clear: 1: guest PIRQ (top), 2: event channel (middle), 3: machine IRQ (bottom).
    BTW, the event channel is a pure software design; there is no architecture
dependency here. Guest physical devices are also the same between IA64 and X86.
If we end up differing, it is purely a difference of software approach.
    Now let us see where the previous patch needs to improve.
1: IRQ sharing is not supported. This feature, especially for big iron like
Itanium, is a must.
2: Sharing the machine IOSAPIC resource among multiple guests introduces many
dangerous situations.
    Example:
    Suppose DeviceA in DomX and DeviceB in DomY share IRQn. When DomX handles
DeviceA's IRQ (IRQn), take a function from the patch such as mask_irq:
        s1: spin_lock_irqsave(&iosapic_lock, flags);
        s2: xen_iosapic_write();   // write the RTE to disable the IRQ on this line
        s3: spin_unlock_irqrestore(&iosapic_lock, flags);
    Now suppose DomX is switched out right after s3, and DeviceB fires an IRQ at
that time. Because the RTE is disabled, DomY can never respond to the IRQ until
DomX gets executed again and re-enables the RTE. This does not make sense to me.
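    To spell the interleaving out (an illustrative timeline only, using the
DeviceA/DeviceB, DomX/DomY and s1-s3 labels from the example above):

        DomX (driver for DeviceA)                DomY (driver for DeviceB)
        -------------------------                -------------------------
        mask_irq(IRQn):
          s1: spin_lock_irqsave(&iosapic_lock)
          s2: xen_iosapic_write()    <- RTE for IRQn now masked for everyone
          s3: spin_unlock_irqrestore(&iosapic_lock)
        <DomX is descheduled here>
                                                 DeviceB asserts IRQn
                                                 -> the line is masked in the RTE,
                                                    so no interrupt reaches DomY
        ... only much later, when DomX runs
        again and re-enables the RTE, can
        DomY's interrupt be delivered.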
3: Another major issue is that there is no easy way to add IRQ sharing support
in the future based on that patch. That is why I want to let the hypervisor own
the IOSAPIC exclusively, and have guests rely purely on a software mechanism:
the event channel.
4: More new hypercalls are introduced, and more calls into the hypervisor.
>The current ia64 model is well tested too and seems efficient too
(according
>to Dan measures).
Yes, Xen/IA64 can be said to have undergone some level of testing, although
domU is still not that stable. But the vIOSAPIC is totally new for VMs and is
not well tested.
On the other hand, the event channel based approach is well tested in Xen, with
real deployments by customers.
>> 3: Without the intermediate patch, can we run SMP guest?
>> My answer is yes at least for Xen0. Current approach
>> (dom0 own machine IOSAPIC should be OK for 1-2 month) will not block
>> those
>> ongoing effort. vIRQ stuff is a cleanup for future driver domain
>> support.
> Yes this is doable. I have to modify my current SMP-g patch because
> it is based on my previous vIOSAPIC.
I am sorry I did not give these comments at the very beginning, if it caused
you to rework something. I was on a long holiday at that time.
No matter which patch is finally used, most SMP-related IPIs should go through
the event channel. And the event channel code is always there, even now, no
matter whether you call it once or 100 times.
> My patch add an hypercall for each interruption handling to do EOI;
> with event channel the same is required (Xen must know when the
> domain has finished with IRQ handling. I don't think this can be
> avoided).
See previous description.
>> I don't know exactly IA64 HW implementation, but usually an level
>> triggered IRQ can be shared by multiple devices.
> Correct. But can be != done.
> I'd like to know an ia64 implementation with shared IRQs.
>
I want to leave this question to Tony, as he knows more than me about the real
platforms.
My understanding is that an IOAPIC can only support a certain number of IRQ
lines, such as 24. A big system with more than 24 devices must share IRQ lines.
Today's Xen already handles this, and Itanium, as a much higher end platform,
has no way to do without this feature.
Eddie
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel