This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, Guy Zana <guy@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
From: Keir Fraser <keir@xxxxxxxxxxxxx>
Date: Fri, 10 Aug 2007 08:37:16 +0100
Cc: Alex Novik <alex@xxxxxxxxxxxx>
Delivery-date: Fri, 10 Aug 2007 00:33:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <D470B4E54465E3469E2ABBC5AFAC390F013B20B2@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
User-agent: Microsoft-Entourage/
On 10/8/07 08:15, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

>> My thought here is a simple priority list with move-to-back of the
>> frontmost
>> domain when we deliver him the interrupt but he does not deassert the
>> line
>> either in reasonable time or by the time he EOIs the interrupt. This is
>> simple generic logic needing no PV guest changes.
>> -- Keir
> How is the priority defined?

It is defined dynamically by the move-to-back policy of the priority list.
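The move-to-back idea can be made concrete with a small sketch. This is not Xen code; the per-IRQ structure and helper names below are hypothetical, and illustrate only that "priority" is nothing more than position in a list that is rotated when the frontmost domain fails to service the interrupt:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical per-IRQ sharer list: a domain's "priority" is simply
 * its position; index 0 (the front) is tried first.  There are no
 * static priority values at all. */
#define MAX_SHARERS 4

struct irq_sharers {
    int domid[MAX_SHARERS];   /* front (index 0) is tried first */
    int nr;                   /* number of domains sharing the line */
};

/* The frontmost domain is the first candidate for delivery. */
static int front_domain(const struct irq_sharers *s)
{
    return s->domid[0];
}

/* Demote a domain that received the interrupt but did not cause the
 * line to be deasserted: rotate it from position idx to the back. */
static void move_to_back(struct irq_sharers *s, int idx)
{
    int d = s->domid[idx];

    memmove(&s->domid[idx], &s->domid[idx + 1],
            (size_t)(s->nr - idx - 1) * sizeof(s->domid[0]));
    s->domid[s->nr - 1] = d;
}
```

Over time the domain whose device actually generates most of the interrupts on the line settles at the front, so the common case needs no rotation at all.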

> What's reasonable time for different device requirement?

For the timeout? Actually I'm not sure how important having a timeout
actually is -- unless in the worst case it can reset the PCI device and
ensure the line is quiesced in that way. Otherwise a non-responsive guest is
unlikely to deassert its device and hence you cannot timeout and re-enable
the interrupt line anyway. I consider this to be a secondary issue in
implementing shared interrupts, and it can reasonably be left until later.

> PV irq sharing takes response from all shared side, and Guy's RFC
> only takes dom0's response. Now your suggestion is much simpler
> toward timeout only, but what do you expect the final performance
> to be?

The timeout isn't part of this method's normal operation. The usual case
will be that we deliver to just one guest -- at the front of our priority
list -- and it was the correct single guest to deliver the interrupt to. In
which case the list does not change, and if using the polarity-change method
from Neocleus we would take the usual two interrupts per device assertion
(one on +ve edge, one on -ve edge), or just one interrupt if we use the
existing Xen late-EOI method or Intel's dummy-EOI method.

We take potentially two interrupts if the highest-prio domain is not the
service domain for this particular interrupt. In this case we move the domain
to the back of the list and continue to deliver until the line is deasserted.
The Neocleus polarity-change method works really nicely here because we take
no second interrupt until the physical INTx line is actually deasserted (and
hence the interrupt is serviced, and our delivery algorithm hence
terminates). Using the Xen/Intel methods of EOI'ing we have to somehow detect
the immediate re-interrupt on EOI (which will happen because the physical
INTx line is still asserted).
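The delivery walk described above can be simulated in a few lines. Everything here is a hypothetical sketch, not Xen code: the structure, the `source_domid` field (which stands in for knowledge only the simulation has), and the way line deassertion is observed all abstract over the real polarity-change or EOI machinery:

```c
#include <stdbool.h>

/* Hypothetical simulation of the delivery walk: deliver to the
 * frontmost domain; if, when that domain EOIs, the physical INTx line
 * is still asserted, demote it and try the next sharer.  With the
 * polarity-change trick, "line deasserted" would be observed as an
 * opposite-polarity interrupt rather than by re-sampling at EOI. */
#define MAX_SHARERS 4

struct shared_irq {
    int  domid[MAX_SHARERS];   /* dynamic priority: front tried first */
    int  nr;
    bool line_asserted;        /* state of the physical INTx line */
    int  source_domid;         /* domain whose device really asserted
                                * the line (simulation-only knowledge) */
};

/* A guest handles and EOIs the interrupt; only the true service
 * domain's driver actually causes the device to deassert the line. */
static void guest_eoi(struct shared_irq *irq, int domid)
{
    if (domid == irq->source_domid)
        irq->line_asserted = false;
}

/* Walk the sharer list until the line is deasserted; returns the
 * number of delivery attempts taken. */
static int deliver_shared_irq(struct shared_irq *irq)
{
    int attempts = 0;

    while (irq->line_asserted) {
        int d = irq->domid[0];

        attempts++;
        guest_eoi(irq, d);
        if (irq->line_asserted) {
            /* Wrong guest: rotate it to the back, try the next one. */
            int i;

            for (i = 0; i < irq->nr - 1; i++)
                irq->domid[i] = irq->domid[i + 1];
            irq->domid[irq->nr - 1] = d;
        }
    }
    return attempts;
}
```

In the common case the loop runs once; when the frontmost domain is the wrong one, each extra iteration is one of the "potentially two" (or more) interrupts described above, and the rotation leaves the correct domain at the front for next time.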

Worst case is where multiple devices are issuing interrupts simultaneously,
of course. In this case we do truly *need* to issue the interrupt to
multiple guests. This will work, but be a bit slow. I think this is true of
the Neocleus algorithm too, however.

In conclusion, my algorithm works well when I run through it in my head. :-)

 -- Keir
