
To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, "Guy Zana" <guy@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Fri, 10 Aug 2007 16:02:28 +0800
Cc: Alex Novik <alex@xxxxxxxxxxxx>
Delivery-date: Fri, 10 Aug 2007 01:02:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C2E1D43C.C5DA%keir@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D470B4E54465E3469E2ABBC5AFAC390F013B20B2@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C2E1D43C.C5DA%keir@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQAbyytWAAAcBIUAADDsMAAA8Q6JAAAVuSA=
Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
>From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx]
>Sent: 10 August 2007 15:37
>> How is the priority defined?
>
>It is defined dynamically by the move-to-back policy of the priority list.

Considering sharing between a high-speed device and a low-speed
device, a simple move-to-back policy (on each EOI) is not the most
efficient. At the very least we could take interrupt frequency into
account as one factor in the priority as well.
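
To make the idea concrete, here is a toy sketch (plain C, all names
invented, not real Xen code) of reinserting a sharer by its recent
interrupt rate on EOI instead of unconditionally moving it to the back:

/* Illustrative only -- not real Xen code; all names are invented.
 * A toy model of using recent interrupt frequency, rather than pure
 * move-to-back, to order the guests sharing one line. */
#include <stdio.h>

#define NR_SHARERS 3

struct sharer {
    int           domid;
    unsigned long irq_count;   /* interrupts EOIed in the current window */
    unsigned long window_ms;   /* length of the sampling window          */
};

/* interrupts per second over the sampling window */
static unsigned long rate(const struct sharer *s)
{
    return s->window_ms ? (s->irq_count * 1000UL) / s->window_ms : 0;
}

/* On EOI by line[done], reinsert it behind every busier sharer instead
 * of unconditionally sending it to the back of the list. */
static void requeue(struct sharer *line[], int n, int done)
{
    struct sharer *d = line[done];
    int i;

    for ( i = done; i < n - 1; i++ )
        line[i] = line[i + 1];           /* close the gap             */
    for ( i = n - 1; i > 0; i-- )
    {
        if ( rate(line[i - 1]) >= rate(d) )
            break;                       /* found a busier sharer     */
        line[i] = line[i - 1];           /* shift slower sharers back */
    }
    line[i] = d;
}

int main(void)
{
    struct sharer nic  = { 1, 9000, 1000 };   /* ~9000 irq/s */
    struct sharer disk = { 2,  200, 1000 };
    struct sharer usb  = { 3,   30, 1000 };
    struct sharer *line[NR_SHARERS] = { &nic, &disk, &usb };
    int i;

    requeue(line, NR_SHARERS, 0);             /* the NIC just EOIed */
    for ( i = 0; i < NR_SHARERS; i++ )
        printf("slot %d: dom%d (%lu irq/s)\n", i, line[i]->domid,
               rate(line[i]));
    return 0;
}

With pure move-to-back the NIC would drop to the last slot after every
EOI; here it stays in front of the slower sharers.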

>
>> What's reasonable time for different device requirement?
>
>For the timeout? Actually I'm not sure how important having a
>timeout is -- unless in the worst case it can reset the PCI device
>and ensure the line is quiesced in that way. Otherwise a
>non-responsive guest is unlikely to deassert its device, and hence
>you cannot time out and re-enable the interrupt line anyway. I
>consider this to be a secondary issue in implementing shared
>interrupts, and it can reasonably be left until later.
>

It seems you are talking about a bogus case where the guest is not
willing to handle the injection (e.g. after a driver unload) but
leaves the device in the asserted state. Yes, if such a bogus
condition happens, there's nothing to do except disable the physical
RTE.
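
For that fallback I mean nothing more elaborate than something like
the sketch below (placeholder names only, not real Xen interfaces):
an expired timeout with no EOI simply masks the physical RTE.

struct shared_irq {
    int ioapic, pin;        /* which physical RTE this line uses        */
    int line_asserted;      /* physical line still asserted             */
    int guest_eoi_seen;     /* did the injected guest EOI before expiry? */
};

static void mask_physical_rte(int ioapic, int pin)
{
    /* placeholder: would set the mask bit in the I/O APIC RTE here */
}

/* Called when the 'reasonable time' expires for an injected interrupt. */
static void shared_irq_timeout(struct shared_irq *s)
{
    if ( s->line_asserted && !s->guest_eoi_seen )
        mask_physical_rte(s->ioapic, s->pin);   /* give up on the line */
}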

My question, however, is about the efficiency of the timeout under
different conditions. Say an HVM domain is at the top of the list at
the time, and that HVM domain has its vRTE masked (driver unloaded,
or a previous injection is still being handled). In that case we may
not want to inject now and then wait the same 'reasonable time' for a
non-response; moving it to the back instead takes effect immediately.
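
As a rough illustration of what I mean (again invented names, not the
actual RFC code): check the front sharer's virtual RTE mask before
arming any timeout, and rotate it to the back straight away if masked.

struct vcpu_sharer {
    int domid;
    int vrte_masked;      /* guest has masked its virtual RTE */
};

/* returns 1 if an injection was made, 0 if every sharer was masked */
static int deliver_shared_irq(struct vcpu_sharer *line[], int n)
{
    int tried, i;

    for ( tried = 0; tried < n; tried++ )
    {
        struct vcpu_sharer *front = line[0];

        if ( !front->vrte_masked )
        {
            /* inject into front->domid and arm the EOI timeout here */
            return 1;
        }

        /* Masked: rotate to the back immediately -- no timeout wait. */
        for ( i = 0; i < n - 1; i++ )
            line[i] = line[i + 1];
        line[n - 1] = front;
    }
    return 0;   /* everyone masked: leave the physical line masked */
}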

>> PV IRQ sharing waits for a response from every sharing side, and
>> Guy's RFC only waits for dom0's response. Now your suggestion is
>> much simpler, relying on a timeout only, but what do you expect the
>> final performance to be?
>
>The timeout isn't part of this method's normal operation. The usual case
>will be that we deliver to just one guest -- at the front of our priority
>list -- and it was the correct single guest to deliver the interrupt to. In

This is hard to tell, since there is no way to check whether it was
the right one, given the randomness of interrupt occurrence.

>
>Worst case is where multiple devices are issuing interrupts
>simultaneously, of course. In this case we do truly *need* to issue
>the interrupt to multiple guests. This will work, but be a bit slow.
>I think this is true of the Neocleus algorithm too, however.
>
>In conclusion, my algorithm works well when I run through it in my
>head. :-)
>

Definitely, this is a workable approach and can be applied to both
solutions. My concern is just how it behaves in terms of
performance. :-)

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
