xen-ia64-devel

Re: [Xen-ia64-devel] vIOSAPIC and IRQs delivery

To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-ia64-devel] vIOSAPIC and IRQs delivery
From: Tristan Gingold <Tristan.Gingold@xxxxxxxx>
Date: Tue, 7 Mar 2006 13:19:36 +0100
Delivery-date: Tue, 07 Mar 2006 12:16:39 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <26F44F810A51DF42A127BC2A06BE185E03D650CF@pdsmsx404>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <26F44F810A51DF42A127BC2A06BE185E03D650CF@pdsmsx404>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.5
On Tuesday 07 March 2006 11:38, Dong, Eddie wrote:
> Tristan Gingold wrote:
> > On Tuesday 07 March 2006 00:34, Dong, Eddie wrote:
> >> Magenheimer, Dan (HP Labs Fort Collins) wrote:
> >>> Hi Tristan --
> >>>
> >>> Do you have any more design information?  I'm not very
> >>> familiar with the x86 implementation but is it your intent
> >>> for it to be (nearly) identical?  What would be different?
> >>
> >> The difference is whether the guest OS (para-Xen) should still access
> >> the IOSAPIC MMIO port.  If the guest OS keeps accessing the machine
> >> IOSAPIC MMIO address, multiple driver domains sharing the same IRQ is
> >> a potential problem.  The design, in my opinion, is that the hypervisor
> >> owns the machine IOSAPIC resource exclusively, including reading IVR
> >> and issuing CR.EOI.  Every guest works with a pure virtual IOSAPIC or
> >> a virtual IO_APIC (it actually doesn't matter for the guest).
> >
> > [Note that IVR and CR.EOI are LSAPIC stuff.]
>
> So should we use a new term, virtual IRQ, or interrupt virtualization?
We can use vIRQ, which is different from VIRQ.

> Both LSAPIC and IOSAPIC need to be done in vIRQ.
Sure.

> BTW, the RTE is still accessed by the para-guest in the previous patch :-)
Not directly; it goes through Xen.
Do you really think an x86 para-guest doesn't program the io_apic?
Only the driver knows the polarity/edge of an interrupt, and this is
programmed into the RTE.
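
To be concrete, what I have in mind is something like the sketch below: the
guest passes the polarity/trigger it wants and Xen programs the machine RTE
on its behalf.  The hypercall and op names here are made up for illustration,
not the real Xen interface.

/* Hypothetical interface: the para-guest never touches the machine
 * IOSAPIC; it asks Xen to program the RTE.  HYPERVISOR_physdev_op and
 * PHYSDEVOP_irq_setup are illustrative names, not real Xen symbols. */
struct physdev_irq_setup {
    unsigned int irq;        /* guest-visible IRQ line */
    unsigned int polarity;   /* 0 = active high, 1 = active low */
    unsigned int trigger;    /* 0 = edge, 1 = level */
};

#define PHYSDEVOP_irq_setup 100                  /* made-up op number */
extern long HYPERVISOR_physdev_op(int cmd, void *arg);

static int xen_setup_irq(unsigned int irq, int polarity, int trigger)
{
    struct physdev_irq_setup op = {
        .irq = irq, .polarity = polarity, .trigger = trigger,
    };
    /* Xen validates the request and writes the machine RTE itself. */
    return (int)HYPERVISOR_physdev_op(PHYSDEVOP_irq_setup, &op);
}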

> Writing an RTE in the machine resource from one domain will
> impact the correctness of another domain if they share the same
> IRQ line.
Not necessarily.
If the polarity/edge settings are the same, there is no problem.
Otherwise, sharing the line is not possible anyway.
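
A minimal sketch of the check Xen could apply when a domain binds to a
machine IRQ line (the struct and names are my invention for illustration):

#include <errno.h>   /* EBUSY */

/* Sharing is only sane when every domain agrees on polarity and
 * trigger mode for the line. */
struct irq_line {
    int bound;       /* already programmed by some domain? */
    int polarity;
    int trigger;
};

static int bind_machine_irq(struct irq_line *line, int polarity, int trigger)
{
    if (!line->bound) {
        line->polarity = polarity;
        line->trigger  = trigger;
        line->bound    = 1;
        return 0;
    }
    /* Same pol/edge: sharing is harmless.  Different: refuse the bind
     * rather than silently reprogramming the RTE under the first domain. */
    if (line->polarity == polarity && line->trigger == trigger)
        return 0;
    return -EBUSY;
}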

> >>> Would all hardware I/O interrupts require queueing by
> >>> Xen in an event channel?  This seems like it could be
> >>> a potential high overhead performance issue.
> >
> > There are two things:
> > * Delivery of IRQs through the event channel.  I am not sure about the
> > performance impact (it should be almost the same).  I am sure about the
> > Linux modification impact (new files added, low-level interrupt
> > handling completely modified).
>
> I don't see too many Linux modifications here, as most of these files are
> already in Xen.  You can find them if you compile an x86 Xen; see
> linux/arch/xen/kernel/** , all the event-channel-related files are there,
> including the PIRQ dispatching.  In some sense, the whole IOSAPIC.c file is
> no longer a must.
Again, you need to set up RTEs.
Furthermore, I think we don't want to break transparent virtualization, so it
won't be just drag and drop.
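
For reference, the dispatch loop on the x86 side boils down to scanning the
shared pending bitmap and running the handler bound to each port.  A
simplified sketch, with illustrative names rather than the exact Xen symbols:

/* The guest scans a bitmap shared with Xen and dispatches each pending
 * port; no IVR read is needed.  BITS_PER_LONG assumes a 64-bit long. */
#define NR_EVENT_CHANNELS 1024
#define BITS_PER_LONG     64   /* ia64 */

extern unsigned long evtchn_pending[NR_EVENT_CHANNELS / BITS_PER_LONG];
extern void do_pirq(unsigned int port);   /* port -> bound IRQ handler */

static void evtchn_do_upcall(void)
{
    unsigned int i, port;

    for (i = 0; i < NR_EVENT_CHANNELS / BITS_PER_LONG; i++) {
        while (evtchn_pending[i]) {
            port = i * BITS_PER_LONG + __builtin_ctzl(evtchn_pending[i]);
            /* Clear before handling so a re-raised event is not lost. */
            __sync_fetch_and_and(&evtchn_pending[i],
                                 ~(1UL << (port % BITS_PER_LONG)));
            do_pirq(port);
        }
    }
}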

> > * Use of a callback for the event channel (instead of an IRQ).
> >   I suppose it should be slightly faster.  I suppose this is required
> > (for speed reasons) if we deliver IRQs through the event channel.
> >
> >> Mmm, I have a different opinion here.  With all guest physical IRQs
> >> queued by the Xen event channel through a bitmap shared with the
> >> para-guest, the guest OS no longer needs to access IVR and EOI, which
> >> means we don't need to trap into the hypervisor.  Checking the bitmap
> >> definitely performs better than reading IVR, so the performance
> >> actually improves.
> >
> > I really think this is not that obvious due to hyper-privop and
> > hyper-reflection.
>
> This is basically the difference between a hypercall and using shared
> memory.  The amount is hard to say, but the benefit is clear, as this code
> path is frequently executed, especially in a driver domain where there are
> a lot of IRQs.
According to Dan, using hyper-privop and hyper-reflection, dom0 was only 1.7%
slower than bare-metal Linux while compiling Linux.  Maybe this is not very
I/O intensive, but it is not that bad.  And maybe irr[] could be moved to
shared memory too.
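
A minimal sketch of that idea, assuming a 64-bit unsigned long (ia64) and an
irr[] image that Xen mirrors into a shared page (layout and names are my
assumptions):

/* Instead of a trapping cr.ivr read (or a hyper-privop), the guest
 * checks an irr[] image that Xen keeps up to date in shared memory. */
struct shared_virq_state {
    unsigned long irr[4];   /* 256 vectors, mirrored by Xen */
};

static int highest_pending_vector(const struct shared_virq_state *s)
{
    int i;

    for (i = 3; i >= 0; i--)   /* highest vector wins, as with IVR */
        if (s->irr[i])
            return i * 64 + (63 - __builtin_clzl(s->irr[i]));
    return -1;                 /* nothing pending */
}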

> > Please start (maybe using some mails we have exchanged).  I will
> > complete if necessary.
>
> Yes, I have sent you some drafts.
I am working on it.

Tristan.


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel