
Re: [Xen-devel] [Xenhackthon] Virtualized APIC registers - virtual interrupt delivery.



On Wed, 29 May 2013, Zhang, Yang Z wrote:
> Stefano Stabellini wrote on 2013-05-29:
> > On Tue, 28 May 2013, Zhang, Yang Z wrote:
> >> Stefano Stabellini wrote on 2013-05-27:
> >>> On Mon, 27 May 2013, Zhang, Yang Z wrote:
> >>>> Konrad Rzeszutek Wilk wrote on 2013-05-24:
> >>>>> On Thu, May 23, 2013 at 08:25:06AM +0000, Zhang, Yang Z wrote:
> >>>>>> Jan Beulich wrote on 2013-05-23:
> >>>>>>>>>> On 22.05.13 at 18:21, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> >>>>>>>> Which means that if this is set to be higher than the hypervisor
> >>>>>>>> timer or IPI callback, the guest can run unbounded. Also it would
> >>>>>>>> seem that this value often has to be reset when migrating a guest
> >>>>>>>> between pCPUs. And it would appear that this value is static,
> >>>>>>>> meaning the guest only sets these vectors once and the hypervisor
> >>>>>>>> is responsible for managing the priority of that guest and other
> >>>>>>>> guests (say dom0) on the CPU.
> >>>>>>>> 
> >>>>>>>> For example, we have a guest with a 10GbE NIC and the guest has
> >>>>>>>> decided to use vector 0x80 for it (assume a UP guest). Dom0 has
> >>>>>>>> a SAS controller and is using event channels 30, 31, 32, and 33
> >>>>>>>> (there are only 4 pCPUs). The hypervisor maps them to vectors 0x58,
> >>>>>>>> 0x68, 0x78 and 0x88 and spreads those vectors across the pCPUs. The
> >>>>>>>> guest is running on pCPU1 and there are two vectors - 0x80 and
> >>>>>>>> 0x58. The one assigned to the guest wins (on x86 a vector's high
> >>>>>>>> nibble is its priority class, so 0x80 outranks 0x58) and dom0's
> >>>>>>>> SAS controller is preempted.
> >>>>>>>> 
> >>>>>>>> The solution for that seems to require some interaction with the
> >>>>>>>> guest when it allocates the vectors, so that they are always below
> >>>>>>>> the dom0 priority vectors. Or the hypervisor has to dynamically
> >>>>>>>> shuffle its own vectors to be higher priority.
> >>>>>>>> 
> >>>>>>>> Or is there a guest vector <-> hypervisor vector lookup table that
> >>>>>>>> the CPU can use? So the hypervisor can say: the vector 0x80 in the
> >>>>>>>> guest actually maps to vector 0x48 in the hypervisor?
> >>>>>>> 
> >>>>>>> It is my understanding that the vector spaces are separate, and
> >>>>>>> hence guest interrupts can't block host ones (like the timer). Iirc
> >>>>>> Right. Virtual interrupt delivery is only for delivering guest
> >>>>>> virtual interrupts (from emulated devices and assigned devices),
> >>>>>> which are located in the guest's vector space. It has nothing to do
> >>>>>> with other guests.
> >>> 
> >>> I think you mean "It has nothing to do with _the hypervisor_"?
> >> Yes. The hypervisor and the guest have separate vector spaces.
> >> 
> >>> 
> >>>>> OK, in which case Linux ~v2.6.32 (when the event callback mechanism was
> >>>>> introduced for HVM guests) will _not_ take advantage of this, right?
> >>>> Yes, the event mechanism cannot benefit from it.
> >>> 
> >>> I think that Konrad was referring to the vector callback mechanism:
> >> You are right. What I meant to say is the vector callback mechanism.
> >> 
> >>> 
> >>> linux side  drivers/xen/events.c:xen_callback_vector
> >>> xen side    xen/arch/x86/hvm/irq.c:hvm_set_callback_via
> >>> 
> >>> Also see:
> >>> 
> >>> commit e5fd1f6505c43440bc2450253c79c80174b693bc
> >>> Author: Keir Fraser <keir.fraser@xxxxxxxxxx>
> >>> Date:   Tue May 25 11:28:58 2010 +0100
> >>> 
> >>>     x86 hvm: implement vector callback for evtchn delivery
> >>>     
> >>>     Signed-off-by: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
> >>>     Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >>>     Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
> >>> From the guest's point of view it looks like a normal vector callback
> >>> (similar to an IPI).
> >>> 
> >>> 
> >>>>> Is there a way to solve this so that they _will_ take advantage of this?
> >>>> Perhaps not. Virtual interrupt delivery relies on the EOI logic to
> >>>> inject the pending interrupt, but the event channel doesn't have such
> >>>> a mechanism.
> >>> 
> >>> It's true that we don't do any EOIs with the vector callback mechanism,
> >>> the same way the operating system doesn't do any EOIs when it receives
> >>> an IPI.
> >> IPIs also need an EOI.
> > 
> > Oops, you are right.
> > 
> > Does guest EOI still cause a trap into Xen?
> It depends on the corresponding bit in the EOI exit bitmap. If it is set, then
> the EOI will still cause a VM exit (an EOI-induced VM exit). Otherwise, no VM
> exit happens.
> 
> The following pseudocode details the behavior of EOI virtualization:
> Vector ← SVI;
> VISR[Vector] ← 0;
> IF any bits set in VISR
>     THEN SVI ← highest index of bit set in VISR
>     ELSE SVI ← 0;
> FI;
> perform PPR virtualization;
> IF EOI_exit_bitmap[Vector] = 1
>     THEN cause EOI-induced VM exit with Vector as exit qualification;
>     ELSE evaluate pending virtual interrupts;
> FI;

Thanks for the explanation.

At this point I wonder: would vector callbacks, which don't do any
guest EOIs, create any problems for this new virtual interrupt delivery
mechanism?
If the guest does not do any EOIs after receiving a vector callback,
then other pending interrupts are never evaluated (the last ELSE branch
in your pseudocode can never be taken), is that correct?
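
For concreteness, here is the EOI virtualization flow above rendered as a
small, self-contained C sketch. All the structure and helper names
(struct virt_apic, perform_ppr_virtualization, and so on) are illustrative
stand-ins, not Xen or Intel SDM identifiers:

    /* Illustrative model of the virtualized-EOI pseudocode; names made up. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_VECTORS 256

    struct virt_apic {
        uint64_t visr[NR_VECTORS / 64];            /* virtual in-service register */
        uint64_t eoi_exit_bitmap[NR_VECTORS / 64]; /* which EOIs must trap */
        uint8_t  svi;                              /* highest in-service vector */
    };

    static bool bm_test(const uint64_t *bm, unsigned int v)
    {
        return (bm[v / 64] >> (v % 64)) & 1;
    }

    static void bm_clear(uint64_t *bm, unsigned int v)
    {
        bm[v / 64] &= ~(1ULL << (v % 64));
    }

    static int highest_set(const uint64_t *bm)
    {
        for (int v = NR_VECTORS - 1; v >= 0; v--)
            if (bm_test(bm, v))
                return v;
        return -1;                                 /* no bits set */
    }

    /* Stand-ins for behaviour the pseudocode leaves abstract. */
    static void perform_ppr_virtualization(struct virt_apic *apic) { (void)apic; }
    static void evaluate_pending_virtual_interrupts(struct virt_apic *apic) { (void)apic; }
    static void eoi_induced_vmexit(unsigned int vector)
    {
        printf("EOI-induced VM exit, exit qualification = 0x%x\n", vector);
    }

    static void virtualize_eoi(struct virt_apic *apic)
    {
        unsigned int vector = apic->svi;           /* Vector <- SVI */
        int next;

        bm_clear(apic->visr, vector);              /* VISR[Vector] <- 0 */

        next = highest_set(apic->visr);            /* recompute SVI */
        apic->svi = (next >= 0) ? next : 0;

        perform_ppr_virtualization(apic);

        if (bm_test(apic->eoi_exit_bitmap, vector))
            eoi_induced_vmexit(vector);            /* trap to the hypervisor */
        else
            evaluate_pending_virtual_interrupts(apic); /* stays in the guest */
    }

The last ELSE branch is exactly the one in question: if the guest never
performs the EOI, neither path runs.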

In any case we could consider introducing an ack_APIC_irq() call at the
beginning of xen_evtchn_do_upcall, so that the vector callback mechanism
can take advantage of posted interrupts too.
Of course we would do that only if posted interrupts are available.
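
A minimal sketch of that change, based on xen_evtchn_do_upcall() in
drivers/xen/events.c (circa Linux 3.x); the xen_have_posted_interrupts
flag is hypothetical, just a placeholder for whatever capability check
we would end up using:

    /* Sketch only: xen_evtchn_do_upcall() with an early APIC ack. */
    #include <linux/types.h>
    #include <linux/hardirq.h>      /* irq_enter / irq_exit */
    #include <asm/irq_regs.h>       /* set_irq_regs */
    #include <asm/apic.h>           /* ack_APIC_irq */
    #include <asm/idle.h>           /* exit_idle */

    extern void __xen_evtchn_do_upcall(void);

    static bool xen_have_posted_interrupts;  /* hypothetical capability flag */

    void xen_evtchn_do_upcall(struct pt_regs *regs)
    {
            struct pt_regs *old_regs = set_irq_regs(regs);

            irq_enter();
            exit_idle();

            /*
             * Ack the vector callback up front so that, with virtual
             * interrupt delivery, the virtualized EOI lets the processor
             * evaluate any further pending virtual interrupts instead of
             * leaving them blocked behind the in-service bit.
             */
            if (xen_have_posted_interrupts)
                    ack_APIC_irq();

            __xen_evtchn_do_upcall();

            irq_exit();
            set_irq_regs(old_regs);
    }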