
Re: [Xen-devel] [PATCH v4 04/10] xen/arm: set GICH_HCR_UIE if all the LRs are in use



On Fri, 21 Mar 2014, Ian Campbell wrote:
> On Wed, 2014-03-19 at 12:31 +0000, Stefano Stabellini wrote:
> > On return to guest, if there are no free LRs and we still have more
> > interrupts to inject, set GICH_HCR_UIE so that we receive a
> > maintenance interrupt once no pending interrupts remain in the LR
> > registers.
> > The maintenance interrupt handler no longer does any work itself,
> > but receiving the interrupt causes gic_inject to be called on return
> > to guest, which then clears the old LRs and injects new interrupts.
> 
> Can you add a comment to the (now dummy) interrupt handler explaining
> this please.

Sure
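
Something along these lines, perhaps (just a sketch of the wording; the
handler name and its empty body are assumed here, not copied from the
patch):

```c
/*
 * The maintenance interrupt is only used to force an exit from the
 * guest when GICH_HCR_UIE is set and the LRs drain.  All the actual
 * work -- clearing inactive LRs and injecting the interrupts still
 * queued on lr_pending -- happens in gic_inject on the return-to-guest
 * path, so the handler itself has nothing to do.
 */
static void maintenance_interrupt(void)
{
    /* intentionally empty, see comment above */
}
```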


> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > 
> > ---
> > 
> > Changes in v2:
> > - disable/enable the GICH_HCR_UIE bit in GICH_HCR;
> > - only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).
> > ---
> >  xen/arch/arm/gic.c |    6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 32d3bea..d445e8b 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -790,6 +790,12 @@ void gic_inject(void)
> >          vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
> >  
> >      gic_restore_pending_irqs(current);
> > +
> > +    if ( !list_empty(&current->arch.vgic.lr_pending) &&
> > +         this_cpu(lr_mask) == ((1 << nr_lrs) - 1) )
> 
> Helper like lr_all_full?

Good idea
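
Something like this, maybe (a standalone sketch: nr_lrs and lr_mask are
modelled as plain variables here rather than Xen's per-CPU state, and
lr_all_full is only the suggested name):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for Xen's state: nr_lrs would come from GICH_VTR and the
 * mask from this_cpu(lr_mask), one bit per List Register in use. */
static unsigned int nr_lrs = 4;
static uint32_t lr_mask;

/* True when every LR is occupied, i.e. the bottom nr_lrs bits of the
 * mask are all set -- the condition gic_inject tests before setting
 * GICH_HCR_UIE. */
static bool lr_all_full(void)
{
    return lr_mask == ((1u << nr_lrs) - 1);
}
```

gic_inject would then test `!list_empty(...) && lr_all_full()` instead
of open-coding the mask comparison.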


> > +        GICH[GICH_HCR] |= GICH_HCR_UIE;
> > +    else
> > +        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
> >  }
> >  
> >  int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
