
Re: [Xen-devel] [PATCH v7 11/12] xen/arm: gic_events_need_delivery and irq priorities



On Tue, 2014-04-08 at 16:12 +0100, Stefano Stabellini wrote:
> gic_events_need_delivery should only return positive if an outstanding
> pending irq has an higher priority than the currently active irq and the
> priority mask.
> Introduce GIC_IRQ_GUEST_ACTIVE to track which one is the currently
> active guest irq.

There can be multiple such interrupts, can't there? In which case "which
ones are the currently active guest irqs" or "which IRQs are currently
active" or something would be clearer.

> Rewrite the function by going through the priority ordered inflight and
> lr_queue lists.
> 
> In gic_restore_pending_irqs replace lower priority pending (and not
> active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
> are available.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> 
> ---
> 
> Changes in v7:
> - fix locking for the list_empty case in gic_restore_pending_irqs;
> - add in code comment;
> - gic_events_need_delivery: break out of the loop as soon as we find the
> active irq as inflight_irqs is ordered by priority;
> - gic_events_need_delivery: break out of the loop if p->priority is
> lower than mask_priority as inflight_irqs is ordered by priority;
> - use find_next_zero_bit instead of find_first_zero_bit;
> - in gic_restore_pending_irqs remember the last position of the inner
> loop search and continue from there;
> - in gic_restore_pending_irqs use a priority check to get out of the
> inner loop.
> 
> Changes in v5:
> - improve in code comments;
> - use list_for_each_entry_reverse instead of writing my own list walker.
> 
> Changes in v4:
> - in gic_events_need_delivery go through inflight_irqs and only consider
> enabled irqs.
> ---
>  xen/arch/arm/gic.c           |   84 ++++++++++++++++++++++++++++++++++++++----
>  xen/include/asm-arm/domain.h |    5 ++-
>  xen/include/asm-arm/gic.h    |    3 ++
>  3 files changed, 82 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6e6f1a..9295ccf 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -721,6 +721,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>      p = irq_to_pending(v, irq);
>      if ( lr & GICH_LR_ACTIVE )
>      {
> +        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
>          /* HW interrupts cannot be ACTIVE and PENDING */
>          if ( p->desc == NULL &&
>               test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> @@ -735,6 +736,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>          if ( p->desc != NULL )
>              p->desc->status &= ~IRQ_INPROGRESS;
>          clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> +        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
>          p->lr = GIC_INVALID_LR;
>          if ( test_bit(GIC_IRQ_GUEST_QUEUED, &p->status) &&
>                  test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> @@ -763,22 +765,53 @@ void gic_clear_lrs(struct vcpu *v)
>  
>  static void gic_restore_pending_irqs(struct vcpu *v)
>  {
> -    int i;
> -    struct pending_irq *p, *t;
> +    int i = 0, lrs = nr_lrs;
> +    struct pending_irq *p, *t, *p_r;
> +    struct list_head *inflight_r;
>      unsigned long flags;
>  
> +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +    if ( list_empty(&v->arch.vgic.lr_pending) )
> +        goto out;
> +
> +    inflight_r = &v->arch.vgic.inflight_irqs;
>      list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
>      {
> -        i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
> -        if ( i >= nr_lrs ) return;
> +        i = find_next_zero_bit(&this_cpu(lr_mask), nr_lrs, i);

While you are rewriting this, renaming the variable i to "lr" would
be a lot clearer.

> +        if ( i >= nr_lrs )
> +        {
> +            /* No more free LRs: find a lower priority irq to evict */
> +            list_for_each_entry_reverse( p_r, inflight_r, inflight )
> +            {
> +                inflight_r = &p_r->inflight;
> +                if ( p_r->priority == p->priority )
> +                    goto out;
> +                if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
> +                     !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
> +                    goto found;
> +            }

Please can you add a comment here:
                /* We didn't find a victim this time, and we won't next time, so quit */

> +            goto out;
> +
> +found:
> +            i = p_r->lr;
> +            p_r->lr = GIC_INVALID_LR;
> +            set_bit(GIC_IRQ_GUEST_QUEUED, &p_r->status);
> +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
> +            gic_add_to_lr_pending(v, p_r);
> +        }
>  
> -        spin_lock_irqsave(&v->arch.vgic.lock, flags);
>          gic_set_lr(i, p, GICH_LR_PENDING);
>          list_del_init(&p->lr_queue);
>          set_bit(i, &this_cpu(lr_mask));
> -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
> +        lrs--;
> +        if ( lrs == 0 )
> +            break;
>      }
>  
> +out:
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>  }
>  
>  void gic_clear_pending_irqs(struct vcpu *v)
> @@ -794,8 +827,43 @@ void gic_clear_pending_irqs(struct vcpu *v)
>  
>  int gic_events_need_delivery(void)
>  {
> -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> -            this_cpu(lr_mask));
> +    int mask_priority, lrs = nr_lrs;
> +    int max_priority = 0xff, active_priority = 0xff;
> +    struct vcpu *v = current;
> +    struct pending_irq *p;
> +    unsigned long flags;
> +
> +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;

mask_priority is a bit meaningless (I know the docs call it
priority_mask), but it's the vCPU's current priority, right?

Also, by adding a << 3 here I think the rest of the logic reads
more easily, because you are then using the priority directly and
ignoring the fact that VMCR has a limited precision. Whereas with the
shift >> 3 at the comparison sites you have to think about it in each
case.
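A minimal sketch of what I mean, with made-up field positions (the
real GICH_VMCR layout and the GICH_VMCR_* constants may differ):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: the VMCR priority-mask field holds the top 5 bits
 * of an 8-bit priority, so shifting it left by 3 restores the full
 * 8-bit scale once, up front. */
#define VMCR_PRIORITY_SHIFT 27
#define VMCR_PRIORITY_MASK  0x1fU

static unsigned int vcpu_priority_threshold(uint32_t vmcr)
{
    /* << 3 here, so every later comparison uses 8-bit priorities
     * directly instead of shifting p->priority down at each site */
    return ((vmcr >> VMCR_PRIORITY_SHIFT) & VMCR_PRIORITY_MASK) << 3;
}
```

Then both the in-loop check and the final comparison become plain
`p->priority >= threshold` style tests.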

> +
> +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +    /* TODO: We order the guest irqs by priority, but we don't change
> +     * the priority of host irqs. */
> +    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
> +    {
> +        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
> +        {
> +            if ( p->priority < active_priority )
> +                active_priority = p->priority;
> +            break;
> +        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
> +            if ( p->priority < max_priority )
> +                max_priority = p->priority;
> +        }
> +        if ( (p->priority >> 3) >= mask_priority )
> +            break;

This lrs-- stuff needs a comment, I think along the lines of: only the
first nr_lrs interrupts need to be considered because XXX.

Not sure what XXX is -- if we have 4 lrs and inflight_irqs has 10
entries on it (say) of which the first 5 happen to be masked, don't we
want to keep going until we've looked at more than the first 4 entries
or something? Or should this decrement be conditional on ENABLE and/or
ACTIVE perhaps?

You exit the loop as soon as you see an ACTIVE IRQ, but can't you also
exit if you see an ENABLED IRQ? If you see an enabled IRQ but you
haven't yet seen an active one then don't you know that max_priority <
active_priority? (this might be best discussed around a whiteboard...)


> +        lrs--;
> +        if ( lrs == 0 )
> +            break;
> +    }
> +
> +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +
> +    if ( max_priority < active_priority &&
> +         (max_priority >> 3) < mask_priority )
> +        return 1;
> +    else
> +        return 0;
>  }
>  
>  void gic_inject(void)

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

