Re: [Xen-devel] [PATCH v10 03/32] ARM: vGIC: move irq_to_pending() calls under the VGIC VCPU lock
On Tue, 30 May 2017, Julien Grall wrote:
> Hi Andre,
>
> On 26/05/17 18:35, Andre Przywara wrote:
> > So far irq_to_pending() is just a convenience function to lookup
> > statically allocated arrays. This will change with LPIs, which are
> > more dynamic, so the memory for their struct pending_irq might go away.
> > The proper answer to the issue of preventing stale pointers is
> > ref-counting, which requires more work and will be introduced in a
> > later patch.
> > For now move the irq_to_pending() calls that are used with LPIs under the
> > VGIC VCPU lock, and only use the returned pointer while holding the lock.
> > This prevents the memory from being freed while we use it.
> > For the sake of completeness we take care of all irq_to_pending()
> > users, even those which will never deal with LPIs.
> >
> > Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> > ---
> > xen/arch/arm/gic.c | 5 ++++-
> > xen/arch/arm/vgic.c | 39 ++++++++++++++++++++++++++++++---------
> > 2 files changed, 34 insertions(+), 10 deletions(-)
> >
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index da19130..dcb1783 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -402,10 +402,13 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
> >
> > void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
> > {
> > - struct pending_irq *p = irq_to_pending(v, virtual_irq);
> > + struct pending_irq *p;
> > unsigned long flags;
> >
> > spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > + p = irq_to_pending(v, virtual_irq);
> > +
> > if ( !list_empty(&p->lr_queue) )
> > list_del_init(&p->lr_queue);
> > spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 54b2aad..69d732b 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -234,23 +234,29 @@ static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
> > bool vgic_migrate_irq(struct vcpu *old, struct vcpu *new, unsigned int irq)
> > {
> > unsigned long flags;
> > - struct pending_irq *p = irq_to_pending(old, irq);
> > + struct pending_irq *p;
> > +
> > + spin_lock_irqsave(&old->arch.vgic.lock, flags);
> > +
> > + p = irq_to_pending(old, irq);
> >
> > /* nothing to do for virtual interrupts */
> > if ( p->desc == NULL )
> > + {
> > + spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> > return true;
> > + }
> >
> > /* migration already in progress, no need to do anything */
> > if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
> > {
> > gprintk(XENLOG_WARNING, "irq %u migration failed: requested while in progress\n", irq);
> > + spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> > return false;
> > }
> >
> > perfc_incr(vgic_irq_migrates);
> >
> > - spin_lock_irqsave(&old->arch.vgic.lock, flags);
> > -
> > if ( list_empty(&p->inflight) )
> > {
> > irq_set_affinity(p->desc, cpumask_of(new->processor));
> > @@ -285,6 +291,13 @@ void arch_move_irqs(struct vcpu *v)
> > struct vcpu *v_target;
> > int i;
> >
> > + /*
> > + * We don't migrate LPIs at the moment.
> > + * If we ever do, we must make sure that the struct pending_irq does
> > + * not go away, as there is no lock preventing this here.
> > + */
> > + ASSERT(!is_lpi(vgic_num_irqs(d) - 1));
>
> Hmmm? This raises the question of why vgic_num_irqs does not include the LPIs
> today...
>
> > +
> > for ( i = 32; i < vgic_num_irqs(d); i++ )
> > {
> > v_target = vgic_get_target_vcpu(v, i);
> > @@ -299,6 +312,7 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> > {
> > const unsigned long mask = r;
> > struct pending_irq *p;
> > + struct irq_desc *desc;
> > unsigned int irq;
> > unsigned long flags;
> > int i = 0;
> > @@ -307,14 +321,19 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
> > while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> > irq = i + (32 * n);
> > v_target = vgic_get_target_vcpu(v, irq);
> > +
> > + spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> > p = irq_to_pending(v_target, irq);
> > clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > gic_remove_from_queues(v_target, irq);
>
> gic_remove_from_queues is taking the v_target vGIC lock, so you just introduced
> a deadlock. You remove it in the next patch, but still, we should not introduce
> regressions even temporarily. That would make it difficult to bisect the
> series.
Yeah, we cannot introduce regressions like this one.
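To spell the deadlock out, a minimal sketch pieced together from the hunks
quoted above (the vGIC VCPU lock is an ordinary, non-recursive spinlock):

    /* With this patch, vgic_disable_irqs() does roughly: */
    spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
    p = irq_to_pending(v_target, irq);
    clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
    gic_remove_from_queues(v_target, irq);   /* called with the lock held... */

    /* ...while gic_remove_from_queues() now tries to take the same lock: */
    void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
    {
        struct pending_irq *p;
        unsigned long flags;

        spin_lock_irqsave(&v->arch.vgic.lock, flags); /* second acquisition -> deadlock */
        ...
    }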
> TBH, I am not a big fan of spreading the mess of vGIC locking when we are
> going to rework the vGIC and know that this code will not be called for LPIs.
I asked for this in
alpine.DEB.2.10.1705191729560.18759@sstabellini-ThinkPad-X260, so that we
covered all bases. The double lock is bad, but the rest of the changes
look OK to me.
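One way to keep the locking but avoid the nested acquisition would be a
variant of the helper that expects the caller to already hold the vGIC VCPU
lock. A sketch only; the helper name below is made up and not necessarily
what the follow-up patch does:

    /* Hypothetical helper: caller must hold v->arch.vgic.lock. */
    static void gic_remove_from_queues_nolock(struct vcpu *v, unsigned int virtual_irq)
    {
        struct pending_irq *p = irq_to_pending(v, virtual_irq);

        ASSERT(spin_is_locked(&v->arch.vgic.lock));

        if ( !list_empty(&p->lr_queue) )
            list_del_init(&p->lr_queue);
    }

    /* vgic_disable_irqs() would then call it with the lock already taken: */
    spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
    p = irq_to_pending(v_target, irq);
    clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
    gic_remove_from_queues_nolock(v_target, irq);
    desc = p->desc;
    spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);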
> BTW, this series is not bisectable because the host ITS is only enabled from
> patch #12.
>
> > - if ( p->desc != NULL )
> > + desc = p->desc;
> > + spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> > +
> > + if ( desc != NULL )
> > {
> > - spin_lock_irqsave(&p->desc->lock, flags);
> > - p->desc->handler->disable(p->desc);
> > - spin_unlock_irqrestore(&p->desc->lock, flags);
> > + spin_lock_irqsave(&desc->lock, flags);
> > + desc->handler->disable(desc);
> > + spin_unlock_irqrestore(&desc->lock, flags);
> > }
> > i++;
> > }
> > @@ -349,9 +368,9 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
> > while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> > irq = i + (32 * n);
> > v_target = vgic_get_target_vcpu(v, irq);
> > + spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> > p = irq_to_pending(v_target, irq);
> > set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > - spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> > if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> > gic_raise_guest_irq(v_target, irq, p->priority);
> > spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
> > @@ -460,7 +479,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
> > void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> > {
> > uint8_t priority;
> > - struct pending_irq *iter, *n = irq_to_pending(v, virq);
> > + struct pending_irq *iter, *n;
> > unsigned long flags;
> > bool running;
> >
> > @@ -468,6 +487,8 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> >
> > spin_lock_irqsave(&v->arch.vgic.lock, flags);
> >
> > + n = irq_to_pending(v, virq);
> > +
> > /* vcpu offline */
> > if ( test_bit(_VPF_down, &v->pause_flags) )
> > {
> >
>
> Cheers,
>
> --
> Julien Grall
>