
Re: [Xen-devel] [PATCH v5 1/3] arm: remove irq from inflight, then change physical affinity



On Fri, 3 Mar 2017, Julien Grall wrote:
> Hi Stefano,
> 
> On 03/03/17 19:34, Stefano Stabellini wrote:
> > On Fri, 3 Mar 2017, Julien Grall wrote:
> > > On 01/03/17 23:24, Julien Grall wrote:
> > > > On 01/03/2017 22:15, Stefano Stabellini wrote:
> > > > > This patch fixes a potential race that could happen when
> > > > > gic_update_one_lr and vgic_vcpu_inject_irq run simultaneously.
> > > > > 
> > > > > When GIC_IRQ_GUEST_MIGRATING is set, we must make sure that the irq has
> > > > > been removed from inflight before changing physical affinity, to avoid
> > > > > concurrent accesses to p->inflight, as vgic_vcpu_inject_irq will take a
> > > > > different vcpu lock.
> > > > > 
> > > > > Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > > > > ---
> > > > >  xen/arch/arm/gic.c | 5 +++++
> > > > >  1 file changed, 5 insertions(+)
> > > > > 
> > > > > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > > > > index 9522c6c..16bb150 100644
> > > > > --- a/xen/arch/arm/gic.c
> > > > > +++ b/xen/arch/arm/gic.c
> > > > > @@ -503,6 +503,11 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> > > > >              gic_raise_guest_irq(v, irq, p->priority);
> > > > >          else {
> > > > >              list_del_init(&p->inflight);
> > > > > +            /* Remove from inflight, then change physical affinity. It
> > > > > +             * makes sure that when a new interrupt is received on the
> > > > > +             * next pcpu, inflight is already cleared. No concurrent
> > > > > +             * accesses to inflight. */
> > > > 
> > > > Coding style:
> > > > 
> > > > /*
> > > >  * ...
> > > >  */
> > > > 
> > > > > +            smp_mb();
> > > > 
> > > > Barriers work in pairs. So where is the associated barrier?
> > > > 
> > > > Also, I am still unsure why you use a dmb(ish) (the implementation of
> > > > smp_mb) and not a dmb(sy).
> > > 
> > > I looked a bit more into this. Quoting a commit message in Linux (see
> > > 8adbf57fc429 "irqchip: gic: use dmb ishst instead of dsb when raising a
> > > softirq"):
> > > 
> > > "When sending an SGI to another CPU, we require a barrier to ensure that
> > > any pending stores to normal memory are made visible to the recipient
> > > before the interrupt arrives.
> > > 
> > > Rather than use a vanilla dsb() (which will soon cause an assembly error
> > > on arm64) before the writel_relaxed, we can instead use dsb(ishst), since
> > > we just need to ensure that any pending normal writes are visible within
> > > the inner-shareable domain before we poke the GIC.
> > > 
> > > With this observation, we can then further weaken the barrier to a
> > > dmb(ishst), since other CPUs in the inner-shareable domain must observe
> > > the write to the distributor before the SGI is generated."
> > > 
> > > So smp_mb() is fine here. You could even use smp_wmb(), because you only
> > > care about the write access.
> > > 
> > > Also, I think the barrier on the other side is not necessary, because if
> > > you received an interrupt it means the processor has observed the change
> > > in the distributor. What do you think?
> > 
> > I agree with you, I think you are right. My reasoning was the same as
> > yours.
> > 
> > I didn't use smp_wmb() because that's used between two writes, while
> > here, after the barrier we could have a read (in fact we have
> > test_and_clear_bit, but the following patches turn it into a test_bit,
> > which is a read).
> 
> Why do you care about the read? You only want to ensure the ordering between
> list_del and writing the new affinity into the GIC.

Yes, you are right. I'll change the smp_mb() into an smp_wmb().
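
Roughly, the hunk would then look like the sketch below (based only on the
diff quoted above, with the comment reflowed to the requested multi-line
style; the code after the barrier, i.e. the MIGRATING check and the physical
affinity update, is omitted and may differ in the next version of the patch):

            gic_raise_guest_irq(v, irq, p->priority);
        else {
            list_del_init(&p->inflight);
            /*
             * Remove from inflight, then change physical affinity. It
             * makes sure that when a new interrupt is received on the
             * next pcpu, inflight is already cleared. No concurrent
             * accesses to inflight.
             */
            smp_wmb();
            /* ... MIGRATING test and physical affinity update follow ... */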

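For reference, the two barriers being compared differ roughly as below on
Xen/arm (an assumed sketch for illustration, not quoted from the tree; the
authoritative definitions live in xen/include/asm-arm/system.h):

    /* Assumed shape of the barrier macros, for illustration only. */
    #define dmb(opt)    asm volatile("dmb " #opt : : : "memory")

    #define smp_mb()    dmb(ish)    /* loads and stores, inner-shareable domain */
    #define smp_wmb()   dmb(ishst)  /* stores only, inner-shareable domain */

    /*
     * smp_wmb() is enough here: list_del_init() is a store, and it only needs
     * to be visible before the store to the distributor that changes the
     * physical affinity.
     */
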