
[Xen-devel] [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock



Instead of using a single atomic access and hoping for the best, let's
use the new pending_irq lock to make sure we read a consistent value of
the target VCPU ID.
This still doesn't solve the lifetime problem mentioned in the comment,
but paves the way for future improvements.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
 xen/arch/arm/gic-v3-lpi.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 2306b58..9db26ed 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -140,20 +140,22 @@ void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
 {
     /*
      * TODO: this assumes that the struct pending_irq stays valid all of
-     * the time. We cannot properly protect this with the current locking
-     * scheme, but the future per-IRQ lock will solve this problem.
+     * the time. We cannot properly protect this with the current code,
+     * but a future refcounting will solve this problem.
      */
     struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
+    unsigned long flags;
     unsigned int vcpu_id;
 
     if ( !p )
         return;
 
-    vcpu_id = ACCESS_ONCE(p->vcpu_id);
-    if ( vcpu_id >= d->max_vcpus )
-          return;
+    vgic_irq_lock(p, flags);
+    vcpu_id = p->vcpu_id;
+    vgic_irq_unlock(p, flags);
 
-    vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
+    if ( vcpu_id < d->max_vcpus )
+        vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
 }
 
 /*
-- 
2.9.0
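
For reference, the vgic_irq_lock()/vgic_irq_unlock() helpers used in the
hunk above are introduced earlier in this series. A minimal sketch of how
such per-IRQ locking could look, assuming struct pending_irq carries a
spinlock member named "lock" (both the helper shape and the field name are
assumptions here, not taken from this patch):

    /* Hypothetical sketch: per-IRQ lock helpers, assuming a spinlock_t
     * "lock" member was added to struct pending_irq earlier in this
     * series. The flags argument saves/restores the interrupt state,
     * as with any spin_lock_irqsave() user. */
    #define vgic_irq_lock(p, flags) \
        spin_lock_irqsave(&(p)->lock, flags)
    #define vgic_irq_unlock(p, flags) \
        spin_unlock_irqrestore(&(p)->lock, flags)

With helpers like these, the lock/read/unlock sequence in
vgic_vcpu_inject_lpi() guarantees that vcpu_id is read under the lock, so
a concurrent retargeting cannot leave us with a torn or stale value. As
the TODO in the hunk notes, the pending_irq itself could still go away
underneath us; that lifetime issue is deferred to future refcounting.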



 

