[Xen-devel] [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq()



Since a VCPU can own multiple IRQs, the natural locking order is to take
a VCPU lock first, then the individual per-IRQ locks.
However, there are situations where the target VCPU is not known without
looking into the struct pending_irq first, which usually means we have to
take the IRQ lock first.
To solve this problem, we provide a function called vgic_lock_vcpu_irq(),
which takes a locked struct pending_irq and returns with *both* the VCPU
and the IRQ lock held.
This is done by looking up the target VCPU, briefly dropping the IRQ
lock, taking the VCPU lock and then grabbing the per-IRQ lock again.
Before returning, we check whether the target VCPU changed during the
brief period in which we did not hold the IRQ lock, and retry in that
(very rare) case.
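
As an illustration (not part of this patch), a caller that starts out
holding only the per-IRQ lock could look like the sketch below; the
function name and the priority update are made up for the example and
only show the intended calling convention:

    /*
     * Illustrative sketch only: vgic_example_set_priority() is a
     * made-up caller, not part of this patch.
     */
    static void vgic_example_set_priority(struct vcpu *v,
                                          struct pending_irq *p,
                                          uint8_t priority)
    {
        unsigned long flags;
        struct vcpu *target_vcpu;

        /* Only the IRQ is known at this point, so take its lock first. */
        spin_lock(&p->lock);

        /* Returns with both the VCPU and the IRQ lock held. */
        target_vcpu = vgic_lock_vcpu_irq(v, p, &flags);

        p->priority = priority;     /* safe: both locks are held */

        spin_unlock(&p->lock);
        spin_unlock_irqrestore(&target_vcpu->arch.vgic.lock, flags);
    }

Briefly dropping and re-taking the IRQ lock trades the potential
IRQ-then-VCPU deadlock for a retry in the rare case that the target
VCPU changes while the IRQ lock is not held.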

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
 xen/arch/arm/vgic.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 1ba0010..0e6dfe5 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -224,6 +224,48 @@ int vcpu_vgic_free(struct vcpu *v)
     return 0;
 }
 
+/**
+ * vgic_lock_vcpu_irq(): lock both the pending_irq and the corresponding VCPU
+ *
+ * @v: the VCPU (for private IRQs)
+ * @p: pointer to the locked struct pending_irq
+ * @flags: pointer to the IRQ flags used when locking the VCPU
+ *
+ * The function takes a locked IRQ and returns with both the IRQ and the
+ * corresponding VCPU locked. This is non-trivial because the locking order
+ * is actually the other way round (VCPU first, then IRQ).
+ *
+ * Returns: pointer to the VCPU this IRQ is targeting.
+ */
+struct vcpu *vgic_lock_vcpu_irq(struct vcpu *v, struct pending_irq *p,
+                                unsigned long *flags)
+{
+    struct vcpu *target_vcpu;
+
+    ASSERT(spin_is_locked(&p->lock));
+
+    target_vcpu = vgic_get_target_vcpu(v, p);
+    spin_unlock(&p->lock);
+
+    do
+    {
+        struct vcpu *current_vcpu;
+
+        spin_lock_irqsave(&target_vcpu->arch.vgic.lock, *flags);
+        spin_lock(&p->lock);
+
+        current_vcpu = vgic_get_target_vcpu(v, p);
+
+        if ( target_vcpu->vcpu_id == current_vcpu->vcpu_id )
+            return target_vcpu;
+
+        spin_unlock(&p->lock);
+        spin_unlock_irqrestore(&target_vcpu->arch.vgic.lock, *flags);
+
+        target_vcpu = current_vcpu;
+    } while ( 1 );
+}
+
 struct vcpu *vgic_get_target_vcpu(struct vcpu *v, struct pending_irq *p)
 {
     struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
-- 
2.9.0

