Re: [Xen-devel] [RFC PATCH 22/49] ARM: new VGIC: Implement virtual IRQ injection
Hi Andre,

On 09/02/18 14:39, Andre Przywara wrote:
> Provide a vgic_queue_irq_unlock() function which decides whether a
> given IRQ needs to be queued to a VCPU's ap_list. This should be
> called whenever an IRQ becomes pending or enabled, either as a result
> of a hardware IRQ injection, from devices emulated by Xen (like the
> architected timer) or from MMIO accesses to the distributor emulation.
> Also provides the necessary functions to allow to inject an IRQ to a
> guest.
> Since this is the first code that starts using our locking mechanism,
> we add some (hopefully) clear documentation of our locking strategy
> and requirements along with this patch.
>
> This is based on Linux commit 81eeb95ddbab, written by Christoffer
> Dall.
>
> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
> ---
>  xen/arch/arm/vgic/vgic.c | 224 +++++++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/arm/vgic/vgic.h |  10 +++
>  2 files changed, 234 insertions(+)
>
> diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
> index 3075091caa..f517df6d00 100644
> --- a/xen/arch/arm/vgic/vgic.c
> +++ b/xen/arch/arm/vgic/vgic.c
> @@ -21,6 +21,32 @@
>  #include <asm/arm_vgic.h>
>  #include "vgic.h"
> +/*
> + * Locking order is always:
> + * kvm->lock (mutex)

You probably want to update the locking order to match the Xen one. In
that case, I am not sure if we need to take the domain lock in the code?

> + * its->cmd_lock (mutex)
> + * its->its_lock (mutex)
> + * vgic_cpu->ap_list_lock
> + * kvm->lpi_list_lock
> + * vgic_irq->irq_lock
> + *
> + * If you need to take multiple locks, always take the upper lock first,
> + * then the lower ones, e.g. first take the its_lock, then the irq_lock.
> + * If you are already holding a lock and need to take a higher one, you
> + * have to drop the lower ranking lock first and re-aquire it after having

s/re-aquire/re-acquire/

> + * taken the upper one.
> + *
> + * When taking more than one ap_list_lock at the same time, always take the
> + * lowest numbered VCPU's ap_list_lock first, so:
> + *   vcpuX->vcpu_id < vcpuY->vcpu_id:
> + *     spin_lock(vcpuX->arch.vgic_cpu.ap_list_lock);
> + *     spin_lock(vcpuY->arch.vgic_cpu.ap_list_lock);
> + *
> + * Since the VGIC must support injecting virtual interrupts from ISRs, we have
> + * to use the spin_lock_irqsave/spin_unlock_irqrestore versions of outer
> + * spinlocks for any lock that may be taken while injecting an interrupt.

It is quite nice to see the locking explained in the file, and in
general a lot of explanation within the code :).

[...]

I am not sure to understand why you check whether irq->vcpu is NULL. If
the interrupt is active, then irq->vcpu should be NULL. Did I miss
anything?

The indentation looks wrong here.

> + return NULL;
> +
> + return irq->target_vcpu;
> + }
> +
> + /* If neither active nor pending and enabled, then this IRQ should not

Comment style:

/*
 * ...

> switch ( ... )

[...]

I would add an ASSERT_UNREACHABLE().

> + return false;
> +}
> +
> +/*
> + * Check whether an IRQ needs to (and can) be queued to a VCPU's ap list.
> + * Do the queuing if necessary, taking the right locks in the right order.
> + * Returns true when the IRQ was queued, false otherwise.
> + *
> + * Needs to be entered with the IRQ lock already held, but will return
> + * with all locks dropped.
> + */
> +bool vgic_queue_irq_unlock(struct domain *d, struct vgic_irq *irq,
> +                           unsigned long flags)

Indentation. Also same remark as for vgic_inject_irq. No-one seems to
care about the return value (even in KVM :)).

[...]

vcpu_unblock will only "unblock" a vCPU that is blocked. It won't
notify a running vCPU. So you want to have something similar to:

    vcpu_unblock(vcpu);
    if ( running && vcpu != current )
        smp_send_event_check_mask(...);

It is probably worth introducing a helper for that.

[...]

I was expecting the list to be sorted here. But you seem to do it only
in vgic_flush_lr_state(), which is quite interesting.
I can foresee quite a few issues with this choice on Xen:

1) You compute the size of the ap list in vgic_flush_lr_state() and take
   the lock on every IRQ one by one. A guest could be nasty and make that
   list quite big by making IRQs pending but never "activating" them
   (i.e. reading IAR).

2) This might be an issue while checking whether you need to deliver an
   interrupt (vgic_vcpu_pending_irq) because the list is not sorted.

> + irq->vcpu = vcpu;
> +
> + spin_unlock(&irq->irq_lock);
> + spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
> +
> + running = vcpu->is_running;
> + vcpu_unblock(vcpu);
> + if ( running && vcpu != current )
> +     smp_send_event_check_mask(cpumask_of(vcpu->processor));
> +
> + return true;
> +}
> +
> +/**
> + * vgic_inject_irq - Inject an IRQ from a device to the vgic
> + * @d:     The domain pointer
> + * @vcpu:  The vCPU for PPIs
> + * @intid: The INTID to inject a new state to.
> + * @level: Edge-triggered:  true:  to trigger the interrupt
> + *                          false: to ignore the call
> + *         Level-sensitive  true:  raise the input signal
> + *                          false: lower the input signal
> + *
> + * The VGIC is not concerned with devices being active-LOW or active-HIGH for
> + * level-sensitive interrupts. You can think of the level parameter as 1
> + * being HIGH and 0 being LOW and all devices being active-HIGH.
> + */
> +int vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
> +                    bool level)

Indentation.
Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel