
Re: [Xen-devel] [RFC PATCH 24/49] ARM: new VGIC: Add IRQ sync/flush framework

On 13/02/18 15:40, Andre Przywara wrote:

On 13/02/18 12:41, Julien Grall wrote:
Hi Andre,

On 09/02/18 14:39, Andre Przywara wrote:
gets called on guest entry and exit.
The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
   xen/arch/arm/vgic/vgic.c | 246
   xen/arch/arm/vgic/vgic.h |   2 +
   2 files changed, 248 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index a4efd1fd03..a1f77130d4 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -380,6 +380,252 @@ int vgic_inject_irq(struct domain *d, struct
vcpu *vcpu, unsigned int intid,
       return 0;
+/**
+ * vgic_prune_ap_list - Remove non-relevant interrupts from the list
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * Go over the list of "interesting" interrupts, and prune those that we
+ * won't have to consider in the near future.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )

See my comment on patch #22, this is where I worry about going
through the list every time we enter the hypervisor from the guest.

I am not sure we can avoid this here, as this function is crucial to the
VGIC state machine.
We might later look into whether we can avoid iterating through the
whole list, or whether we can shortcut some interrupts somehow, but I
really would be careful about tinkering with this function too much.

My biggest concern is how big the list can get. If the list is quite small, then this function is fine to call when entering the hypervisor. If you tell me it can be bigger, then I am quite reluctant to see this code in Xen. Obviously this could be solved afterwards, but clearly before we make this a default option in Xen.

To be clear, I am asking how much a guest can control the size of the list.


+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+    struct vgic_irq *irq;
+    int count = 0;
+    ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock));
+    list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list)

Here another example.

This can be shortcut, indeed. The only place we call this function is
where we compare it against the number of LRs below.
So we don't need to know the actual number of elements, but could bail
out once we have reached the number of LRs.
This function would then return a bool, comparing internally
against the number of LRs.

This would only solve one part of the problem. You still end up sorting the list if you have more vIRQs in the list than the number of LRs.

See above for my thoughts on the list.

+/**
+ * gic_clear_lrs() - Update the VGIC state from hardware after a guest's run.
+ * @vcpu: the VCPU.
+ *
+ * Sync back the hardware VGIC state after the guest has run, into our
+ * VGIC emulation structures. It reads the LRs and updates the
+ * struct vgic_irq, taking level/edge into account.
+ * This is the high level function which takes care of the conditions,
+ * and also bails out early if no interrupts were queued.
+ * Was: kvm_vgic_sync_hwstate()
+ */
+void gic_clear_lrs(struct vcpu *vcpu)

I think I would prefer if we stick with the KVM name.

Yes, please! ;-)
I always found those names confusing, especially gic_inject(). To be
honest I never know which is which in the KVM case either (sync vs.
flush), so I was wondering if we could call both the entry and the exit
handler "sync", but denote the direction, like:
vgic_sync_from_lrs() and vgic_sync_to_lrs().

+1 on the suggested naming :).


Julien Grall
