
Re: [Xen-devel] [RFC PATCH 06/24] ARM: GICv3 ITS: introduce host LPI array



Hi,

On 27/10/16 23:59, Stefano Stabellini wrote:
On Wed, 28 Sep 2016, Andre Przywara wrote:
The number of LPIs on a host can potentially be huge (millions),
although in practice it will mostly be reasonable. So prematurely allocating
an array of struct irq_desc's for each LPI is not an option.
However, Xen itself does not care about LPIs, as every LPI will be injected
into a guest (Dom0 for now).
Create a dense data structure (8 bytes) for each LPI which holds just
enough information to determine the virtual IRQ number and the VCPU into
which the LPI needs to be injected.
Also, to not artificially limit the number of LPIs, we create a 2-level
table for holding those structures.
This patch introduces functions to initialize these tables and to
create, look up and destroy entries for a given LPI.
We allocate and access LPI information in a way that does not require
a lock.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
 xen/arch/arm/gic-its.c        | 154 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/gic-its.h |  18 +++++
 2 files changed, 172 insertions(+)

diff --git a/xen/arch/arm/gic-its.c b/xen/arch/arm/gic-its.c
index 88397bc..2140e4a 100644
--- a/xen/arch/arm/gic-its.c
+++ b/xen/arch/arm/gic-its.c
@@ -18,18 +18,31 @@

 #include <xen/config.h>
 #include <xen/lib.h>
+#include <xen/sched.h>
 #include <xen/err.h>
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <asm/p2m.h>
+#include <asm/domain.h>
 #include <asm/io.h>
 #include <asm/gic.h>
 #include <asm/gic_v3_defs.h>
 #include <asm/gic-its.h>

+/* LPIs on the host always go to a guest, so no struct irq_desc for them. */
+union host_lpi {
+    uint64_t data;
+    struct {
+        uint64_t virt_lpi:32;
+        uint64_t dom_id:16;
+        uint64_t vcpu_id:16;
+    };
+};

Why not the following?

  union host_lpi {
      uint64_t data;
      struct {
          uint32_t virt_lpi;
          uint16_t dom_id;
          uint16_t vcpu_id;
      };
  };
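
Either layout keeps each entry at exactly 64 bits, which is what makes the
lock-free access mentioned in the commit message possible: a whole entry can
be snapshotted through the data member in a single read. A minimal sketch
(assuming something like Xen's ACCESS_ONCE, with p a hypothetical pointer to
one entry):

  union host_lpi snapshot;

  /* One aligned 64-bit read yields a consistent view of all three fields. */
  snapshot.data = ACCESS_ONCE(p->data);
  /* snapshot.virt_lpi, snapshot.dom_id and snapshot.vcpu_id decode the copy. */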


 /* Global state */
 static struct {
     uint8_t *lpi_property;
+    union host_lpi **host_lpis;
     int host_lpi_bits;
 } lpi_data;

@@ -43,6 +56,26 @@ static DEFINE_PER_CPU(void *, pending_table);
 #define MAX_HOST_LPI_BITS                                                \
         min_t(unsigned int, lpi_data.host_lpi_bits, CONFIG_HOST_LPI_BITS)
 #define MAX_HOST_LPIS   (BIT(MAX_HOST_LPI_BITS) - 8192)
+#define HOST_LPIS_PER_PAGE      (PAGE_SIZE / sizeof(union host_lpi))
+
+static union host_lpi *gic_find_host_lpi(uint32_t lpi, struct domain *d)

I take "lpi" is the physical lpi here. Maybe we would rename it to "plpi"
for clarity.

+1 here. We tend to use the prefix 'p' for physical and 'v' for virtual (e.g. virq/pirq, vcpu/pcpu). I'd like to see the same for the LPIs.

While we are here, I think the function should be named gic_find_plpi.
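
A sketch of what the renamed prototype could look like with that convention
(just the suggested naming, nothing else changed):

  static union host_lpi *gic_find_plpi(uint32_t plpi, struct domain *d);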



+{
+    union host_lpi *hlpi;
+
+    if ( lpi < 8192 || lpi >= MAX_HOST_LPIS + 8192 )
+        return NULL;
+
+    lpi -= 8192;
+    if ( !lpi_data.host_lpis[lpi / HOST_LPIS_PER_PAGE] )
+        return NULL;
+
+    hlpi = &lpi_data.host_lpis[lpi / HOST_LPIS_PER_PAGE][lpi % 
HOST_LPIS_PER_PAGE];

I realize I am sometimes obsessive about this, but division operations
are expensive and this is on the hot path, so I would do:

#define HOST_LPIS_PER_PAGE      (PAGE_SIZE >> 3)

unsigned int table = lpi / HOST_LPIS_PER_PAGE;

then use table throughout this function.
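
Putting that together, a rough sketch of the lookup body based on the hunk
above, computing each index only once:

  unsigned int table  = lpi / HOST_LPIS_PER_PAGE;
  unsigned int offset = lpi % HOST_LPIS_PER_PAGE;

  if ( !lpi_data.host_lpis[table] )
      return NULL;

  hlpi = &lpi_data.host_lpis[table][offset];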


+    if ( d && hlpi->dom_id != d->domain_id )
+        return NULL;

I think this function is very useful so I would avoid making any domain
checks here: one day we might want to retrieve hlpi even if hlpi->dom_id
!= d->domain_id. I would move the domain check outside.

+1.
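
For illustration, the caller-side check might then look roughly like this
(a sketch, assuming the lookup drops its domain argument):

  hlpi = gic_find_host_lpi(plpi);
  if ( !hlpi || hlpi->dom_id != d->domain_id )
      return;    /* unknown LPI, or owned by another domain */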

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

