
Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP fix





On 29/01/14 11:46, Stefano Stabellini wrote:
On Wed, 29 Jan 2014, Stefano Stabellini wrote:
On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
Hello all,

I just recalled a hack which we created because we needed to route a HW
IRQ to domU.

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d793ba..d0227b9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,

          LOG(DEBUG, "dom%d irq %d", domid, irq);

-        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
-                       : -EOVERFLOW;
          if (!ret)
              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
          if (ret < 0) {
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2e4b11f..b54c08e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
      if ( d->domain_id == 0 )
          d->arch.vgic.nr_lines = gic_number_lines() - 32;
      else
-        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
+        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */

      d->arch.vgic.shared_irqs =
          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 75e2df3..ba88901 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
  #include <asm/page.h>
  #include <public/domctl.h>
  #include <xsm/xsm.h>
+#include <asm/gic.h>

  static DEFINE_SPINLOCK(domctl_lock);
  DEFINE_SPINLOCK(vcpu_alloc_lock);
@@ -782,8 +783,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
              ret = -EINVAL;
          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
              ret = -EPERM;
-        else if ( allow )
-            ret = pirq_permit_access(d, pirq);
+        else if ( allow ) {
+            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
+            ret = pirq_permit_access(d, irq.irq);
+            gic_route_irq_to_guest(d, &irq, "");
+        }
          else
              ret = pirq_deny_access(d, pirq);
      }

It seems this patch can violate the logic of routing physical IRQs only
to CPU0. In gic_route_irq_to_guest() we call gic_set_irq_properties(),
where one of the parameters is cpumask_of(smp_processor_id()). But in
this part of the code the function can be executed on CPU1, which can
result in the wrong value being set in the target CPU mask.
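
For reference, the path in question looks roughly like this. This is a
simplified sketch of the 4.4-era xen/arch/arm/gic.c written from memory,
not the literal upstream code, and "level" and "priority" stand in for
the real arguments:

/* Simplified sketch of gic_route_irq_to_guest(); illustrative only. */
void gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
                            const char *devname)
{
    /* ... set up the irq_desc and guest routing state ... */

    /* The target CPU mask is taken from whichever physical CPU happens
     * to execute this code.  Called from the dom0 build path that is
     * always CPU0, but called from the do_domctl() hack above it can be
     * CPU1, so the physical IRQ ends up targeting the wrong CPU. */
    gic_set_irq_properties(irq->irq, level,
                           cpumask_of(smp_processor_id()), priority);

    /* ... attach the IRQ to the guest's vgic ... */
}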

Please confirm my assumption.

That is correct.


If I am right, we have to add basic HW IRQ routing to DomU in the right way.

We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe, for
now, we could just hardcode the cpumask of cpu0 in gic_route_irq_to_guest.
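
Something along these lines, as an untested sketch only (reusing the
placeholder names from the sketch earlier in the thread):

/* Option 1: let the caller choose the target CPU mask. */
void gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
                            const char *devname, const cpumask_t *mask)
{
    /* ... */
    gic_set_irq_properties(irq->irq, level, mask, priority);
    /* ... */
}

/* Option 2, the minimal change: keep the current signature and simply
 * replace cpumask_of(smp_processor_id()) with cpumask_of(0) in the
 * call above, so that routed SPIs always target CPU0. */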

However, keep in mind that if you plan on routing SPIs to guests other
than dom0, receiving all the interrupts on cpu0 might not be great for
performance.

Thinking twice about it, it might be the only acceptable change for 4.4.

In Xen upstream, gic_route_irq_to_guest is only called when dom0 is built (so on CPU0). I don't think we need this patch for Xen 4.4.

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

