
[Xen-devel] [PATCH v4 07/13] x86/IRQ: target online CPUs when binding guest IRQ


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Tue, 16 Jul 2019 07:41:37 +0000
  • Accept-language: en-US
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 16 Jul 2019 07:46:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

fixup_irqs() skips interrupts without action. Hence such interrupts can
retain an affinity mask naming only offline CPUs. With "noirqbalance" in
effect, pirq_guest_bind() would so far have left that stale affinity in
place, resulting in a non-working interrupt.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
v3: New.
---
I've not observed this problem in practice; the change is the result of
code inspection after noticing action-less IRQs in the 'i' debug key
output whose affinity pointed only at parked/offline CPUs.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1703,9 +1703,27 @@ int pirq_guest_bind(struct vcpu *v, stru
  
          desc->status |= IRQ_GUEST;
  
-        /* Attempt to bind the interrupt target to the correct CPU. */
-        if ( !opt_noirqbalance && (desc->handler->set_affinity != NULL) )
-            desc->handler->set_affinity(desc, cpumask_of(v->processor));
+        /*
+         * Attempt to bind the interrupt target to the correct (or at least
+         * some online) CPU.
+         */
+        if ( desc->handler->set_affinity )
+        {
+            const cpumask_t *affinity = NULL;
+
+            if ( !opt_noirqbalance )
+                affinity = cpumask_of(v->processor);
+            else if ( !cpumask_intersects(desc->affinity, &cpu_online_map) )
+            {
+                cpumask_setall(desc->affinity);
+                affinity = &cpumask_all;
+            }
+            else if ( !cpumask_intersects(desc->arch.cpu_mask,
+                                          &cpu_online_map) )
+                affinity = desc->affinity;
+            if ( affinity )
+                desc->handler->set_affinity(desc, affinity);
+        }
  
          desc->status &= ~IRQ_DISABLED;
          desc->handler->startup(desc);
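
For reference, the decision logic the hunk above introduces can be modelled
stand-alone. The sketch below is only an illustration, not code from the
tree: CPU masks are collapsed to plain 64-bit bitmasks, the descriptor to a
minimal struct, and the names irq_model / pick_affinity are invented for
this example.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_bits;     /* one bit per CPU, up to 64 CPUs */

struct irq_model {
    cpumask_bits affinity;              /* models desc->affinity */
    cpumask_bits cpu_mask;              /* models desc->arch.cpu_mask */
};

/*
 * Return the mask that would be handed to desc->handler->set_affinity(),
 * or 0 if the binding path would leave the current setting untouched.
 */
static cpumask_bits pick_affinity(struct irq_model *desc, cpumask_bits online,
                                  cpumask_bits target_cpu, bool noirqbalance)
{
    if ( !noirqbalance )
        return target_cpu;              /* bind to the vCPU's processor */

    if ( !(desc->affinity & online) )
    {
        /* Affinity names offline CPUs only: widen it to "all CPUs". */
        desc->affinity = ~0UL;
        return desc->affinity;
    }

    if ( !(desc->cpu_mask & online) )
        return desc->affinity;          /* re-apply the still-valid affinity */

    return 0;                           /* nothing to do */
}

int main(void)
{
    /* Only CPU 0 is online; the IRQ's recorded affinity names CPUs 2-3. */
    struct irq_model desc = { .affinity = 0xc, .cpu_mask = 0xc };

    printf("new affinity: %#lx\n",
           pick_affinity(&desc, 0x1, 0x1, true));  /* widened to all ones */
    return 0;
}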

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
