[PATCH v2 2/2] x86/IRQ: allocate guest array of max size only for shareable IRQs



... and increase default "irq_max_guests" to 32.

There is no need for an array larger than a single element for non-shareable
IRQs, and allocating one hurts scalability when high "irq_max_guests" values
are used - every IRQ in the system, including MSIs, would otherwise be
supplied with an array of that size.

Since using a higher "irq_max_guests" value is now less impactful, bump the
default to 32. That should give more headroom for future systems.
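
For a rough sense of scale (assuming 64-bit pointers, i.e. 8 bytes per
guest[] slot): with irq_max_guests at 32, a shareable IRQ's action carries
32 * 8 = 256 bytes of guest array, while a non-shareable IRQ (e.g. an MSI)
now only needs 1 * 8 = 8 bytes - roughly 250 bytes saved for each of the
potentially thousands of MSI vectors on a large system.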

Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
---

New in v2.
This was suggested by Jan and is optional from my point of view.
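
To illustrate the sizing logic the patch introduces, here is a minimal
standalone sketch (illustration only, not part of the patch -
irq_guest_action_t is reduced to a hypothetical two-field layout; the real
Xen type has more fields):

    #include <stdio.h>

    struct domain;                    /* opaque stand-in */

    typedef struct {
        unsigned int   nr_guests;     /* current number of bindings */
        struct domain *guest[];       /* flexible array, sized at alloc */
    } irq_guest_action_t;

    static size_t action_size(int will_share, unsigned int irq_max_guests)
    {
        /* Non-shareable IRQs only ever hold a single guest. */
        unsigned int max_nr_guests = will_share ? irq_max_guests : 1;

        return sizeof(irq_guest_action_t) +
               max_nr_guests * sizeof(struct domain *);
    }

    int main(void)
    {
        printf("shareable:     %zu bytes\n", action_size(1, 32));
        printf("non-shareable: %zu bytes\n", action_size(0, 32));
        return 0;
    }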

---
 docs/misc/xen-command-line.pandoc | 2 +-
 xen/arch/x86/irq.c                | 7 ++++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index f5f230c..dea2a22 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1644,7 +1644,7 @@ This option is ignored in **pv-shim** mode.
 ### irq_max_guests (x86)
 > `= <integer>`
 
-> Default: `16`
+> Default: `32`
 
 Maximum number of guests IRQ could be shared between, i.e. a limit on
 the number of guests it is possible to start each having assigned a device
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 5ae9846..70b7a53 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -440,7 +440,7 @@ int __init init_irq_data(void)
         irq_to_desc(irq)->irq = irq;
 
     if ( !irq_max_guests || irq_max_guests > 255)
-        irq_max_guests = 16;
+        irq_max_guests = 32;
 
 #ifdef CONFIG_PV
     /* Never allocate the hypercall vector or Linux/BSD fast-trap vector. */
@@ -1540,6 +1540,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
     unsigned int        irq;
     struct irq_desc         *desc;
     irq_guest_action_t *action, *newaction = NULL;
+    unsigned int        max_nr_guests = will_share ? irq_max_guests : 1;
     int                 rc = 0;
 
     WARN_ON(!spin_is_locked(&v->domain->event_lock));
@@ -1571,7 +1572,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         {
             spin_unlock_irq(&desc->lock);
             if ( (newaction = xmalloc_bytes(sizeof(irq_guest_action_t) +
-                  irq_max_guests * sizeof(action->guest[0]))) != NULL &&
+                  max_nr_guests * sizeof(action->guest[0]))) != NULL &&
                  zalloc_cpumask_var(&newaction->cpu_eoi_map) )
                 goto retry;
             xfree(newaction);
@@ -1640,7 +1641,7 @@ int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
         goto retry;
     }
 
-    if ( action->nr_guests == irq_max_guests )
+    if ( action->nr_guests == max_nr_guests )
     {
         printk(XENLOG_G_INFO "Cannot bind IRQ%d to dom%d. "
                "Already at max share %u, increase with irq_max_guests= option.\n",
-- 
2.7.4