
Re: [Xen-devel] Xen-unstable: pci-passthrough regression bisected to: x86/smp: use APIC ALLBUT destination shorthand when possible


  • To: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 11 Feb 2020 15:00:22 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 11 Feb 2020 14:00:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Feb 10, 2020 at 09:49:30PM +0100, Sander Eikelenboom wrote:
> On 03/02/2020 14:21, Roger Pau Monné wrote:
> > On Mon, Feb 03, 2020 at 01:44:06PM +0100, Sander Eikelenboom wrote:
> >> On 03/02/2020 13:41, Roger Pau Monné wrote:
> >>> On Mon, Feb 03, 2020 at 01:30:55PM +0100, Sander Eikelenboom wrote:
> >>>> On 03/02/2020 13:23, Roger Pau Monné wrote:
> >>>>> On Mon, Feb 03, 2020 at 09:33:51AM +0100, Sander Eikelenboom wrote:
> >>>>>> Hi Roger,
> >>>>>>
> >>>>>> Last week I encountered an issue with the PCI-passthrough of a USB
> >>>>>> controller. In the guest I get:
> >>>>>>     [ 1143.313756] xhci_hcd 0000:00:05.0: xHCI host not responding to stop endpoint command.
> >>>>>>     [ 1143.334825] xhci_hcd 0000:00:05.0: xHCI host controller not responding, assume dead
> >>>>>>     [ 1143.347364] xhci_hcd 0000:00:05.0: HC died; cleaning up
> >>>>>>     [ 1143.356407] usb 1-2: USB disconnect, device number 2
> >>>>>>
> >>>>>> Bisection identified the culprit:
> >>>>>>    commit 5500d265a2a8fa63d60c08beb549de8ec82ff7a5
> >>>>>>    x86/smp: use APIC ALLBUT destination shorthand when possible
> >>>>>
> >>>>> Sorry to hear that, let's see if we can figure out what's wrong.
> >>>>
> >>>> No problem, that is why I test stuff :)
> >>>>
> >>>>>> I verified by reverting that commit and now it works fine again.
> >>>>>
> >>>>> Does the same controller work fine when used in dom0?
> >>>>
> >>>> Will test that, but as all other pci devices in dom0 work fine,
> >>>> I assume this controller would also work fine in dom0 (as it has also
> >>>> worked fine for ages with PCI-passthrough to that guest and still works
> >>>> fine when reverting the referenced commit).
> >>>
> >>> Is this the only device that fails to work when doing pci-passthrough,
> >>> or other devices also don't work with the mentioned change applied?
> >>>
> >>> Have you tested on other boxes?
> >>>
> >>>> I don't know if your change could somehow have a side effect
> >>>> on latency in the processing of pci-passthrough?
> >>>
> >>> Hm, the mentioned commit should speed up broadcast IPIs, but I don't
> >>> see how it could slow down other interrupts. Also, I would think the
> >>> domain is not receiving interrupts from the device at all, rather
> >>> than interrupts being slow.
> >>>
> >>> Can you also paste the output of lspci -v for that xHCI device from
> >>> dom0?
> >>>
> >>> Thanks, Roger.
> >>
> >> Will do this evening including the testing in dom0 etc.
> >> Will also see if there is any pattern when observing /proc/interrupts in
> >> the guest.
> > 
> > Thanks! I also have a trivial patch that I would like you to try,
> > just to rule out send_IPI_mask clearing the scratch_cpumask under
> > another function's feet.
> > 
> > Roger.
> 
> Hi Roger,
> 
> Took a while, but I was able to run some tests now.
> 
> I also forgot a detail in the first report (probably still a bit tired
> from FOSDEM), namely: the passed-through device works OK for a while
> before I get the kernel message.
> 
> I tested the patch and it looks like it makes the issue go away.
> I tested for a day, while without the patch (or the revert of the
> commit) the device would give problems within a few hours.

Thanks, I have another patch for you to try: it panics when it detects
nested usage of the scratch cpumask, so it will likely make your system
crash. Could you give it a try and paste the log output?

Thanks, Roger.
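
For context, the hazard the patch below guards against can be shown with a
hypothetical user-space sketch (not Xen code: the per-CPU scratch mask is
modeled as one static buffer and the mask as a plain word; all names here
are illustrative). A nested user of the shared scratch buffer silently
clobbers its caller's contents:

/* Hypothetical sketch: why an unguarded shared scratch mask is unsafe. */
#include <stdio.h>

typedef unsigned long cpumask_t;        /* toy one-word "mask" */

static cpumask_t scratch;               /* models this_cpu(scratch_cpumask) */
static cpumask_t *scratch_cpumask(void) { return &scratch; }

static void callee(void)
{
    cpumask_t *m = scratch_cpumask();   /* same buffer as the caller's */
    *m = 0xf0;                          /* silently clobbers caller state */
}

int main(void)
{
    cpumask_t *m = scratch_cpumask();
    *m = 0x0f;                          /* caller fills its mask... */
    callee();                           /* ...a nested user overwrites it */
    printf("expected 0xf, got %#lx\n", *m); /* prints 0xf0 */
    return 0;
}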
---8<---
commit 909880219efc4fe3c25536454d04f07bfe61e3b1
Author: Roger Pau Monne <roger.pau@xxxxxxxxxx>
Date:   Tue Feb 11 11:14:48 2020 +0100

    x86: add accessors for scratch cpu mask
    
    Current usage of the per-CPU scratch cpumask is dangerous since
    there's no way to figure out if the mask is already being used except
    for manual code inspection of all the callers and possible call paths.
    
    This is unsafe and not reliable, so introduce a minimal get/put
    infrastructure to prevent nested usage of the scratch mask.
    
    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index e98e08e9c8..4ee261b632 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2236,10 +2236,11 @@ int io_apic_set_pci_routing (int ioapic, int pin, int irq, int edge_level, int a
     entry.vector = vector;
 
     if (cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS)) {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(entry, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask();
     } else {
         printk(XENLOG_ERR "IRQ%d: no target CPU (%*pb vs %*pb)\n",
                irq, CPUMASK_PR(desc->arch.cpu_mask), CPUMASK_PR(TARGET_CPUS));
@@ -2433,10 +2434,11 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
 
     if ( cpumask_intersects(desc->arch.cpu_mask, TARGET_CPUS) )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask = get_scratch_cpumask();
 
         cpumask_and(mask, desc->arch.cpu_mask, TARGET_CPUS);
         SET_DEST(rte, logical, cpu_mask_to_apicid(mask));
+        put_scratch_cpumask();
     }
     else
     {
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..7ecf5376e3 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
 {
     unsigned int cpu, old_vector, irq = desc->irq;
     unsigned int vector = desc->arch.vector;
-    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
+    cpumask_t *tmp_mask = get_scratch_cpumask();
 
     BUG_ON(!valid_irq_vector(vector));
 
@@ -223,7 +223,10 @@ static void _clear_irq_vector(struct irq_desc *desc)
     trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
 
     if ( likely(!desc->arch.move_in_progress) )
+    {
+        put_scratch_cpumask();
         return;
+    }
 
     /* If we were in motion, also clear desc->arch.old_vector */
     old_vector = desc->arch.old_vector;
@@ -236,6 +239,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
         per_cpu(vector_irq, cpu)[old_vector] = ~irq;
     }
 
+    put_scratch_cpumask();
     release_old_vec(desc);
 
     desc->arch.move_in_progress = 0;
@@ -1152,10 +1156,11 @@ static void irq_guest_eoi_timer_fn(void *data)
         break;
 
     case ACKTYPE_EOI:
-        cpu_eoi_map = this_cpu(scratch_cpumask);
+        cpu_eoi_map = get_scratch_cpumask();
         cpumask_copy(cpu_eoi_map, action->cpu_eoi_map);
         spin_unlock_irq(&desc->lock);
         on_selected_cpus(cpu_eoi_map, set_eoi_ready, desc, 0);
+        put_scratch_cpumask();
         return;
     }
 
@@ -2531,12 +2536,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
     unsigned int irq;
     static int warned;
     struct irq_desc *desc;
+    cpumask_t *affinity = get_scratch_cpumask();
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
         unsigned int vector;
-        cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
             continue;
@@ -2640,6 +2645,8 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                    irq, CPUMASK_PR(affinity));
     }
 
+    put_scratch_cpumask();
+
     /* That doesn't seem sufficient.  Give it 1ms. */
     local_irq_enable();
     mdelay(1);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9b33829084..bded19717b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1271,7 +1271,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
              (l1e_owner == pg_owner) )
         {
             struct vcpu *v;
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             cpumask_clear(mask);
 
@@ -1288,6 +1288,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
 
             if ( !cpumask_empty(mask) )
                 flush_tlb_mask(mask);
+            put_scratch_cpumask();
         }
 #endif /* CONFIG_PV_LDT_PAGING */
         put_page(page);
@@ -2912,7 +2913,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                  * vital that no other CPUs are left with mappings of a frame
                  * which is about to become writeable to the guest.
                  */
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 BUG_ON(in_irq());
                 cpumask_copy(mask, d->dirty_cpumask);
@@ -2928,6 +2929,7 @@ static int _get_page_type(struct page_info *page, unsigned long type,
                     perfc_incr(need_flush_tlb_flush);
                     flush_tlb_mask(mask);
                 }
+                put_scratch_cpumask();
 
                 /* We lose existing type and validity. */
                 nx &= ~(PGT_type_mask | PGT_validated);
@@ -3644,7 +3646,7 @@ long do_mmuext_op(
         case MMUEXT_TLB_FLUSH_MULTI:
         case MMUEXT_INVLPG_MULTI:
         {
-            cpumask_t *mask = this_cpu(scratch_cpumask);
+            cpumask_t *mask = get_scratch_cpumask();
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
@@ -3654,12 +3656,17 @@ long do_mmuext_op(
                                    mask)) )
                 rc = -EINVAL;
             if ( unlikely(rc) )
+            {
+                put_scratch_cpumask();
                 break;
+            }
 
             if ( op.cmd == MMUEXT_TLB_FLUSH_MULTI )
                 flush_tlb_mask(mask);
             else if ( __addr_ok(op.arg1.linear_addr) )
                 flush_tlb_one_mask(mask, op.arg1.linear_addr);
+            put_scratch_cpumask();
+
             break;
         }
 
@@ -3692,7 +3699,7 @@ long do_mmuext_op(
             else if ( likely(cache_flush_permitted(currd)) )
             {
                 unsigned int cpu;
-                cpumask_t *mask = this_cpu(scratch_cpumask);
+                cpumask_t *mask = get_scratch_cpumask();
 
                 cpumask_clear(mask);
                 for_each_online_cpu(cpu)
@@ -3700,6 +3707,7 @@ long do_mmuext_op(
                                              per_cpu(cpu_sibling_mask, cpu)) )
                         __cpumask_set_cpu(cpu, mask);
                 flush_mask(mask, FLUSH_CACHE);
+                put_scratch_cpumask();
             }
             else
                 rc = -EINVAL;
@@ -4165,12 +4173,13 @@ long do_mmu_update(
          * Force other vCPU-s of the affected guest to pick up L4 entry
          * changes (if any).
          */
-        unsigned int cpu = smp_processor_id();
-        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+        cpumask_t *mask = get_scratch_cpumask();
 
-        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
+        cpumask_andnot(mask, pt_owner->dirty_cpumask,
+                       cpumask_of(smp_processor_id()));
         if ( !cpumask_empty(mask) )
             flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+        put_scratch_cpumask();
     }
 
     perfc_add(num_page_updates, i);
@@ -4361,7 +4370,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4381,7 +4390,7 @@ static int __do_update_va_mapping(
             mask = d->dirty_cpumask;
             break;
         default:
-            mask = this_cpu(scratch_cpumask);
+            mask = get_scratch_cpumask();
             rc = vcpumask_to_pcpumask(d, const_guest_handle_from_ptr(bmap_ptr,
                                                                      void),
                                       mask);
@@ -4392,6 +4401,9 @@ static int __do_update_va_mapping(
         break;
     }
 
+    if ( mask && mask != d->dirty_cpumask )
+        put_scratch_cpumask();
+
     return rc;
 }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index c85cf9f85a..1ec1cc51d3 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -159,13 +159,15 @@ void msi_compose_msg(unsigned vector, const cpumask_t *cpu_mask, struct msi_msg
 
     if ( cpu_mask )
     {
-        cpumask_t *mask = this_cpu(scratch_cpumask);
+        cpumask_t *mask;
 
         if ( !cpumask_intersects(cpu_mask, &cpu_online_map) )
             return;
 
+        mask = get_scratch_cpumask();
         cpumask_and(mask, cpu_mask, &cpu_online_map);
         msg->dest32 = cpu_mask_to_apicid(mask);
+        put_scratch_cpumask();
     }
 
     msg->address_hi = MSI_ADDR_BASE_HI;
diff --git a/xen/include/asm-x86/smp.h b/xen/include/asm-x86/smp.h
index 1aa55d41e1..b994488d9f 100644
--- a/xen/include/asm-x86/smp.h
+++ b/xen/include/asm-x86/smp.h
@@ -26,6 +26,21 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
 DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 DECLARE_PER_CPU(cpumask_var_t, scratch_cpumask);
 
+static inline cpumask_t *scratch_cpumask(const char *fn)
+{
+    static DEFINE_PER_CPU(const char *, scratch_cpumask_use);
+
+    if ( fn && unlikely(this_cpu(scratch_cpumask_use)) )
+        panic("scratch CPU mask already in use by %s\n",
+              this_cpu(scratch_cpumask_use));
+    this_cpu(scratch_cpumask_use) = fn;
+
+    return fn ? this_cpu(scratch_cpumask) : NULL;
+}
+
+#define get_scratch_cpumask() scratch_cpumask(__func__)
+#define put_scratch_cpumask() ((void)scratch_cpumask(NULL))
+
 /*
  * Do we, for platform reasons, need to actually keep CPUs online when we
  * would otherwise prefer them to be off?
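
A minimal user-space sketch of the get/put guard above (with abort() as a
stand-in for panic() and plain statics in place of the per-CPU variables;
purely illustrative, not Xen code) shows how nested usage is caught at run
time:

/* Hypothetical model of the accessors in xen/include/asm-x86/smp.h above. */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned long cpumask_t;            /* toy one-word "mask" */

static cpumask_t scratch_storage;           /* models this_cpu(scratch_cpumask) */
static const char *scratch_use;             /* models this_cpu(scratch_cpumask_use) */

static cpumask_t *scratch_cpumask(const char *fn)
{
    if ( fn && scratch_use )
    {
        /* abort() stands in for Xen's panic() */
        fprintf(stderr, "scratch CPU mask already in use by %s\n", scratch_use);
        abort();
    }
    scratch_use = fn;
    return fn ? &scratch_storage : NULL;
}

#define get_scratch_cpumask() scratch_cpumask(__func__)
#define put_scratch_cpumask() ((void)scratch_cpumask(NULL))

static void callee(void)
{
    cpumask_t *m = get_scratch_cpumask();   /* aborts: main() still holds it */
    *m = 0xf0;
    put_scratch_cpumask();
}

int main(void)
{
    cpumask_t *m = get_scratch_cpumask();
    *m = 0x0f;
    callee();                               /* nested usage detected here */
    put_scratch_cpumask();
    return 0;
}

Note how, in the hunks touching _clear_irq_vector() and do_mmuext_op(),
every early exit between get_scratch_cpumask() and the normal return needs
a matching put_scratch_cpumask(); that pairing discipline is where most of
the churn in this patch comes from.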


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

