
Re: [RFC PATCH] iommu: make no-quarantine mean no-quarantine


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Scott Davis <scott.davis@xxxxxxxxxx>
  • Date: Fri, 30 Apr 2021 19:27:51 +0000
  • Accept-language: en-US
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "paul@xxxxxxx" <paul@xxxxxxx>
  • Delivery-date: Fri, 30 Apr 2021 19:28:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC PATCH] iommu: make no-quarantine mean no-quarantine

On 4/30/21, 3:15 AM, Jan Beulich wrote:
> So far you didn't tell us what the actual crash was. I guess it's not
> even clear to me whether it's Xen or qemu that did crash for you. But
> I have to also admit that until now it wasn't really clear to me that
> you ran Xen _under_ qemu - instead I was assuming there was an
> interaction problem with a qemu serving a guest.

I explained this in my original post; sorry if it was not clear:

> Background: I am setting up a QEMU-based development and testing environment
> for the Crucible team at Star Lab that includes emulated PCIe devices for
> passthrough and hotplug. I encountered an issue with `xl pci-assignable-add`
> that causes the host QEMU to rapidly allocate memory until getting 
> OOM-killed.

As soon as Xen writes the IQT register, the host QEMU process locks up,
starts allocating memory at a rate of several hundred MB/s, and is soon
OOM-killed by the host kernel.
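
For anyone not steeped in VT-d: as I understand the queued-invalidation
handshake from the spec, the driver appends descriptors to the invalidation
queue and advances the tail register (IQT), and the (emulated) IOMMU is
expected to consume descriptors from head (IQH) up to that tail and then
stop. The sketch below is purely illustrative -- the names are mine, not
QEMU's or Xen's -- but it shows the bounded loop I would expect on an IQT
write; a consumer that somehow never catches up with the tail, or that
buffers work on every iteration, would loop and allocate without bound,
which would at least be consistent with the runaway allocation I'm seeing.

/* Purely illustrative sketch -- hypothetical names, not QEMU's or Xen's code. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define QI_DESC_SIZE 16u                 /* 128-bit (legacy format) descriptors */

struct inv_queue {
    uint64_t base;                       /* guest-physical base of the queue    */
    unsigned int size;                   /* number of descriptor slots          */
    unsigned int head;                   /* next slot the IOMMU model consumes  */
    unsigned int tail;                   /* next free slot, advanced by the OS  */
};

static void process_descriptor(uint64_t desc_addr)
{
    /* Stub: a real model would read and decode the 128-bit descriptor here. */
    printf("consume descriptor at 0x%" PRIx64 "\n", desc_addr);
}

/* Expected behaviour on an IQT write: consume descriptors from head up to
 * the new tail, then stop.  If head never reaches tail, or work is buffered
 * on every iteration, the loop runs (and allocates) without bound. */
static void handle_iqt_write(struct inv_queue *q, unsigned int new_tail)
{
    q->tail = new_tail % q->size;

    while (q->head != q->tail) {
        uint64_t desc_addr = q->base + (uint64_t)q->head * QI_DESC_SIZE;

        process_descriptor(desc_addr);
        q->head = (q->head + 1) % q->size;
    }
}

int main(void)
{
    struct inv_queue q = { .base = 0x1000, .size = 256, .head = 0, .tail = 0 };

    handle_iqt_write(&q, 1);             /* one descriptor queued */
    return 0;
}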

On 4/30/21, 3:15 AM, Jan Beulich wrote:
> Interesting. This then leaves the question whether we submit a bogus
> command, or whether qemu can't deal (correctly) with a valid one here.

I did some extra debugging to inspect the index values being written to
IQT as well as the invalidation descriptors themselves, and everything
appeared fine to me on Xen's end. In fact, the descriptor written by
queue_invalidate_context_sync upon map into dom_io is identical to the
one it writes upon unmap from dom0, which works without issue. This
points toward a QEMU bug to me (I decode the descriptor fields after the
trace below):

(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) bt
#0  queue_invalidate_context_sync (...) at qinval.c:85
#1  flush_context_qi (...) at qinval.c:341
#2  iommu_flush_context_device (...) at iommu.c:400
#3  domain_context_unmap_one (...) at iommu.c:1606
#4  domain_context_unmap (...) at iommu.c:1671
#5  reassign_device_ownership (...) at iommu.c:2396
#6  intel_iommu_assign_device (...) at iommu.c:2476
#7  assign_device (...) at pci.c:1545
#8  iommu_do_pci_domctl (...) at pci.c:1732
#9  iommu_do_domctl (...) at iommu.c:539
...
(gdb) print index
$2 = 552
(gdb) print qinval_entry->q.cc_inv_dsc
$3 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_context_sync (...) at qinval.c:106
...
(gdb) print tail
$4 = 553
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#3  queue_invalidate_iotlb_sync (...) at qinval.c:120
#4  flush_iotlb_qi (...) at qinval.c:376
#5  iommu_flush_iotlb_dsi (...) at iommu.c:499
#6  domain_context_unmap_one (...) at iommu.c:1611
#7  domain_context_unmap (...) at iommu.c:1671
...
(gdb) print tail
$5 = 554
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_iotlb_sync (...) at qinval.c:143
...
(gdb) print tail
$6 = 555
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_context_sync (...) at qinval.c:86
#2  flush_context_qi (...) at qinval.c:341
#3  iommu_flush_context_device (...) at iommu.c:400
#4  domain_context_mapping_one (...) at iommu.c:1436
#5  domain_context_mapping (...) at iommu.c:1510
#6  reassign_device_ownership (...) at iommu.c:2412
...
(gdb) print tail
$7 = 556
(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) print index
$8 = 556
(gdb) print qinval_entry->q.cc_inv_dsc
$9 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Continuing.
Remote connection closed
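
For what it's worth, the descriptor contents themselves decode to something
sensible per my reading of the VT-d spec (the layout below is illustrative,
not Xen's actual struct definition): type 1 is a context-cache invalidate
descriptor, granu 3 is device-selective, and sid 256 (0x100) is the
source-id of the passthrough device:

/* Illustrative decoding of the cc_inv_dsc values from the gdb dump above,
 * based on my reading of the VT-d spec (not Xen's struct definitions). */
#include <stdio.h>

int main(void)
{
    unsigned int type  = 1;    /* 0x1 = context-cache invalidate descriptor     */
    unsigned int granu = 3;    /* 3   = device-selective invalidation           */
    unsigned int did   = 0;    /* domain-id the context entries are tagged with */
    unsigned int sid   = 256;  /* source-id: bus[15:8], dev[7:3], fn[2:0]       */
    unsigned int fm    = 0;    /* function mask: 0 = match the exact function   */

    printf("type=%u granu=%u did=%u fm=%u sid -> %02x:%02x.%x\n",
           type, granu, did, fm,
           sid >> 8, (sid >> 3) & 0x1f, sid & 0x7);
    /* Prints "... sid -> 01:00.0", i.e. the passthrough device 0000:01:00.0. */
    return 0;
}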

The corresponding output from dom0 and Xen looks like this:

[   31.002214] e1000e 0000:01:00.0 eth1: removed PHC
[   31.694270] e1000e: eth1 NIC Link is Down
[   31.717849] pciback 0000:01:00.0: seizing device
[   31.719464] Already setup the GSI :20
(XEN) [   83.572804] [VT-D]d0:PCIe: unmap 0000:01:00.0
(XEN) [  808.092310] [VT-D]d32753:PCIe: map 0000:01:00.0
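
(For clarity: the d32753 in the map line is DOMID_IO, i.e. Xen's dom_io
quarantine domain -- a trivial check, assuming the value defined in
xen/include/public/xen.h:)

/* Assuming DOMID_IO's value from xen/include/public/xen.h. */
#include <stdio.h>

#define DOMID_IO 0x7FF1          /* Xen's special I/O domain (dom_io) */

int main(void)
{
    printf("DOMID_IO = %d\n", DOMID_IO);   /* 32753, matching the d32753 above */
    return 0;
}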

Good day,
Scott


 

