
[Xen-devel] [PATCH 1/2] x86/mm/p2m: don't needlessly limit MMIO mapping order to 4k

The common P2M code currently restricts the MMIO mapping order of any
domain that has IOMMU mappings but is not using shared tables to 4k.
This has been shown to have a huge performance cost when passing through
a PCI device with a very large BAR (e.g. NVIDIA P40), increasing the guest
boot time from ~20s to several minutes when iommu=no-sharept is specified
on the Xen command line.

The limitation was added by commit c3c756bd ("x86/p2m: use large pages
for MMIO mappings"). However, the underlying implementations of
p2m->set_entry for Intel and AMD can cope with mapping orders larger than
4k, even though the IOMMU mapping function is itself currently limited to
4k. Hence there is no real need to limit the order passed into the method,
other than to avoid a potential DoS caused by a long-running hypercall.

In practice, mmio_order() already strictly disallows 1G mappings since the
if clause in question starts with:

    if ( 0 /*
            * Don't use 1Gb pages, to limit the iteration count in
            * set_typed_p2m_entry() when it needs to zap M2P entries
            * for a RAM range.
            */ &&

With this patch applied (and hence 2M mappings in use) the VM boot time is
restored to something reasonable. Not as fast as without iommu=no-sharept,
but within a few seconds of it.

NOTE: This patch takes the opportunity to shorten a couple of > 80
      character lines in the code.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>

 - Add an extra check to the if clause disallowing 1G mappings to make
   sure they are not used if need_iommu_pt_sync() is true, even though
   the check is currently moot (see main comment).
 xen/arch/x86/mm/p2m.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a00a3c1bff..f972b4819d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2081,14 +2081,11 @@ static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
-     * Note that the !iommu_use_hap_pt() here has three effects:
-     * - cover iommu_{,un}map_page() not having an "order" input yet,
-     * - exclude shadow mode (which doesn't support large MMIO mappings),
-     * - exclude PV guests, should execution reach this code for such.
-     * So be careful when altering this.
+     * PV guests or shadow-mode HVM guests must be restricted to 4k
+     * mappings.
-    if ( !iommu_use_hap_pt(d) ||
-         (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) || !(nr >> PAGE_ORDER_2M) )
+    if ( !hap_enabled(d) || (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) ||
+         !(nr >> PAGE_ORDER_2M) )
         return PAGE_ORDER_4K;
     if ( 0 /*
@@ -2096,8 +2093,12 @@ static unsigned int mmio_order(const struct domain *d,
             * set_typed_p2m_entry() when it needs to zap M2P entries
             * for a RAM range.
             */ &&
-         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) && (nr >> PAGE_ORDER_1G) &&
-         hap_has_1gb )
+         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
+         (nr >> PAGE_ORDER_1G) &&
+         hap_has_1gb &&
+         /* disable 1G mappings if we need to keep the IOMMU in sync */
+         !need_iommu_pt_sync(d)
+        )
         return PAGE_ORDER_1G;
     if ( hap_has_2mb )

Xen-devel mailing list


