
Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations



Hi Paul,

On 12/6/18 3:34 PM, Paul Durrant wrote:
> This patch removes any implicit flushing that occurs in the implementation
> of map and unmap operations and adds new iommu_map/unmap() wrapper
> functions. To maintain sematics of the iommu_legacy_map/unmap() wrapper

NIT: s/sematics/semantics/

> functions, these are modified to call the new wrapper functions and then
> perform an explicit flush operation.

> Because VT-d currently performs two different types of flush dependent upon
> whether a PTE is being modified versus merely added (i.e. replacing a non-
> present PTE) 'iommu flush flags' are defined by this patch and the
> iommu_ops map_page() and unmap_page() methods are modified to OR the type
> of flush necessary for the PTE that has been populated or depopulated into
> an accumulated flags value. The accumulated value can then be passed into
> the explicit flush operation.
>
> The ARM SMMU implementations of map_page() and unmap_page() currently
> perform no implicit flushing and therefore the modified methods do not
> adjust the flush flags.

I am a bit confused by the explanation here. map_page()/unmap_page() will require flushing the IOMMU TLBs, so what do you mean by "implicit"?

[...]

> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 9612c0fddc..5d12639e97 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -2534,9 +2534,12 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
>         return 0;
>   }
> -static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
> -                                             unsigned int page_count)
> +static int __must_check arm_smmu_iotlb_flush(
> +       struct domain *d, dfn_t dfn, unsigned int page_count,
> +       unsigned int flush_flags)

Can we keep the parameters aligned with the opening '('?

>   {
> +       ASSERT(flush_flags);
> +
>         /* ARM SMMU v1 doesn't have flush by VMA and VMID */
>         return arm_smmu_iotlb_flush_all(d);
>   }
> @@ -2731,8 +2734,9 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
>         xfree(xen_domain);
>   }
> -static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
> -                                         mfn_t mfn, unsigned int flags)
> +static int __must_check arm_smmu_map_page(
> +       struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
> +       unsigned int *flush_flags)

Same here.

>   {
>         p2m_type_t t;

[...]

> @@ -345,7 +352,26 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>       return rc;
>   }
> -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
> +int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> +                     unsigned int page_order, unsigned int flags)
> +{
> +    unsigned int flush_flags = 0;

NIT: blank line here, please.

> +    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
