[for-4.15][PATCH v3 2/3] xen/x86: iommu: Ignore IOMMU mapping requests when a domain is dying
From: Julien Grall <jgrall@xxxxxxxxxx>

The new x86 IOMMU page-tables allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient when
the domain is dying because nothing prevents page-tables from being
allocated.

Currently, page-table allocations can only happen from iommu_map(). As
the domain is dying, there is no good reason to continue modifying the
IOMMU page-tables.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mapping requests is added in the
per-driver map_page() callback.

Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

---
    Changes in v3:
        - Patch added. This is a replacement of "xen/iommu: iommu_map:
          Don't crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 13 +++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 13 +++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  3 +++
 3 files changed, 29 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..ed78a083ba12 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,19 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables() and
+     * iommu_clear_root_pgtable()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e1871f6c2bc1..239a63f74f64 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1771,6 +1771,19 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables() and
+     * iommu_clear_root_pgtable()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f54fc8093f18..faa0078db595 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -273,6 +273,9 @@ int iommu_free_pgtables(struct domain *d)
     /*
      * Pages will be moved to the free list below. So we want to
      * clear the root page-table to avoid any potential use after-free.
+     *
+     * After this call, no more IOMMU mapping can happen.
+     *
      */
     hd->platform_ops->clear_root_pgtable(d);
 
-- 
2.17.1
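The locking argument above (d->is_dying is observed under hd->arch.mapping_lock
before any page table is freed) can be illustrated outside of Xen with a
minimal, self-contained sketch. This is not Xen code: fake_domain,
fake_map_page() and fake_free_pgtables() are hypothetical stand-ins, and a
pthread mutex stands in for the per-domain mapping lock. The point is only the
pattern: because the map path and the teardown path serialise on the same
lock, any mapping request that runs after the tables have been freed is
guaranteed to see the dying flag and return early instead of touching freed
memory.

/*
 * Illustrative sketch only (not Xen code). All names below are hypothetical
 * stand-ins for the real structures referenced in the patch.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_domain {
    pthread_mutex_t mapping_lock;   /* stands in for hd->arch.mapping_lock */
    bool is_dying;                  /* stands in for d->is_dying */
    void *root_pgtable;             /* stands in for the IOMMU root table */
};

/* Map path: ignore the request once the domain is dying. */
static int fake_map_page(struct fake_domain *d, unsigned long dfn,
                         unsigned long mfn)
{
    pthread_mutex_lock(&d->mapping_lock);

    if ( d->is_dying )
    {
        /* Nothing to do: the tables are gone or about to go away. */
        pthread_mutex_unlock(&d->mapping_lock);
        return 0;
    }

    /* Safe: the tables cannot be freed while we hold the lock. */
    printf("map dfn %lu -> mfn %lu via root %p\n", dfn, mfn, d->root_pgtable);

    pthread_mutex_unlock(&d->mapping_lock);
    return 0;
}

/* Teardown path: runs only after is_dying has been set. */
static void fake_free_pgtables(struct fake_domain *d)
{
    pthread_mutex_lock(&d->mapping_lock);
    free(d->root_pgtable);
    d->root_pgtable = NULL;
    pthread_mutex_unlock(&d->mapping_lock);
}

int main(void)
{
    struct fake_domain d = {
        .mapping_lock = PTHREAD_MUTEX_INITIALIZER,
        .is_dying = false,
        .root_pgtable = malloc(4096),
    };

    fake_map_page(&d, 1, 100);      /* normal mapping proceeds */

    d.is_dying = true;              /* domain starts dying */
    fake_free_pgtables(&d);         /* relinquish path frees the tables */

    fake_map_page(&d, 2, 200);      /* silently ignored, no use-after-free */

    return 0;
}

The sketch mirrors the design choice in the patch: rather than checking
is_dying in common code, the check sits inside the region protected by the
same lock the teardown path takes, which is what makes the early return safe.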