
Re: [PATCH] xen/arm: smmuv1: remove iommu group when deassign a device



Hi,

On 27/04/2022 17:15, Rahul Singh wrote:
> When a device is deassigned from the domain it is required to remove the
> iommu group.

This reads wrong to me. We should not need to re-create the IOMMU group (and call arm_smmu_add_device()) every time a device is re-assigned.


> If we don't remove the group, the next time when we assign
> a device, SME and S2CR will not be setup correctly for the device
> because of that SMMU fault will be observed.

I think this is a bug fix for 0435784cc75dcfef3b5f59c29deb1dbb84265ddb. If so, please add a Fixes tag.
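For reference, the tag sits just above the Signed-off-by and carries the abbreviated hash plus the subject of the offending commit (subject left as a placeholder here, please take it from git log --oneline):

Fixes: 0435784cc75d ("<subject of that commit>")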


> Signed-off-by: Rahul Singh <rahul.singh@xxxxxxx>
> ---
>  xen/drivers/passthrough/arm/smmu.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 5cacb2dd99..9a31c332d0 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1690,6 +1690,8 @@ static void arm_smmu_detach_dev(struct iommu_domain *domain, struct device *dev)
>         if (cfg)
>                 arm_smmu_master_free_smes(cfg);
> +       iommu_group_put(dev_iommu_group(dev));
> +       dev_iommu_group(dev) = NULL;
>   }

The goal of arm_smmu_detach_dev() is to revert the changes made in arm_smmu_attach_dev(). But looking at the code, neither the IOMMU group nor the SMEs are allocated in arm_smmu_attach_dev().

Are the SMEs meant to be re-allocated every time the device is assigned to a different domain? If yes, the allocation should be done in arm_smmu_attach_dev().

If not, then we should not free the SMEs here.

IIUC, the SMEs don't have to be re-allocated every time a device is assigned. Therefore, I think we should move the call to arm_smmu_master_free_smes() out of the detach callback and into a helper that would be used when removing a device (not yet supported by Xen).
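To make the suggestion concrete, below is a rough sketch of what such a helper could look like. The name arm_smmu_remove_device() and the cfg lookup are placeholders of mine (modelled on the Linux driver this file was ported from), not something that exists in the tree today:

static void arm_smmu_remove_device(struct device *dev)
{
        /* Lookup is a placeholder; reuse whatever arm_smmu_detach_dev() uses. */
        struct arm_smmu_master_cfg *cfg = find_smmu_master_cfg(dev);

        /* The SMEs are per-device state, so only tear them down here. */
        if (cfg)
                arm_smmu_master_free_smes(cfg);

        /* Drop the IOMMU group created when the device was added. */
        if (dev_iommu_group(dev)) {
                iommu_group_put(dev_iommu_group(dev));
                dev_iommu_group(dev) = NULL;
        }
}

That way arm_smmu_detach_dev() only undoes what arm_smmu_attach_dev() did, and the per-device state survives re-assignment.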

Cheers,

--
Julien Grall



 

