
Re: [PATCH v3 7/8] xen/arm: Add support for SMMUv3 driver



Hi Rahul,

On 10/12/2020 16:57, Rahul Singh wrote:
  struct arm_smmu_strtab_cfg {
@@ -613,8 +847,13 @@ struct arm_smmu_device {
                u64                     padding;
        };
-       /* IOMMU core code handle */
-       struct iommu_device             iommu;
+       /* Need to keep a list of SMMU devices */
+       struct list_head                devices;
+
+       /* Tasklets for handling evts/faults and pci page request IRQs*/
+       struct tasklet          evtq_irq_tasklet;
+       struct tasklet          priq_irq_tasklet;
+       struct tasklet          combined_irq_tasklet;
  };
/* SMMU private data for each master */
@@ -638,7 +877,6 @@ enum arm_smmu_domain_stage {
struct arm_smmu_domain {
        struct arm_smmu_device          *smmu;
-       struct mutex                    init_mutex; /* Protects smmu pointer */

Hmmm... Your commit message says the mutex would be replaced by a spinlock. However, you are dropping the lock. What did I miss?
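
To spell out what I would have expected given the commit message (only a sketch, the field name below is mine and not from the patch):

-       struct mutex                    init_mutex; /* Protects smmu pointer */
+       spinlock_t                      init_lock;  /* Protects smmu pointer */

with the corresponding mutex_lock()/mutex_unlock() calls turned into spin_lock()/spin_unlock(), rather than the locking being removed altogether.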

[...]

@@ -1578,6 +1841,17 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
        struct arm_smmu_device *smmu = smmu_domain->smmu;
        struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
        typeof(&arm_lpae_s2_cfg.vtcr) vtcr = &arm_lpae_s2_cfg.vtcr;
+       uint64_t reg = READ_SYSREG64(VTCR_EL2);

Please don't use VTCR_EL2 here. You should be able to infer the parameter from the p2m_ipa_bits.

Also, I still don't see code that will restrict the IPA bits based on what the SMMU supports.
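
Something along these lines is what I had in mind (a rough sketch only, not the actual patch; I am assuming smmu->ias holds the input address size parsed out of SMMU_IDR5 during probe):

    /* At probe time, once SMMU_IDR5 has been parsed: */
    p2m_restrict_ipa_bits(smmu->ias);

    /* In arm_smmu_domain_finalise_s2(), instead of reading back VTCR_EL2: */
    vtcr->tsz = 64 - p2m_ipa_bits;  /* T0SZ = 64 - IPA size used by the P2M */

The remaining fields (sl, irgn, orgn, sh, tg, ps) should then come from the P2M configuration as well, rather than from the register.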

+
+       vtcr->tsz    = FIELD_GET(STRTAB_STE_2_VTCR_S2T0SZ, reg);
+       vtcr->sl     = FIELD_GET(STRTAB_STE_2_VTCR_S2SL0, reg);
+       vtcr->irgn   = FIELD_GET(STRTAB_STE_2_VTCR_S2IR0, reg);
+       vtcr->orgn   = FIELD_GET(STRTAB_STE_2_VTCR_S2OR0, reg);
+       vtcr->sh     = FIELD_GET(STRTAB_STE_2_VTCR_S2SH0, reg);
+       vtcr->tg     = FIELD_GET(STRTAB_STE_2_VTCR_S2TG, reg);
+       vtcr->ps     = FIELD_GET(STRTAB_STE_2_VTCR_S2PS, reg);
+
+       arm_lpae_s2_cfg.vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
        vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
        if (vmid < 0)
@@ -1592,6 +1866,11 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
                          FIELD_PREP(STRTAB_STE_2_VTCR_S2SH0, vtcr->sh) |
                          FIELD_PREP(STRTAB_STE_2_VTCR_S2TG, vtcr->tg) |
                          FIELD_PREP(STRTAB_STE_2_VTCR_S2PS, vtcr->ps);
+
+       printk(XENLOG_DEBUG
+                  "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
+                  cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
+
        return 0;
  }

[...]

@@ -1923,8 +2239,8 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
                return -ENOMEM;
        }
-       if (!WARN_ON(q->base_dma & (qsz - 1))) {
-               dev_info(smmu->dev, "allocated %u entries for %s\n",
+       if (unlikely(q->base_dma & (qsz - 1))) {
+               dev_warn(smmu->dev, "allocated %u entries for %s\n",

dev_warn() is not the same as WARN_ON(). But really, the first step is to try to change the behavior of WARN_ON() in Xen.
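
For reference, the kind of change I mean is a Linux-style WARN_ON() that evaluates to its condition (just a sketch, this is not an existing Xen macro), so that constructs such as "if ( !WARN_ON(cond) )" can be kept as-is when porting the driver:

#define WARN_ON(p)  ({                          \
    bool ret_warn_on_ = (p);                    \
                                                \
    if ( unlikely(ret_warn_on_) )               \
        WARN();                                 \
    unlikely(ret_warn_on_);                     \
})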

If this doesn't go through, then we can discuss an approach to mitigate it.

Cheers,

--
Julien Grall



 

