
Re: [Xen-devel] [RFC v4 6/8] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver

On 01/03/2018 05:47 AM, Manish Jaggi wrote:

Hi Sameer,

Hi Manish,

+/* Xen: Type definitions for iommu_domain */
+/* Xen: Dummy iommu_domain */
+struct iommu_domain {
+    /* Runtime SMMU configuration for this iommu_domain */
+    struct arm_smmu_domain        *priv;
Can we use a more meaningful name in place of priv?

I believe this was taken from the iommu_domain structure in the SMMUv2 driver, which was based on an ancient version of Linux. It looks like recent Linux does not use priv, so I am wondering why it is added here?

+    unsigned int type;
+    /* Dummy compatibility defines */
+    unsigned long pgsize_bitmap;
+    struct iommu_domain_geometry geometry;
+    atomic_t ref;
+    /*
+     * Used to link iommu_domain contexts for the same domain.
+     * There is at least one per SMMU used by the domain.
+     */
+    struct list_head        list;
+};
+/* Xen: Describes information required for a Xen domain */
+struct arm_smmu_xen_domain {
+    spinlock_t            lock;
+    /* List of iommu domains associated to this domain */
+    struct list_head        iommu_domains;
+};
+/*
+ * Xen: Information about each device stored in dev->archdata.iommu
+ *
+ * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
+ * the SMMU).
+ */
+struct arm_smmu_xen_device {
+    struct iommu_domain *domain;
+};
The name domain is confusing: reading just the variable name, it is not easy to tell whether it is a struct domain pointer or one of the other structures with _domain in their names.
The same comment applies to all variables named just domain.

If this is used by Xen-only code, then it should be alright. Which name do you suggest? iommu_domain?


+/*
+ * Xen: The pgtable_ops are used by the S1 translations, so return the dummy
+ * address.
+ */
+#define alloc_io_pgtable_ops(f, c, o) ((struct io_pgtable_ops *)0xDEADBEEF)
+#define free_io_pgtable_ops(o) (o = 0)
+/* Xen: Define wrapper for requesting IRQs */
+#define IRQF_ONESHOT 0
+typedef void (*irq_handler_t)(int, void *, struct cpu_user_regs *);
+static inline int devm_request_irq(struct device *dev, unsigned int irq,
+                   irq_handler_t handler, unsigned long irqflags,
+                   const char *devname, void *dev_id)
+{
+    /* TODO: Check if we really need to set a type */
+    irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+    return request_irq(irq, irqflags, handler, devname, dev_id);
+}
+int devm_request_threaded_irq(struct device *dev, unsigned int irq,
+                  irq_handler_t handler, irq_handler_t thread_fn,
+                  unsigned long irqflags, const char *devname,
+                  void *dev_id)
+{
+    return devm_request_irq(dev, irq, thread_fn, irqflags, devname, dev_id);
+}
Is it possible to change the name from threaded to something more meaningful? IIUC, in Xen we don't have threaded IRQs. Though the code comes from Linux, it has to be called/named for the place where it is intended to be used.

What do you mean? This is a wrapper for Linux code, so we should keep the name as it is.


@@ -433,6 +807,7 @@ enum pri_resp {
+#if 0 /* Xen: No MSI support in this iteration */
  enum arm_smmu_msi_index {
@@ -457,6 +832,7 @@ static phys_addr_t arm_smmu_msi_cfg[ARM_SMMU_MAX_MSIS][3] = {
IMHO, can we avoid #if 0 in the code, unless we intend to use the code in the future?

In the past, I made the mistake of removing all unnecessary code from SMMUv2. A few months later, we decided to delete it and re-import the driver directly from Linux with limited modifications. This was the better choice because it makes it easier to track differences.

We are in the same situation here. We want to stay as close to Linux as possible. This means no renaming, no code removal, and only very limited changes in the code to accommodate Xen.

In this particular case, we are likely going to want to support MSIs, because an implementation may support only MSIs.


Xen-devel mailing list


