
Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP


  • To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • From: "Sundareson, Prabindh" <prabu@xxxxxx>
  • Date: Thu, 23 Jan 2014 03:02:29 +0000
  • Accept-language: en-US
  • Delivery-date: Thu, 23 Jan 2014 03:03:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac8X5yD2fTL5f+daQeOuuca6efUKFQ==
  • Thread-topic: [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP

Hello Andrii,

Dumb question - wouldn't there be a need to lock the data structures while
updating/copying? Or does the Xen architecture somehow prevent other updates?

I see 2 IPs being used - IPU and DSP. What did you have in mind for the 3rd IP?
GPU?

If there are some quick instructions available to test, I can try this out.

regards,
Prabu


-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxx] 
On Behalf Of xen-devel-request@xxxxxxxxxxxxx
Sent: Wednesday, January 22, 2014 9:23 PM
To: xen-devel@xxxxxxxxxxxxx
Subject: Xen-devel Digest, Vol 107, Issue 352

Send Xen-devel mailing list submissions to
        xen-devel@xxxxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel
or, via email, send a message with subject or body 'help' to
        xen-devel-request@xxxxxxxxxxxxx

You can reach the person managing the list at
        xen-devel-owner@xxxxxxxxxxxxx

When replying, please edit your Subject line so it is more specific than "Re: 
Contents of Xen-devel digest..."


Today's Topics:

   1. [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP
      platforms (Andrii Tseglytskyi)
   2. [RFC v01 2/3] arm: omap: translate iommu mapping to 4K    pages
      (Andrii Tseglytskyi)
   3. [RFC v01 1/3] arm: omap: introduce iommu module
      (Andrii Tseglytskyi)
   4. [RFC v01 3/3] arm: omap: cleanup iopte allocations
      (Andrii Tseglytskyi)


----------------------------------------------------------------------

Message: 1
Date: Wed, 22 Jan 2014 17:52:02 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
To: xen-devel@xxxxxxxxxxxxx
Subject: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for
        OMAP    platforms
Message-ID:
        <1390405925-1764-1-git-send-email-andrii.tseglytskyi@xxxxxxxxxxxxxxx>

Hi,

The following patch series is an RFC for a possible implementation of a simple MMU
module, which is designed to translate IPA to MA for peripheral processors such as
the GPU / IPU on OMAP platforms. Currently, on our OMAP platform (OMAP5 panda), we
have 3 external MMUs which need to be handled properly.

It would be great to get community feedback - would this be useful for the Xen
project?

Let me describe the algorithm briefly. It is simple and straightforward.
The following logic is used to translate addresses from IPA to MA:

1. At boot time the guest domain creates a "pagetable" for the external MMU IP.
The pagetable is a singleton data structure stored in ordinary kernel heap memory.
All memory mappings for the corresponding MMU are stored inside it.
The format of the "pagetable" is well defined.

2. The guest domain enables the peripheral remote processor. As part of the enable
sequence the kernel allocates the chunks of heap memory needed by the remote
processor and stores pointers to the allocated chunks in the already created
"pagetable". It then writes the physical address of the pagetable to the MMU
configuration register. As a result the MMU IP knows about all allocations, and the
remote processor can use them directly in its software.

3. The Xen OMAP MMU driver sets up a trap for accesses to the MMU configuration
registers. It reads the physical address of the "pagetable" from the MMU register
and creates a copy of it in its own memory. As a result we have two similar
configuration data structures - the first in the guest domain kernel, the second in
the Xen hypervisor.

4. The Xen OMAP MMU driver parses its own copy of the pagetable and translates all
physical addresses to the corresponding machine addresses using the existing p2m
API. It then writes the physical address of its pagetable (with PA already
translated to MA) to the MMU IP configuration registers and returns control to the
guest domain.

As a result, the guest domain continues enabling the remote processor with its MMU,
and the MMU will use the new pagetable modified by the Xen OMAP MMU driver. The new
pagetable will be used directly by the MMU IP, and its new structure will be hidden
from the guest domain kernel, which won't know anything about the p2m translation.
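
Purely as an illustration of steps 3 and 4, here is a minimal, self-contained C
sketch (not the Xen driver itself): the trapped MMU_TTB write is reduced to a plain
function call, p2m_lookup() is replaced by a stub, and only first-level "section"
entries are handled. The names p2m_lookup_stub() and translate_pagetable() are made
up for this example.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PTRS_PER_IOPGD  4096u          /* 4096 first-level entries, 1 MB each */
    #define IOSECTION_MASK  0xFFF00000u    /* address bits of a "section" entry   */

    /* Stub for the real p2m API: translate a guest physical address to a
     * machine address. Here it simply adds a fixed offset. */
    static uint32_t p2m_lookup_stub(uint32_t paddr)
    {
        return paddr + 0x40000000u;
    }

    /* Model of steps 3/4: copy the guest's first-level table and rewrite the
     * address bits of every non-empty entry from PA to MA, keeping the
     * descriptor attribute bits untouched. */
    static void translate_pagetable(const uint32_t *guest_pgt, uint32_t *xen_pgt)
    {
        memcpy(xen_pgt, guest_pgt, PTRS_PER_IOPGD * sizeof(uint32_t));

        for (unsigned int i = 0; i < PTRS_PER_IOPGD; i++) {
            uint32_t iopgd = xen_pgt[i];

            if (!iopgd)
                continue;                       /* empty entry */

            uint32_t maddr = p2m_lookup_stub(iopgd & IOSECTION_MASK);
            xen_pgt[i] = maddr | (iopgd & ~IOSECTION_MASK);
        }
    }

    int main(void)
    {
        static uint32_t guest_pgt[PTRS_PER_IOPGD], xen_pgt[PTRS_PER_IOPGD];

        guest_pgt[1] = 0x90100000u | 0x2u;      /* one "section" entry at index 1 */
        translate_pagetable(guest_pgt, xen_pgt);
        printf("entry 1: 0x%08x -> 0x%08x\n",
               (unsigned int)guest_pgt[1], (unsigned int)xen_pgt[1]);
        return 0;
    }

In the real driver the same loop runs when the guest writes MMU_TTB; the translated
table is then cache-flushed and its machine address is written back to the register.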

Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, and Linux (Android) kernel
3.4 as DomU.
Target platform: OMAP5 panda.

Thank you for your attention,

Regards,

Andrii Tseglytskyi (3):
  arm: omap: introduce iommu module
  arm: omap: translate iommu mapping to 4K pages
  arm: omap: cleanup iopte allocations

 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

--
1.7.9.5




------------------------------

Message: 2
Date: Wed, 22 Jan 2014 17:52:04 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
To: xen-devel@xxxxxxxxxxxxx
Subject: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping
        to 4K   pages
Message-ID:
        <1390405925-1764-3-git-send-email-andrii.tseglytskyi@xxxxxxxxxxxxxxx>

This patch introduces the following algorithm:
- enumerate all first-level translation entries
- for each section, create 256 pages of 4096 bytes each
- for each supersection, create 4096 pages of 4096 bytes each
- flush the cache to synchronize the Cortex-A15 and the IOMMU

This algorithm makes it possible to use 4K mappings only.
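
As a quick sanity check of the numbers above, the following stand-alone snippet
(not part of the patch; it only reuses the shift constants defined in omap_iommu.c)
computes how many 4K second-level entries cover one section and one supersection,
and the device address of one such page:

    #include <stdint.h>
    #include <stdio.h>

    #define IOSECTION_SHIFT    20   /* 1 MB "section"       */
    #define IOSUPER_SHIFT      24   /* 16 MB "supersection" */
    #define IOPTE_SMALL_SHIFT  12   /* 4 KB "small page"    */

    int main(void)
    {
        /* number of 4 KB second-level entries covering one section / supersection */
        unsigned int per_section = 1u << (IOSECTION_SHIFT - IOPTE_SMALL_SHIFT);
        unsigned int per_super   = 1u << (IOSUPER_SHIFT - IOPTE_SMALL_SHIFT);

        printf("section:      %u pages of 4096 bytes\n", per_section);   /* 256  */
        printf("supersection: %u pages of 4096 bytes\n", per_super);     /* 4096 */

        /* device address of page j inside section i, in the same form as the
         * da computed in mmu_iopte_alloc() */
        unsigned int i = 3, j = 7;
        uint32_t da = ((uint32_t)i << IOSECTION_SHIFT) | ((uint32_t)j << IOPTE_SMALL_SHIFT);
        printf("da(i=%u, j=%u) = 0x%08x\n", i, j, (unsigned int)da);
        return 0;
    }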

Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
---
 xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 46 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 4dab30f..7ec03a2 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -72,6 +72,9 @@
 #define PTRS_PER_IOPTE         (1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
 #define IOPTE_TABLE_SIZE       (PTRS_PER_IOPTE * sizeof(u32))
 
+/* 16 sections in supersection */
+#define IOSECTION_PER_IOSUPER  (1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
+
 /*
  * some descriptor attributes.
  */
@@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
        &omap_dsp_mmu,
 };
 
+static bool translate_supersections_to_pages = true;
+static bool translate_sections_to_pages = true;
+
 #define mmu_for_each(pfunc, data)                              \
 ({                                                             \
        u32 __i;                                                \
@@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
        return vaddr;
 }
 
+static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
+{
+       u32 *iopte = NULL;
+       u32 i;
+
+       iopte = xzalloc_bytes(PAGE_SIZE);
+       if (!iopte) {
+               printk("%s Fail to alloc 2nd level table\n", mmu->name);
+               return 0;
+       }
+
+       for (i = 0; i < PTRS_PER_IOPTE; i++) {
+               u32 da, vaddr, iopgd_tmp;
+               da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
+               iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
+               vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
+               iopte[i] = vaddr | IOPTE_SMALL;
+       }
+
+       flush_xen_dcache_va_range(iopte, PAGE_SIZE);
+       return __pa(iopte) | IOPGD_TABLE;
+}
+
 /*
  * on boot table is empty
  */
@@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
 
                /* "supersection" 16 Mb */
                if (iopgd_is_super(iopgd)) {
-                       da = i << IOSECTION_SHIFT;
-                       mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+                       if(likely(translate_supersections_to_pages)) {
+                               u32 j, iopgd_tmp;
+                               for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
+                                       iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
+                                       mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
+                               }
+                               i += (j - 1);
+                       } else {
+                               da = i << IOSECTION_SHIFT;
+                               mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+                       }
 
                /* "section" 1Mb */
                } else if (iopgd_is_section(iopgd)) {
-                       da = i << IOSECTION_SHIFT;
-                       mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+                       if (likely(translate_sections_to_pages)) {
+                               mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
+                       } else {
+                               da = i << IOSECTION_SHIFT;
+                               mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+                       }
 
                /* "table" */
                } else if (iopgd_is_table(iopgd)) {
-- 
1.7.9.5




------------------------------

Message: 3
Date: Wed, 22 Jan 2014 17:52:03 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
To: xen-devel@xxxxxxxxxxxxx
Subject: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module
Message-ID:
        <1390405925-1764-2-git-send-email-andrii.tseglytskyi@xxxxxxxxxxxxxxx>

The omap IOMMU module is designed to handle accesses to the external
omap MMUs connected to the L3 bus.

Change-Id: I96bbf2738e9dd2e21662e0986ca15c60183e669e
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
---
 xen/arch/arm/Makefile     |    1 +
 xen/arch/arm/io.c         |    1 +
 xen/arch/arm/io.h         |    1 +
 xen/arch/arm/omap_iommu.c |  415 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 418 insertions(+)
 create mode 100644 xen/arch/arm/omap_iommu.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 003ac84..cb0b385 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -14,6 +14,7 @@ obj-y += io.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += mm.o
+obj-y += omap_iommu.o
 obj-y += p2m.o
 obj-y += percpu.o
 obj-y += guestcopy.o
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a6db00b..3281b67 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
 {
     &vgic_distr_mmio_handler,
     &vuart_mmio_handler,
+    &mmu_mmio_handler,
 };
 #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
 
diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
index 8d252c0..acb5dff 100644
--- a/xen/arch/arm/io.h
+++ b/xen/arch/arm/io.h
@@ -42,6 +42,7 @@ struct mmio_handler {
 
 extern const struct mmio_handler vgic_distr_mmio_handler;
 extern const struct mmio_handler vuart_mmio_handler;
+extern const struct mmio_handler mmu_mmio_handler;
 
 extern int handle_mmio(mmio_info_t *info);
 
diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
new file mode 100644
index 0000000..4dab30f
--- /dev/null
+++ b/xen/arch/arm/omap_iommu.c
@@ -0,0 +1,415 @@
+/*
+ * xen/arch/arm/omap_iommu.c
+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
+ * Copyright (c) 2013 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/config.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/stdbool.h>
+#include <asm/system.h>
+#include <asm/current.h>
+#include <asm/io.h>
+#include <asm/p2m.h>
+
+#include "io.h"
+
+/* register where address of page table is stored */
+#define MMU_TTB                        0x4c
+
+/*
+ * "L2 table" address mask and size definitions.
+ */
+
+/* 1st level translation */
+#define IOPGD_SHIFT            20
+#define IOPGD_SIZE             (1UL << IOPGD_SHIFT)
+#define IOPGD_MASK             (~(IOPGD_SIZE - 1))
+
+/* "supersection" - 16 Mb */
+#define IOSUPER_SHIFT          24
+#define IOSUPER_SIZE           (1UL << IOSUPER_SHIFT)
+#define IOSUPER_MASK           (~(IOSUPER_SIZE - 1))
+
+/* "section"  - 1 Mb */
+#define IOSECTION_SHIFT                20
+#define IOSECTION_SIZE         (1UL << IOSECTION_SHIFT)
+#define IOSECTION_MASK         (~(IOSECTION_SIZE - 1))
+
+/* 4096 first level descriptors for "supersection" and "section" */
+#define PTRS_PER_IOPGD         (1UL << (32 - IOPGD_SHIFT))
+#define IOPGD_TABLE_SIZE       (PTRS_PER_IOPGD * sizeof(u32))
+
+/* 2nd level translation */
+
+/* "small page" - 4Kb */
+#define IOPTE_SMALL_SHIFT              12
+#define IOPTE_SMALL_SIZE               (1UL << IOPTE_SMALL_SHIFT)
+#define IOPTE_SMALL_MASK               (~(IOPTE_SMALL_SIZE - 1))
+
+/* "large page" - 64 Kb */
+#define IOPTE_LARGE_SHIFT              16
+#define IOPTE_LARGE_SIZE               (1UL << IOPTE_LARGE_SHIFT)
+#define IOPTE_LARGE_MASK               (~(IOPTE_LARGE_SIZE - 1))
+
+/* 256 second level descriptors for "small" and "large" pages */
+#define PTRS_PER_IOPTE         (1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
+#define IOPTE_TABLE_SIZE       (PTRS_PER_IOPTE * sizeof(u32))
+
+/*
+ * some descriptor attributes.
+ */
+#define IOPGD_TABLE            (1 << 0)
+#define IOPGD_SECTION  (2 << 0)
+#define IOPGD_SUPER            (1 << 18 | 2 << 0)
+
+#define iopgd_is_table(x)      (((x) & 3) == IOPGD_TABLE)
+#define iopgd_is_section(x)    (((x) & (1 << 18 | 3)) == IOPGD_SECTION)
+#define iopgd_is_super(x)      (((x) & (1 << 18 | 3)) == IOPGD_SUPER)
+
+#define IOPTE_SMALL            (2 << 0)
+#define IOPTE_LARGE            (1 << 0)
+
+#define iopte_is_small(x)      (((x) & 2) == IOPTE_SMALL)
+#define iopte_is_large(x)      (((x) & 3) == IOPTE_LARGE)
+#define iopte_offset(x)                ((x) & IOPTE_SMALL_MASK)
+
+struct mmu_info {
+       const char                      *name;
+       paddr_t                         mem_start;
+       u32                                     mem_size;
+       u32                                     *pagetable;
+       void __iomem            *mem_map;
+};
+
+static struct mmu_info omap_ipu_mmu = {
+       .name           = "IPU_L2_MMU",
+       .mem_start      = 0x55082000,
+       .mem_size       = 0x1000,
+       .pagetable      = NULL,
+};
+
+static struct mmu_info omap_dsp_mmu = {
+       .name           = "DSP_L2_MMU",
+       .mem_start      = 0x4a066000,
+       .mem_size       = 0x1000,
+       .pagetable      = NULL,
+};
+
+static struct mmu_info *mmu_list[] = {
+       &omap_ipu_mmu,
+       &omap_dsp_mmu,
+};
+
+#define mmu_for_each(pfunc, data)                              \
+({                                                             \
+       u32 __i;                                                \
+       int __res = 0;                                          \
+                                                               \
+       for (__i = 0; __i < ARRAY_SIZE(mmu_list); __i++) {      \
+               __res |= pfunc(mmu_list[__i], data);            \
+       }                                                       \
+       __res;                                                  \
+})
+
+static int mmu_check_mem_range(struct mmu_info *mmu, paddr_t addr)
+{
+       if ((addr >= mmu->mem_start) && (addr < (mmu->mem_start + mmu->mem_size)))
+               return 1;
+
+       return 0;
+}
+
+static inline struct mmu_info *mmu_lookup(u32 addr)
+{
+       u32 i;
+
+       for (i = 0; i < ARRAY_SIZE(mmu_list); i++) {
+               if (mmu_check_mem_range(mmu_list[i], addr))
+                       return mmu_list[i];
+       }
+
+       return NULL;
+}
+
+static inline u32 mmu_virt_to_phys(u32 reg, u32 va, u32 mask)
+{
+       return (reg & mask) | (va & (~mask));
+}
+
+static inline u32 mmu_phys_to_virt(u32 reg, u32 pa, u32 mask)
+{
+       return (reg & ~mask) | pa;
+}
+
+static int mmu_mmio_check(struct vcpu *v, paddr_t addr)
+{
+       return mmu_for_each(mmu_check_mem_range, addr);
+}
+
+static int mmu_copy_pagetable(struct mmu_info *mmu)
+{
+       void __iomem *pagetable = NULL;
+       u32 pgaddr;
+
+       ASSERT(mmu);
+
+       /* read address where kernel MMU pagetable is stored */
+       pgaddr = readl(mmu->mem_map + MMU_TTB);
+       pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
+       if (!pagetable) {
+               printk("%s: %s failed to map pagetable\n",
+                          __func__, mmu->name);
+               return -EINVAL;
+       }
+
+       /*
+        * pagetable can be changed since last time
+        * we accessed it therefore we need to copy it each time
+        */
+       memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
+
+       iounmap(pagetable);
+
+       return 0;
+}
+
+#define mmu_dump_pdentry(da, iopgd, paddr, maddr, vaddr, mask)                          \
+{                                                                                       \
+       const char *sect_type = (iopgd_is_table(iopgd) || (mask == IOPTE_SMALL_MASK) ||  \
+                                (mask == IOPTE_LARGE_MASK)) ? "table"                   \
+                                : iopgd_is_super(iopgd) ? "supersection"                \
+                                : iopgd_is_section(iopgd) ? "section"                   \
+                                : "Unknown section";                                    \
+       printk("[iopgd] %s da 0x%08x iopgd 0x%08x paddr 0x%08x maddr 0x%pS vaddr 0x%08x mask 0x%08x\n", \
+              sect_type, da, iopgd, paddr, _p(maddr), vaddr, mask);                     \
+}
+
+static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask)
+{
+       u32 vaddr, paddr;
+       paddr_t maddr;
+
+       paddr = mmu_virt_to_phys(iopgd, da, mask);
+       maddr = p2m_lookup(dom, paddr);
+       vaddr = mmu_phys_to_virt(iopgd, maddr, mask);
+
+       return vaddr;
+}
+
+/*
+ * on boot table is empty
+ */
+static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
+{
+       u32 i;
+       int res;
+       bool table_updated = false;
+
+       ASSERT(dom);
+       ASSERT(mmu);
+
+       /* copy pagetable from  domain to xen */
+       res = mmu_copy_pagetable(mmu);
+       if (res) {
+               printk("%s: %s failed to map pagetable memory\n",
+                          __func__, mmu->name);
+               return res;
+       }
+
+       /* 1-st level translation */
+       for (i = 0; i < PTRS_PER_IOPGD; i++) {
+               u32 da;
+               u32 iopgd = mmu->pagetable[i];
+
+               if (!iopgd)
+                       continue;
+
+               table_updated = true;
+
+               /* "supersection" 16 Mb */
+               if (iopgd_is_super(iopgd)) {
+                       da = i << IOSECTION_SHIFT;
+                       mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
+
+               /* "section" 1Mb */
+               } else if (iopgd_is_section(iopgd)) {
+                       da = i << IOSECTION_SHIFT;
+                       mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
+
+               /* "table" */
+               } else if (iopgd_is_table(iopgd)) {
+                       u32 j, mask;
+                       u32 iopte = iopte_offset(iopgd);
+
+                       /* 2-nd level translation */
+                       for (j = 0; j < PTRS_PER_IOPTE; j++, iopte += IOPTE_SMALL_SIZE) {
+
+                               /* "small table" 4Kb */
+                               if (iopte_is_small(iopgd)) {
+                                       da = (i << IOSECTION_SHIFT) + (j << IOPTE_SMALL_SHIFT);
+                                       mask = IOPTE_SMALL_MASK;
+
+                               /* "large table" 64Kb */
+                               } else if (iopte_is_large(iopgd)) {
+                                       da = (i << IOSECTION_SHIFT) + (j << IOPTE_LARGE_SHIFT);
+                                       mask = IOPTE_LARGE_MASK;
+
+                               /* error */
+                               } else {
+                                       printk("%s Unknown table type 0x%08x\n", mmu->name, iopte);
+                                       return -EINVAL;
+                               }
+
+                               /* translate 2-nd level entry */
+                               mmu->pagetable[i] = mmu_translate_pgentry(dom, iopte, da, mask);
+                       }
+
+                       continue;
+
+               /* error */
+               } else {
+                       printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
+                       return -EINVAL;
+               }
+       }
+
+       /* force omap IOMMU to use new pagetable */
+       if (table_updated) {
+               paddr_t maddr;
+               flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
+               maddr = __pa(mmu->pagetable);
+               writel(maddr, mmu->mem_map + MMU_TTB);
+               printk("%s update pagetable, maddr 0x%pS\n", mmu->name, _p(maddr));
+       }
+
+       return 0;
+}
+
+static int mmu_trap_write_access(struct domain *dom,
+                                struct mmu_info *mmu, mmio_info_t *info)
+{
+       struct cpu_user_regs *regs = guest_cpu_user_regs();
+       register_t *r = select_user_reg(regs, info->dabt.reg);
+       int res = 0;
+
+       switch (info->gpa - mmu->mem_start) {
+               case MMU_TTB:
+                       printk("%s MMU_TTB write access 0x%pS <= 0x%08x\n",
+                                  mmu->name, _p(info->gpa), *r);
+                       res = mmu_translate_pagetable(dom, mmu);
+                       break;
+               default:
+                       break;
+       }
+
+       return res;
+}
+
+static int mmu_mmio_read(struct vcpu *v, mmio_info_t *info)
+{
+       struct mmu_info *mmu = NULL;
+       struct cpu_user_regs *regs = guest_cpu_user_regs();
+       register_t *r = select_user_reg(regs, info->dabt.reg);
+
+       mmu = mmu_lookup(info->gpa);
+       if (!mmu) {
+               printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+               return -EINVAL;
+       }
+
+       *r = readl(mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+       return 1;
+}
+
+static int mmu_mmio_write(struct vcpu *v, mmio_info_t *info)
+{
+       struct domain *dom = v->domain;
+       struct mmu_info *mmu = NULL;
+       struct cpu_user_regs *regs = guest_cpu_user_regs();
+       register_t *r = select_user_reg(regs, info->dabt.reg);
+       int res;
+
+       mmu = mmu_lookup(info->gpa);
+       if (!mmu) {
+               printk("%s: can't get mmu for addr 0x%08x\n", __func__, (u32)info->gpa);
+               return -EINVAL;
+       }
+
+       /*
+        * make sure that user register is written first in this function
+        * following calls may expect valid data in it
+        */
+       writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
+
+       res = mmu_trap_write_access(dom, mmu, info);
+       if (res)
+               return res;
+
+       return 1;
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+       ASSERT(mmu);
+       ASSERT(!mmu->mem_map);
+       ASSERT(!mmu->pagetable);
+
+       mmu->mem_map = ioremap(mmu->mem_start, mmu->mem_size);
+       if (!mmu->mem_map) {
+               printk("%s: %s failed to map memory\n",  __func__, mmu->name);
+               return -EINVAL;
+       }
+
+       printk("%s: %s ipu_map = 0x%pS\n", __func__, mmu->name, _p(mmu->mem_map));
+
+       mmu->pagetable = xzalloc_bytes(IOPGD_TABLE_SIZE);
+       if (!mmu->pagetable) {
+               printk("%s: %s failed to alloc private pagetable\n",
+                          __func__, mmu->name);
+               return -ENOMEM;
+       }
+
+       printk("%s: %s private pagetable %lu bytes\n",
+                  __func__, mmu->name, IOPGD_TABLE_SIZE);
+
+       return 0;
+}
+
+static int mmu_init_all(void)
+{
+       int res;
+
+       res = mmu_for_each(mmu_init, 0);
+       if (res) {
+               printk("%s error during init %d\n", __func__, res);
+               return res;
+       }
+
+       return 0;
+}
+
+const struct mmio_handler mmu_mmio_handler = {
+       .check_handler = mmu_mmio_check,
+       .read_handler  = mmu_mmio_read,
+       .write_handler = mmu_mmio_write,
+};
+
+__initcall(mmu_init_all);
-- 
1.7.9.5




------------------------------

Message: 4
Date: Wed, 22 Jan 2014 17:52:05 +0200
From: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
To: xen-devel@xxxxxxxxxxxxx
Subject: [Xen-devel] [RFC v01 3/3] arm: omap: cleanup iopte
        allocations
Message-ID:
        <1390405925-1764-4-git-send-email-andrii.tseglytskyi@xxxxxxxxxxxxxxx>

Each iopte allocation requires 4Kb of memory.
All allocations from the previous MMU reconfiguration
must be freed before a new reconfiguration cycle.

Change-Id: I6db69a400cdba1170b43d9dc68d0817db77cbf9c
Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
---
 xen/arch/arm/omap_iommu.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
index 7ec03a2..a5ad3ac 100644
--- a/xen/arch/arm/omap_iommu.c
+++ b/xen/arch/arm/omap_iommu.c
@@ -93,12 +93,18 @@
 #define iopte_is_large(x)      (((x) & 3) == IOPTE_LARGE)
 #define iopte_offset(x)                ((x) & IOPTE_SMALL_MASK)
 
+struct mmu_alloc_node {
+       u32                                     *vptr;
+       struct list_head        node;
+};
+
 struct mmu_info {
        const char                      *name;
        paddr_t                         mem_start;
        u32                                     mem_size;
        u32                                     *pagetable;
        void __iomem            *mem_map;
+       struct list_head        alloc_list;
 };
 
 static struct mmu_info omap_ipu_mmu = {
@@ -222,8 +228,15 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
 static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
 {
        u32 *iopte = NULL;
+       struct mmu_alloc_node *alloc_node;
        u32 i;
 
+       alloc_node = xzalloc_bytes(sizeof(struct mmu_alloc_node));
+       if (!alloc_node) {
+               printk("%s Fail to alloc vptr node\n", mmu->name);
+               return 0;
+       }
+
        iopte = xzalloc_bytes(PAGE_SIZE);
        if (!iopte) {
                printk("%s Fail to alloc 2nd level table\n", mmu->name);
@@ -238,10 +251,27 @@ static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd,
                iopte[i] = vaddr | IOPTE_SMALL;
        }
 
+       /* store pointer for following cleanup */
+       alloc_node->vptr = iopte;
+       list_add(&alloc_node->node, &mmu->alloc_list);
+
        flush_xen_dcache_va_range(iopte, PAGE_SIZE);
        return __pa(iopte) | IOPGD_TABLE;
 }
 
+static void mmu_cleanup_pagetable(struct mmu_info *mmu)
+{
+       struct mmu_alloc_node *mmu_alloc, *tmp;
+
+       ASSERT(mmu);
+
+       list_for_each_entry_safe(mmu_alloc, tmp, &mmu->alloc_list, node) {
+               xfree(mmu_alloc->vptr);
+               list_del(&mmu_alloc->node);
+               xfree(mmu_alloc);
+       }
+}
+
 /*
  * on boot table is empty
  */
@@ -254,6 +284,9 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
        ASSERT(dom);
        ASSERT(mmu);
 
+       /* free all previous allocations */
+       mmu_cleanup_pagetable(mmu);
+
        /* copy pagetable from  domain to xen */
        res = mmu_copy_pagetable(mmu);
        if (res) {
@@ -432,6 +465,8 @@ static int mmu_init(struct mmu_info *mmu, u32 data)
        printk("%s: %s private pagetable %lu bytes\n",
                   __func__, mmu->name, IOPGD_TABLE_SIZE);
 
+       INIT_LIST_HEAD(&mmu->alloc_list);
+
        return 0;
 }
 
-- 
1.7.9.5




------------------------------

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


End of Xen-devel Digest, Vol 107, Issue 352
*******************************************

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

