
Re: [Xen-devel] [RFC v01 1/3] arm: omap: introduce iommu module



Hi Stefano,

On Thu, Jan 23, 2014 at 4:39 PM, Stefano Stabellini
<stefano.stabellini@xxxxxxxxxxxxx> wrote:
> On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
>> omap IOMMU module is designed to handle access to external
>> omap MMUs, connected to the L3 bus.
>>
...
>> @@ -26,6 +26,7 @@ static const struct mmio_handler *const mmio_handlers[] =
>>  {
>>      &vgic_distr_mmio_handler,
>>      &vuart_mmio_handler,
>> +     &mmu_mmio_handler,
>>  };
>>  #define MMIO_HANDLER_NR ARRAY_SIZE(mmio_handlers)
>
> I think that omap_iommu should be a platform specific driver, and it
> should hook into a set of platform specific mmio_handlers instead of
> using the generic mmio_handler structure.
>

Agree.

...

>> diff --git a/xen/arch/arm/io.h b/xen/arch/arm/io.h
>> index 8d252c0..acb5dff 100644
>> --- a/xen/arch/arm/io.h
>> +++ b/xen/arch/arm/io.h
>> @@ -42,6 +42,7 @@ struct mmio_handler {
>>
>>  extern const struct mmio_handler vgic_distr_mmio_handler;
>>  extern const struct mmio_handler vuart_mmio_handler;
>> +extern const struct mmio_handler mmu_mmio_handler;
>>
>>  extern int handle_mmio(mmio_info_t *info);
>>
>> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
>> new file mode 100644
>> index 0000000..4dab30f
>> --- /dev/null
>> +++ b/xen/arch/arm/omap_iommu.c
>
> It should probably live under xen/arch/arm/platforms.
>

Agree.

...

>> +static int mmu_copy_pagetable(struct mmu_info *mmu)
>> +{
>> +     void __iomem *pagetable = NULL;
>> +     u32 pgaddr;
>> +
>> +     ASSERT(mmu);
>> +
>> +     /* read address where kernel MMU pagetable is stored */
>> +     pgaddr = readl(mmu->mem_map + MMU_TTB);
>> +     pagetable = ioremap(pgaddr, IOPGD_TABLE_SIZE);
>> +     if (!pagetable) {
>
> Xen uses a different coding style from Linux, see CODING_STYLE.
>

Sure, thank you. I will update my editor settings to match the Xen
coding style.

...
>
>> +             printk("%s: %s failed to map pagetable\n",
>> +                        __func__, mmu->name);
>> +             return -EINVAL;
>> +     }
>> +
>> +     /*
>> +      * pagetable can be changed since last time
>> +      * we accessed it therefore we need to copy it each time
>> +      */
>> +     memcpy(mmu->pagetable, pagetable, IOPGD_TABLE_SIZE);
>> +
>> +     iounmap(pagetable);
>
> Do you need to flush the dcache here?
>

No, this copies from the kernel into Xen. The kernel already has a
valid pagetable in its physical memory by the time this call happens;
it makes sure of that before accessing the MMU configuration register.
The only reason to flush the cache here would be if the kernel MMU
driver were modified in some way. That may be worth keeping in mind.

...

>> +
>> +             /* error */
>> +             } else {
>> +                     printk("%s Unknown entry 0x%08x\n", mmu->name, iopgd);
>> +                     return -EINVAL;
>> +             }
>> +     }
>> +
>> +     /* force omap IOMMU to use new pagetable */
>> +     if (table_updated) {
>> +             paddr_t maddr;
>> +             flush_xen_dcache_va_range(mmu->pagetable, IOPGD_TABLE_SIZE);
>
> So you are flushing the dcache all at once at the end, probably better
> this way.

Right. After all translations are complete I flush the caches. This
guarantees that physical memory contains valid data for the MMU.

 ...

>> +    writel(*r, mmu->mem_map + ((u32)(info->gpa) - mmu->mem_start));
>> +
>> +     res = mmu_trap_write_access(dom, mmu, info);
>> +     if (res)
>> +             return res;
>> +
>> +    return 1;
>> +}
>
> I wonder if we actually need to trap guest accesses in all cases: if we
> leave the GPU/IPU in Dom0, mapped 1:1, then we don't need any traps.
> Maybe we can find a way to detect that so that we can avoid trapping and
> translating in that case.
>

If we leave the GPU/IPU in Dom0 with a 1:1 mapping we won't need any
translation. But for now we need this in DomU, where a 1:1 mapping
will not be available.

>> +
>> +const struct mmio_handler mmu_mmio_handler = {
>> +     .check_handler = mmu_mmio_check,
>> +     .read_handler  = mmu_mmio_read,
>> +     .write_handler = mmu_mmio_write,
>> +};
>> +
>> +__initcall(mmu_init_all);
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-devel
>>

Thank you for review,

Regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

