
Re: [Xen-devel] [RFC 21/29] xen/arm: WORKAROUND 1:1 memory mapping for dom0



On 04/29/2013 05:13 PM, Ian Campbell wrote:

> On Mon, 2013-04-29 at 00:02 +0100, Julien Grall wrote:
>> Currently Xen doesn't implement a SYSMMU. When a device talks to dom0
>> via DMA requests, the domain will use GFNs instead of MFNs.
>> For instance, on the Arndale board the network doesn't work without
>> this patch.
> 
> Yes :-/ Pragmatically I think we are better off with this short-term
> hack than without support for the Arndale, but it's not terribly
> satisfactory. If there were any other platforms available I'd say that
> we should pick a platform for which the IOMMU docs were more easily
> forthcoming than they have proven to be on this platform.
> 
> I'm slightly worried that we may get stuck with this hack. Perhaps we
> should say, prominently, that this hack will go away in 4.4 and that
> platforms which have not been converted to an IOMMU will not be
> supported from 4.4 onwards and that, yes, this could include the Arndale
> unless docs and/or a vendor-written driver appear.
> 
> Of course we may end up eating crow if all the other platforms have the
> same problem, since we clearly can't remove support for all platforms...
> 
>> The 1:1 mapping is a workaround and MUST be removed as soon as a SYSMMU
>> is implemented in Xen.
> 
> I wonder if we can make this conditional on a suitable board-level DT
> compat node ("samsung,arndale" or "samsung,exynos5250" perhaps)? And
> print a big warning when enabling it of course.


What do you think about adding a workaround field to the platform structure?
It would contain PLATFORM_WORKAROUND_DOM0_11_MAPPING, and perhaps other
workarounds if we really need them...

This would avoid checking the DT compat node for each platform where Xen
needs to do a 1:1 mapping for dom0.
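
For illustration, a rough sketch of what that could look like (everything
here is hypothetical: the field, the flag and the platform_has_workaround()
helper don't exist today, and the struct is cut down to the relevant bits):

    #include <stdbool.h>
    #include <stdint.h>

    #define PLATFORM_WORKAROUND_DOM0_11_MAPPING (1u << 0)

    struct platform_desc {
        const char *name;
        /* NULL-terminated list of DT compat strings this entry matches. */
        const char *const *compatible;
        /* ... init/reset/etc. callbacks elided ... */
        uint32_t workarounds;       /* ORed PLATFORM_WORKAROUND_* flags */
    };

    static const char *const exynos5_compat[] =
        { "samsung,exynos5250", NULL };

    static const struct platform_desc exynos5_platform = {
        .name        = "Exynos5",
        .compatible  = exynos5_compat,
        .workarounds = PLATFORM_WORKAROUND_DOM0_11_MAPPING,
    };

    /* Set once at boot, after matching the board's top-level compat
     * strings against each platform_desc's list. */
    static const struct platform_desc *current_platform = &exynos5_platform;

    static bool platform_has_workaround(uint32_t mask)
    {
        return current_platform &&
               (current_platform->workarounds & mask);
    }

domain_build.c could then test
platform_has_workaround(PLATFORM_WORKAROUND_DOM0_11_MAPPING) and print the
big warning Ian asked for in one place, rather than open-coding compat
checks at every call site.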

>> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>> ---
>>  xen/arch/arm/domain_build.c |   51 ++++++++++++++++++++++++-------------------
>>  xen/arch/arm/kernel.h       |    1 -
>>  2 files changed, 28 insertions(+), 24 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index ced73a7..11298e1 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -66,29 +66,36 @@ static int set_memory_reg(struct domain *d, struct kernel_info *kinfo,
>>                            int address_cells, int size_cells, u32 *new_cell)
>>  {
>>      int reg_size = (address_cells + size_cells) * sizeof(*cell);
>> -    int l = 0;
>> -    u64 start;
>> -    u64 size;
>> +    paddr_t start;
>> +    paddr_t size;
>> +    struct page_info *pg;
>> +    unsigned int order = get_order_from_bytes(dom0_mem);
>> +    int res;
>> +    paddr_t spfn;
>>  
>> -    while ( kinfo->unassigned_mem > 0 && l + reg_size <= len
>> -            && kinfo->mem.nr_banks < NR_MEM_BANKS )
>> -    {
>> -        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
>> -        if ( size > kinfo->unassigned_mem )
>> -            size = kinfo->unassigned_mem;
>> -        device_tree_set_reg(&new_cell, address_cells, size_cells, start, size);
>> -
>> -        printk("Populate P2M %#"PRIx64"->%#"PRIx64"\n", start, start + size);
>> -        p2m_populate_ram(d, start, start + size);
>> -        kinfo->mem.bank[kinfo->mem.nr_banks].start = start;
>> -        kinfo->mem.bank[kinfo->mem.nr_banks].size = size;
>> -        kinfo->mem.nr_banks++;
>> -        kinfo->unassigned_mem -= size;
>> -
>> -        l += reg_size;
>> -    }
>> +    pg = alloc_domheap_pages(d, order, 0);
>> +    if ( !pg )
>> +        panic("Failed to allocate contiguous memory for dom0\n");
> 
> Interesting, so we don't actually need RAM to start at the same place as
> the real hardware would have it and the kernel copes? Slightly surprising
> the kernel didn't whine, but OK!

I didn't see any specific warning from the kernel. Why should the kernel
whine if the memory banks have changed?
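
(For context: dom0 learns its RAM layout purely from the /memory node that
Xen writes into the flattened device tree it hands over, so it happily uses
whatever banks appear there. A hedged sketch of writing one bank, using
plain libfdt rather than the actual domain_build.c helpers, and assuming
#address-cells = #size-cells = 2:)

    #include <libfdt.h>

    /* Write one RAM bank (start, size) into /memory's "reg" property. */
    static int fdt_set_memory_bank(void *fdt, uint64_t start, uint64_t size)
    {
        fdt64_t reg[2] = { cpu_to_fdt64(start), cpu_to_fdt64(size) };
        int node = fdt_path_offset(fdt, "/memory");

        if ( node < 0 )
            return node;
        return fdt_setprop(fdt, node, "reg", reg, sizeof(reg));
    }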

-- 
Julien

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
