
Re: [Xen-devel] Vmap allocator fails to allocate beyond 128MB



>>> On 26.09.14 at 17:23, <vijay.kilari@xxxxxxxxx> wrote:
> Hi Jan,
> 
> On Fri, Sep 26, 2014 at 6:16 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 26.09.14 at 14:17, <vijay.kilari@xxxxxxxxx> wrote:
>>>   When devices like the SMMU request large ioremap space and the total
>>> allocation of vmap space goes beyond 128MB, the allocation fails for
>>> subsequent requests and the following warning is seen:
>>>
>>> create_xen_entries: trying to replace an existing mapping
>>> addr=40001000 mfn=fffd6
>>>
>>> I found that only 1 page is allocated for the vm_bitmap, which can only
>>> track 128MB of space, even though 1GB of vmap space is assigned.
>>>
>>> With 1GB of vmap space, the calculations are as follows:
>>>
>>> vm_base = 0x4000000
>>> vm_end = 0x3ffff
>>> vm_low = 0x8
>>> nr = 1
>>> vm_top = 0x8000
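
Just to spell out where those numbers come from (assuming 4KB pages and a
1GB vmap area): vm_end is roughly 1GB / 4KB = 0x40000 pages; tracking those
needs one bit per page, i.e. about 0x8000 bytes of bitmap, so vm_low =
PFN_UP(0x8000) = 8 bitmap pages. But nr = PFN_UP((vm_low + 7) / 8) =
PFN_UP(1) = 1, i.e. only one bitmap page gets populated up front, and one
page of bitmap covers PAGE_SIZE * 8 = 0x8000 pages = 128MB -- hence
vm_top = 0x8000.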
>>>
>>> With the below patch, I could get allocations beyond 128MB.
>>>
>>> With this change, nr = 8 for 1GB of vmap space.
>>>
>>> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
>>> index 783cea3..369212d 100644
>>> --- a/xen/common/vmap.c
>>> +++ b/xen/common/vmap.c
>>> @@ -27,7 +27,7 @@ void __init vm_init(void)
>>>      vm_base = (void *)VMAP_VIRT_START;
>>>      vm_end = PFN_DOWN(arch_vmap_virt_end() - vm_base);
>>>      vm_low = PFN_UP((vm_end + 7) / 8);
>>> -    nr = PFN_UP((vm_low + 7) / 8);
>>> +    nr = PFN_UP((vm_end + 7) / 8);
>>>      vm_top = nr * PAGE_SIZE * 8;
>>>
>>>      for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
>>
>> Maybe there's a bug somewhere, but what you suggest as a change
>> above doesn't look correct: You make nr == vm_low, and hence the
>> map_pages_to_xen() after the loop does nothing. That can't be right.
>> Is it perhaps that this second map_pages_to_xen() doesn't have the
>> intended effect on ARM?
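
(Spelled out: with vm_end at roughly 0x40000 pages, the proposed change
makes nr = PFN_UP((0x40000 + 7) / 8) = PFN_UP(0x8000) = 8 == vm_low, so the
trailing map_pages_to_xen() is asked to cover vm_low - nr == 0 pages, and
all eight bitmap pages get populated up front in the loop rather than on
demand.)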
> 
> Note: I am testing on an arm64 platform.
> 
> The map_pages_to_xen() call after the for loop performs the mapping for
> the remaining vm_bitmap pages. In the case of ARM, this call sets the
> valid bit to 1 in the PTE entry for this mapping.

So _that_ is the bug then, because ...

> void __init vm_init(void)
> {
>      ....
>      for ( i = 0, va = (unsigned long)vm_bitmap; i < nr; ++i, va += PAGE_SIZE )
>      {
>          struct page_info *pg = alloc_domheap_page(NULL, 0);
> 
>          map_pages_to_xen(va, page_to_mfn(pg), 1, PAGE_HYPERVISOR);
>          clear_page((void *)va);
>      }
>      bitmap_fill(vm_bitmap, vm_low);
> 
>      /* Populate page tables for the bitmap if necessary. */
>      map_pages_to_xen(va, 0, vm_low - nr, MAP_SMALL_PAGES);

... here we don't request any valid leaf entries to be created. All we
want are the non-leaf page table structures.

> Queries: 1) How does x86 update the tables even if the present/valid bit is set?

When not asked to set the present bit, x86's map_pages_to_xen() doesn't set
it, and hence also doesn't find any collisions later.
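
To illustrate the difference, here is a toy sketch (not Xen code;
FLAG_PRESENT merely stands in for x86's _PAGE_PRESENT respectively ARM's
LPAE valid bit, and toy_map() is a made-up stand-in for map_pages_to_xen()):

/* Toy model only -- not Xen code. */
#include <stdbool.h>
#include <stdio.h>

#define FLAG_PRESENT 0x1

static struct { bool valid; unsigned long mfn; } pt[16];

/* Sketch of the behaviour map_pages_to_xen() is expected to have. */
static int toy_map(unsigned int slot, unsigned long mfn, unsigned int flags)
{
    /* (Imagine non-leaf page table allocation happening here.) */
    if ( !(flags & FLAG_PRESENT) )
        return 0;          /* MAP_SMALL_PAGES-style call: write no leaf entry */

    if ( pt[slot].valid )
    {
        printf("trying to replace an existing mapping\n");
        return -1;         /* the case ARM's create_xen_entries() warns about */
    }

    pt[slot].valid = true;
    pt[slot].mfn = mfn;
    return 0;
}

int main(void)
{
    toy_map(3, 0, 0);                          /* vm_init()'s trailing call */
    return toy_map(3, 0xfffd6, FLAG_PRESENT);  /* later vmap() must still work */
}

If the first call already wrote a valid leaf entry (as appears to happen on
ARM), the second call runs into the "existing mapping" case, which matches
the warning quoted at the top of this thread.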

>              2) Can we allocate all the pages required for vm_bitmap
>                 in vm_init()? We may be wasting a few pages, but this
>                 would make it work for both x86 and ARM.

No - we shouldn't be wasting memory here.

>              3) Can we split vm_init() into generic and arch-specific parts?

That would be kind of a last resort.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel