
RE: [Xen-devel] x86-64's contig_initmem_init



Jan Beulich wrote:
>> The tail part of the initial mapping has no special handling on i386
>> nor on x86_64. It just gets freed up when we free from 0 up to
>> max_pfn, and it never gets reserved (the reserved region precisely
>> covers kernel text/data and initial page tables).
> 
> For i386 I'm not certain, but for x86-64 I doubt that:
> init_memory_mapping, which runs before contig_initmem_init,
> re-initializes start_pfn (which is what in turn gets used to set up
> the bootmem reservation) from the result of scanning the initial page
> tables. These, as I understand it, extend to the 4-MB-rounded end of
> the initial mapping (which, if the unused tail turns out to be less
> than 512 KB, even gets extended by an extra 4 MB).
> 
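(For reference, a minimal sketch of the flow being debated above, under the
assumption that the description is accurate. This is illustrative C, not the
actual init_memory_mapping()/contig_initmem_init() code, and the helper names
are made up:)

/*
 * Illustrative sketch only -- not the real x86-64 code.  It models the
 * ordering described above: the initial page tables extend to the
 * 4-MB-rounded end of the initial mapping (plus an extra 4 MB when the
 * unused tail is under 512 KB), and the pfn derived from that end is
 * what the bootmem reservation is later based on.
 */
#include <stdio.h>

#define PAGE_SHIFT   12
#define FOUR_MB      (4UL << 20)

static unsigned long round_initial_mapping_end(unsigned long mapping_end)
{
	/* Round up to the next 4 MB boundary. */
	unsigned long rounded = (mapping_end + FOUR_MB - 1) & ~(FOUR_MB - 1);

	/* If the unused tail is smaller than 512 KB, extend by another 4 MB. */
	if (rounded - mapping_end < (512UL << 10))
		rounded += FOUR_MB;

	return rounded;
}

int main(void)
{
	unsigned long mapping_end = 0x01fc0000UL;  /* example: 31.75 MB;
						      tail < 512 KB, so an
						      extra 4 MB is added */
	unsigned long start_pfn;

	/* start_pfn as it would come out of scanning the initial tables. */
	start_pfn = round_initial_mapping_end(mapping_end) >> PAGE_SHIFT;

	/* The bootmem reservation would then cover pfns [0, start_pfn). */
	printf("start_pfn = 0x%lx\n", start_pfn);
	return 0;
}

Whether the reservation really extends that far, rather than stopping at the
end of kernel text/data and the page tables actually written, is the point of
disagreement in the thread.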

Okay, I wrote that code originally. The initial mapping is extended to cover
all the pgd, pud, pmd, and pte pages used when establishing the 1:1 direct
mapping of guest physical memory. Unlike native x86_64 Linux, the current
x86_64 xenlinux cannot use 2 MB pages, so we need to allocate a lot of
(extra) L1 pages when guest memory is large. Those page table pages are set
read-only in contig_initmem_init.
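
(To put a rough number on the "lot of (extra) L1 pages": with only 4 KB
mappings, every 2 MB of guest memory needs its own L1 page, whereas a 2 MB
mapping needs none. A back-of-the-envelope sketch, with made-up names:)

/*
 * Rough estimate of the extra L1 (pte) pages needed to map a guest 1:1
 * with 4 KB pages only, as described above.  Illustrative only.
 */
#include <stdio.h>

#define PAGE_SIZE    4096ULL
#define PTRS_PER_PTE 512ULL				/* pte entries per L1 page  */
#define PTE_SPAN     (PAGE_SIZE * PTRS_PER_PTE)		/* 2 MB covered per L1 page */

static unsigned long long l1_pages_for(unsigned long long guest_bytes)
{
	/* One L1 page maps 2 MB of guest physical memory. */
	return (guest_bytes + PTE_SPAN - 1) / PTE_SPAN;
}

int main(void)
{
	unsigned long long guest = 4ULL << 30;		/* example: 4 GB guest */

	/* 4 GB / 2 MB = 2048 L1 pages, i.e. 8 MB just for L1 tables. */
	printf("L1 pages: %llu (%llu KB)\n",
	       l1_pages_for(guest),
	       l1_pages_for(guest) * PAGE_SIZE / 1024);
	return 0;
}

For a 4 GB guest that comes to 2048 L1 pages (8 MB), all of which end up
needing the read-only treatment mentioned above.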

I don't think it's 4-MB-rounded, but I'll take a look at the code.

BTW, when did you start seeing the problem?


>> Actually, that could be another bug on x86/64 -- I may need to
>> truncate the initial mapping, or we may be ending up with spurious
>> extra writable mappings to some pages... I'll take a look and see if
>> this is the case.
> 
> If the above wasn't true (or was fixed), then I'd assume such a bug
> would surface (and again I'm not sure why i386 wouldn't surface it,
> as I can't see where these mappings get torn down).
> 
> Jan
> 

Jun
---
Intel Open Source Technology Center

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

