On Tue, Jun 07, 2011 at 03:35:05PM +0100, Stefano Stabellini wrote:
> We only need to set max_pfn_mapped to the last pfn mapped on x86_64 to
> make sure that cleanup_highmap doesn't remove important mappings at
> _end.
>
> We don't need to do this on x86_32 because cleanup_highmap is not called
> on x86_32. Besides lowering max_pfn_mapped on x86_32 has the unwanted
> side effect of limiting the amount of memory available for the 1:1
> kernel pagetable allocation.
>
> This patch reverts the x86_32 part of the original patch.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dc708dc..afe1d54 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1599,6 +1599,11 @@ static void __init xen_map_identity_early(pmd_t *pmd,
> unsigned long max_pfn)
> for (pteidx = 0; pteidx < PTRS_PER_PTE; pteidx++, pfn++) {
> pte_t pte;
>
> +#ifdef CONFIG_X86_32
> + if (pfn > max_pfn_mapped)
> + max_pfn_mapped = pfn;
> +#endif
> +
> if (!pte_none(pte_page[pteidx]))
> continue;
>
> @@ -1766,7 +1771,9 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
> initial_kernel_pmd =
> extend_brk(sizeof(pmd_t) * PTRS_PER_PMD, PAGE_SIZE);
>
> - max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
> + max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->pt_base) +
> + xen_start_info->nr_pt_frames * PAGE_SIZE +
> + 512*1024);
x86_64 ends up with roughly the same max_pfn_mapped value, but without
that extra 512 kbytes. What is it for? Perhaps we should add a comment
explaining what that extra mapped memory covers?
>
> kernel_pmd = m2v(pgd[KERNEL_PGD_BOUNDARY].pgd);
> memcpy(initial_kernel_pmd, kernel_pmd, sizeof(pmd_t) * PTRS_PER_PMD);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel