To: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] xen: partially revert "xen: set max_pfn_mapped to the last pfn mapped"
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Tue, 7 Jun 2011 10:47:38 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 07 Jun 2011 07:48:46 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <alpine.DEB.2.00.1106071533330.12963@kaball-desktop>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <alpine.DEB.2.00.1106071533330.12963@kaball-desktop>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Jun 07, 2011 at 03:35:05PM +0100, Stefano Stabellini wrote:
> We only need to set max_pfn_mapped to the last pfn mapped on x86_64 to
> make sure that cleanup_highmap doesn't remove important mappings at
> _end.
> 
> We don't need to do this on x86_32 because cleanup_highmap is not called
> on x86_32. Besides, lowering max_pfn_mapped on x86_32 has the unwanted
> side effect of limiting the amount of memory available for the 1:1
> kernel pagetable allocation.
> 
> This patch reverts the x86_32 part of the original patch.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index dc708dc..afe1d54 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1599,6 +1599,11 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
>               for (pteidx = 0; pteidx < PTRS_PER_PTE; pteidx++, pfn++) {
>                       pte_t pte;
>  
> +#ifdef CONFIG_X86_32
> +                     if (pfn > max_pfn_mapped)
> +                             max_pfn_mapped = pfn;
> +#endif
> +
>                       if (!pte_none(pte_page[pteidx]))
>                               continue;
>  
> @@ -1766,7 +1771,9 @@ pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd,
>       initial_kernel_pmd =
>               extend_brk(sizeof(pmd_t) * PTRS_PER_PMD, PAGE_SIZE);
>  
> -     max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->mfn_list));
> +     max_pfn_mapped = PFN_DOWN(__pa(xen_start_info->pt_base) +
> +                               xen_start_info->nr_pt_frames * PAGE_SIZE +
> +                               512*1024);

The x86_64 path computes about the same max_pfn_mapped value, but it does not
include that extra 512 kbytes. What is that for? Perhaps we should add a comment
explaining what that extra mapped memory is for?
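
For reference, a quick standalone sketch of what the new x86_32 expression
works out to. This is only an illustration, not kernel code: the pt_base and
nr_pt_frames values below are made up, and PAGE_SIZE/PFN_DOWN are redefined
locally just so it compiles and runs on its own.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

int main(void)
{
	/* Hypothetical values; a real domU reads these from xen_start_info. */
	unsigned long pa_pt_base   = 0x01000000UL;  /* stands in for __pa(xen_start_info->pt_base) */
	unsigned long nr_pt_frames = 8;             /* stands in for xen_start_info->nr_pt_frames */

	unsigned long end_of_initial_pts = PFN_DOWN(pa_pt_base) + nr_pt_frames;
	unsigned long max_pfn_mapped =
		PFN_DOWN(pa_pt_base + nr_pt_frames * PAGE_SIZE + 512 * 1024);

	printf("initial pagetables end at pfn 0x%lx\n", end_of_initial_pts);
	printf("max_pfn_mapped = 0x%lx (the 512k adds %lu pages of slack)\n",
	       max_pfn_mapped, (512UL * 1024) >> PAGE_SHIFT);
	return 0;
}

In other words, with a page-aligned pt_base the new value lands 128 pages
(512k / 4k) past the end of the pagetable frames Xen handed us; those 128
pages are exactly the slack being asked about above.
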
>  
>       kernel_pmd = m2v(pgd[KERNEL_PGD_BOUNDARY].pgd);
>       memcpy(initial_kernel_pmd, kernel_pmd, sizeof(pmd_t) * PTRS_PER_PMD);

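As an aside on the commit message's point about the 1:1 pagetable allocation:
as I recall the code of that era, the early pagetable space (find_early_table_space
and friends) is only searched for below max_pfn_mapped << PAGE_SHIFT, since pages
above that are not yet reachable through the existing mapping. A toy model of that
constraint follows; the helper and the numbers are invented for illustration and
this is not the kernel's allocator.

#include <stdio.h>

#define PAGE_SHIFT 12

/*
 * Toy model only: the early pagetable allocator can only hand out pages it
 * can already reach through the existing 1:1 mapping, i.e. pages below
 * max_pfn_mapped.
 */
static unsigned long usable_mb(unsigned long start_pfn, unsigned long max_pfn_mapped)
{
	if (max_pfn_mapped <= start_pfn)
		return 0;
	return ((max_pfn_mapped - start_pfn) << PAGE_SHIFT) >> 20;
}

int main(void)
{
	unsigned long start_pfn = 0x1100;	/* hypothetical first free pfn */

	/* max_pfn_mapped left at the end of the initial pagetables ... */
	printf("capped low: %lu MB available for pagetables\n",
	       usable_mb(start_pfn, 0x1200));
	/* ... versus tracking the last pfn the early code actually mapped. */
	printf("tracking the last mapped pfn: %lu MB available\n",
	       usable_mb(start_pfn, 0x38000));
	return 0;
}
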
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
