
Re: [Xen-devel] multicalls.c warning in xen_mc_flush



On Tue, May 29, 2012 at 01:39:39PM +0200, William Dauchy wrote:
> On Fri, May 25, 2012 at 11:01 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx> wrote:
> > Not yet. Could you ping me in a week, say, please?
> 
> ping.

Pls try the attached patch.

From e4c315c0c3d842712ae64ec95c099fd44e65291a Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Tue, 29 May 2012 21:38:22 -0400
Subject: [PATCH] x86/i386: Check PSE bit before using PAGE_KERNEL_LARGE.

During bootup we would unconditionally do this on non-NUMA machines:

setup_arch
  \-initmem_init
      \-x86_numa_init (with dummy_init as callback)
          \- init_alloc_remap
               \- set_pmd_pfn (with PAGE_PSE)

without checking whether the CPU supports PSE. This patch
adds that check and also lets the init_alloc_remap function
work properly on such CPUs by falling back to 4K PTEs.

This bug has been observed when running an i386 PV Xen
guest with CONFIG_NUMA built in, but it should also be easy
to reproduce on other CPUs which do not expose PSE support.

We would get this in the guest:

memblock_reserve: [0x0000002ac00000-0x0000002be00000] init_alloc_remap+0x195/0x251
------------[ cut here ]------------
WARNING: at /home/konrad/ssd/linux/arch/x86/xen/multicalls.c:129 xen_mc_flush+0x160/0x1e0()
Modules linked in:
Pid: 0, comm: swapper Not tainted 3.4.0-08268-gc0b1dd2 #1
Call Trace:
 [<c107b62d>] warn_slowpath_common+0x6d/0xa0
 [<c10380a0>] ? xen_mc_flush+0x160/0x1e0
 [<c10380a0>] ? xen_mc_flush+0x160/0x1e0
 [<c107b67d>] warn_slowpath_null+0x1d/0x20
 [<c10380a0>] xen_mc_flush+0x160/0x1e0
 [<c103a46d>] xen_set_pmd_hyper+0xad/0x170
 [<c103896d>] ? pte_pfn_to_mfn+0xad/0xc0
 [<c1074b2e>] set_pmd_pfn+0x9e/0xf0
 [<c172290d>] init_alloc_remap+0x1e3/0x251
 [<c1722325>] x86_numa_init+0x340/0x65e
 [<c103c7fe>] ? __raw_callee_save_xen_restore_fl+0x6/0x8
 [<c172265f>] initmem_init+0xb/0xd6
 [<c1719428>] ? acpi_boot_table_init+0x10/0x7d
 [<c1712dd1>] setup_arch+0xb9c/0xc8a
 [<c103c7fe>] ? __raw_callee_save_xen_restore_fl+0x6/0x8
 [<c170c8fb>] start_kernel+0xbe/0x395
 [<c170c306>] i386_start_kernel+0xa9/0xb0
 [<c170f86c>] xen_start_kernel+0x632/0x63a
 [<c1409078>] ? tmem_objnode_alloc+0x28/0xa0
---[ end trace a7919e7f17c0a725 ]---
------------[ cut here ]------------

with the hypervisor telling us:
(XEN) mm.c:943:d0 Attempt to map superpage without allowsuperpage flag in hypervisor

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
---
 arch/x86/mm/pgtable_32.c |   29 ++++++++++++++++++++++++++++-
 1 files changed, 28 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
index a69bcb8..32085ec 100644
--- a/arch/x86/mm/pgtable_32.c
+++ b/arch/x86/mm/pgtable_32.c
@@ -86,7 +86,34 @@ void set_pmd_pfn(unsigned long vaddr, unsigned long pfn, pgprot_t flags)
        }
        pud = pud_offset(pgd, vaddr);
        pmd = pmd_offset(pud, vaddr);
-       set_pmd(pmd, pfn_pmd(pfn, flags));
+
+       if (cpu_has_pse)
+               set_pmd(pmd, pfn_pmd(pfn, flags));
+       else {
+               pgprot_t new_flag = PAGE_KERNEL;
+               pte_t *pte;
+               int i;
+
+               /*
+                * This is run _after_ the initial memory is mapped, so the
+                * PTE pages are allocated - but we check just in case.
+                */
+               if (pmd_none(*pmd)) {
+                       printk(KERN_WARNING "set_pmd_pfn: pmd_none\n");
+                       return;
+               }
+
+               pte = (pte_t *)pmd_page_vaddr(*pmd);
+               for (i = 0; i < PTRS_PER_PTE; i++) {
+                       if (pte_none(*pte)) {
+                               printk(KERN_WARNING "set_pmd_pfn: pte_none\n");
+                               return;
+                       }
+                       set_pte(pte, pfn_pte(pfn + i, new_flag));
+                       pte++;
+               }
+       }
+
        /*
         * It's enough to flush this one mapping.
         * (PGE mappings get flushed as well)
-- 
1.7.7.6
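
For reference, the PSE support that set_pmd_pfn() now tests via
cpu_has_pse is the feature bit advertised in CPUID leaf 1, EDX bit 3,
which an i386 PV guest normally does not see (the hypervisor filters
the guest's CPUID). A minimal user-space sketch for reading that bit -
illustrative only, not part of the patch, and assuming a GCC toolchain
that provides <cpuid.h> - would look roughly like:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 1 carries the basic feature flags. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return 1;

        /* CPUID.01H:EDX bit 3 == PSE (4MB pages on non-PAE i386). */
        printf("PSE %ssupported\n", (edx & (1u << 3)) ? "" : "not ");
        return 0;
}

When the fallback in the patch runs instead, the loop over PTRS_PER_PTE
4K PTEs covers the same virtual range the single large mapping would
have: 1024 * 4K = 4M on non-PAE i386 (512 * 4K = 2M with PAE).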




 

