[Xen-devel] pvops domU guest booting problem - 131GB ok, 132GB not ok.
Hey Jan,

I've just started trying to track down why a pvops 64-bit PV guest can't boot with more than 128GB of memory. The issue I am seeing is that the module loading subsystem stops working and I get this:

WARNING: at /home/konrad/ssd/linux/mm/vmalloc.c:107 vmap_page_range_noflush+0x328/0x3a0()
Modules linked in:
Pid: 1051, comm: modprobe Not tainted 3.4.0-rc6upstream-00024-g3bfd88d-dirty #1
Call Trace:
 [<ffffffff810704da>] warn_slowpath_common+0x7a/0xb0
 [<ffffffff8102fcd9>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [<ffffffff81070525>] warn_slowpath_null+0x15/0x20
 [<ffffffff81129e68>] vmap_page_range_noflush+0x328/0x3a0
 [<ffffffff81129f1d>] vmap_page_range+0x3d/0x60
 [<ffffffff81129f6d>] map_vm_area+0x2d/0x50
 [<ffffffff8112b3a0>] __vmalloc_node_range+0x160/0x250
 [<ffffffff810c1a39>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c2856>] ? load_module+0x66/0x19b0
 [<ffffffff8105badc>] module_alloc+0x5c/0x60
 [<ffffffff810c1a39>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c1a39>] module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c3783>] load_module+0xf93/0x19b0
 [<ffffffff810c41fa>] sys_init_module+0x5a/0x220
 [<ffffffff815ad739>] system_call_fastpath+0x16/0x1b
---[ end trace efd7fe3e15953dc6 ]---
vmalloc: allocation failure, allocated 16384 of 20480 bytes
modprobe: page allocation failure: order:0, mode:0xd2
Pid: 1051, comm: modprobe Tainted: G W 3.4.0-rc6upstream-00024-g3bfd88d-dirty #1
Call Trace:
 [<ffffffff8110389b>] warn_alloc_failed+0xeb/0x130
 [<ffffffff81129f24>] ? vmap_page_range+0x44/0x60
 [<ffffffff8112b456>] __vmalloc_node_range+0x216/0x250
 [<ffffffff810c1a39>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c2856>] ? load_module+0x66/0x19b0
 [<ffffffff8105badc>] module_alloc+0x5c/0x60
 [<ffffffff810c1a39>] ? module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c1a39>] module_alloc_update_bounds+0x19/0x80
 [<ffffffff810c3783>] load_module+0xf93/0x19b0
 [<ffffffff810c41fa>] sys_init_module+0x5a/0x220
 [<ffffffff815ad739>] system_call_fastpath+0x16/0x1b
Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
Node 0 DMA32 per-cpu:
CPU 0: hi: 186, btch: 31 usd: 168
Node 0 Normal per-cpu:
CPU 0: hi: 186, btch: 31 usd: 73
active_anon:253 inactive_anon:31970 isolated_anon:0
 active_file:391 inactive_file:27806 isolated_file:0
 unevictable:0 dirty:0 writeback:0 unstable:0
 free:33525733 slab_reclaimable:1660 slab_unreclaimable:976
 mapped:340 shmem:31960 pagetables:34 bounce:0
Node 0 DMA free:8760kB min:0kB low:0kB high:0kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:8536kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 4024 132160 132160
Node 0 DMA32 free:3637796kB min:1416kB low:1768kB high:2124kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 128135 128135
Node 0 Normal free:130456376kB min:45112kB low:56388kB high:67668kB active_anon:1012kB inactive_anon:127880kB active_file:1564kB inactive_file:111224kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:131211120kB mlocked:0kB dirty:0kB writeback:0kB mapped:1360kB shmem:127840kB slab_reclaimable:6640kB slab_unreclaimable:3904kB kernel_stack:368kB pagetables:136kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 2*4kB 2*8kB 0*16kB 3*32kB 3*64kB 2*128kB 2*256kB 1*512kB 3*1024kB 2*2048kB 0*4096kB = 8760kB
Node 0 DMA32: 5*4kB 2*8kB 4*16kB 4*32kB 3*64kB 3*128kB 3*256kB 4*512kB 1*1024kB 0*2048kB 887*4096kB = 3637796kB
Node 0 Normal: 26*4kB 26*8kB 14*16kB 11*32kB 11*64kB 6*128kB 8*256kB 7*512kB 9*1024kB 3*2048kB 31844*4096kB = 130456376kB
60151 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
34306032 pages RAM
612977 pages reserved
983 pages shared
166525 pages non-shared

That tells me there is more than enough free memory. So my thinking is that it is either:
 - it can't stick the new pagetables in memory because there isn't enough physical space in the region it wants - but the region it uses is Normal, so that should be OK?
 - it is hitting some page tables that are used by the hypervisor?

Was wondering if you had hit this at some point with SLES guests, and if you have any ideas of what I should look for? A couple of rough sketches of what I am looking at are below.
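For what it's worth, my reading of that WARNING (paraphrasing from a 3.4-era tree, so the exact line number may be off): mm/vmalloc.c:107 should be the WARN_ON(!pte_none(*pte)) in vmap_pte_range(), i.e. the PTE slot we are about to use for the new module mapping already has something in it. And if I am reading __vmalloc_area_node() right, "allocated 16384 of 20480 bytes" means all four data pages were allocated fine (the 20480 includes the guard page) and it is the map_vm_area() -> vmap_page_range() step that fails, so this does not look like running out of physical memory at all. Roughly, the check in question (not a verbatim copy):

/* Paraphrased from mm/vmalloc.c, v3.4-ish - not verbatim. */
static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
{
	pte_t *pte;

	/* Allocate the PTE page for this pmd slot if it is not there yet. */
	pte = pte_alloc_kernel(pmd, addr);
	if (!pte)
		return -ENOMEM;
	do {
		struct page *page = pages[*nr];

		/*
		 * This is the warning that fires: the PTE for the virtual
		 * address we want to map the module page at is already
		 * populated, so something else owns that slot.
		 */
		if (WARN_ON(!pte_none(*pte)))
			return -EBUSY;
		if (WARN_ON(!page))
			return -ENOMEM;
		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
		(*nr)++;
	} while (pte++, addr += PAGE_SIZE, addr != end);
	return 0;
}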
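On the second theory - that we are colliding with page-table entries that are already in use - one crude way to check would be to walk MODULES_VADDR..MODULES_END before loading any module and print every slot that already translates. This is an untested debugging sketch; xen_dump_module_range() is just a name I made up, lookup_address() is the existing x86 helper:

/*
 * Debugging sketch (untested): report any virtual address in the module
 * mapping range that already has a present translation before
 * module_alloc() has put anything there.  Slow and chatty - one-off use
 * only, e.g. from a late_initcall or the module_alloc() failure path.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

static void xen_dump_module_range(void)
{
	unsigned long addr;
	unsigned int level;
	unsigned int hits = 0;
	pte_t *pte;

	for (addr = MODULES_VADDR; addr < MODULES_END; addr += PAGE_SIZE) {
		pte = lookup_address(addr, &level);
		if (!pte || !pte_present(*pte))
			continue;
		pr_info("module VA %#lx already mapped: level %u pte %#lx\n",
			addr, level, (unsigned long)pte_val(*pte));
		if (++hits >= 32) {
			pr_info("... more hits, stopping dump\n");
			break;
		}
	}
	if (!hits)
		pr_info("module range %#lx-%#lx looks clean\n",
			(unsigned long)MODULES_VADDR, (unsigned long)MODULES_END);
}

Running that on the 132GB guest and comparing with the 131GB one should show whether the module range really is pre-populated, and at what level (4k vs 2M entries would also hint at where they came from).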
Thanks!

Attachment: 131gb.txt
Attachment: 133-gb-bad.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel