xen-devel

RE: [Xen-devel] Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:1

On Wednesday, 2006-10-04 at 02:19 +0200, Christophe Saout wrote:

Update:

When running with 4GB of total memory instead of 12GB, everything is fine
(the three virtual machines, Dom0 + 2 x DomU, are assigned 1GB of memory
each in both test runs). Does that help?
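
For completeness, here is a minimal sketch of the relevant part of each DomU
config; the kernel version is the one from the log below, the file name and
vcpu count are placeholders/inferences, and Dom0 is capped to 1GB separately,
e.g. via something like dom0_mem=1024M on the hypervisor command line:

  # /etc/xen/domU.cfg -- placeholder name
  kernel = "/boot/vmlinuz-2.6.16.29-xen-xenU"  # version taken from the oops below
  memory = 1024   # 1GB per domain, identical in the 4GB and 12GB runs
  vcpus  = 4      # at least 4, since the oops below runs on CPU 3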

If you have any ideas where I should do more debugging, please tell me.
We would really like to get this machine going.

> Oct  3 23:27:28 tuek BUG: soft lockup detected on CPU#0!
> Oct  3 23:27:28 tuek CPU 0:
> Oct  3 23:27:28 tuek Modules linked in: nfsd exportfs
> Oct  3 23:27:28 tuek Pid: 3988, comm: gmetad Not tainted 2.6.16.29-xen-xenU #2
> Oct  3 23:27:28 tuek RIP: e030:[<ffffffff8010722a>] <ffffffff8010722a>{hypercall_page+554}
> Oct  3 23:27:28 tuek RSP: e02b:ffff88003e32f9e0  EFLAGS: 00000246
> Oct  3 23:27:28 tuek RAX: 0000000000030000 RBX: ffff8800017ea448 RCX: ffffffff8010722a
> Oct  3 23:27:28 tuek RDX: ffffffffff5fd000 RSI: 0000000000000000 RDI: 0000000000000000
> Oct  3 23:27:28 tuek RBP: ffff88003e32f9f8 R08: 0000000000000000 R09: 0000000000000000
> Oct  3 23:27:28 tuek R10: 0000000000007ff0 R11: 0000000000000246 R12: 0000000000001000
> Oct  3 23:27:28 tuek R13: ffff88003e32fd38 R14: 0000000000005000 R15: 0000000000000002
> Oct  3 23:27:28 tuek FS:  00002aeaaa684b00(0000) GS:ffffffff804bf000(0000) knlGS:0000000000000000
> Oct  3 23:27:28 tuek CS:  e033 DS: 0000 ES: 0000
> Oct  3 23:27:28 tuek
> Oct  3 23:27:28 tuek Call Trace: <ffffffff802dc47e>{force_evtchn_callback+14}
> Oct  3 23:27:28 tuek <ffffffff803d4ab6>{do_page_fault+214} <ffffffff8010b6fb>{error_exit+0}
> Oct  3 23:27:28 tuek <ffffffff8010b6fb>{error_exit+0} <ffffffff8014f50e>{file_read_actor+62}
> Oct  3 23:27:28 tuek <ffffffff8014f57c>{file_read_actor+172} <ffffffff8014d19c>{do_generic_mapping_read+412}
> Oct  3 23:27:28 tuek <ffffffff8014f4d0>{file_read_actor+0} <ffffffff8014dce8>{__generic_file_aio_read+424}
> Oct  3 23:27:28 tuek <ffffffff8014dd98>{generic_file_aio_read+56} <ffffffff801f8f51>{nfs_file_read+129}
> Oct  3 23:27:28 tuek <ffffffff80172dd0>{do_sync_read+240} <ffffffff80161981>{vma_link+129}
> Oct  3 23:27:28 tuek <ffffffff80140500>{autoremove_wake_function+0} <ffffffff80162b02>{do_mmap_pgoff+1458}
> Oct  3 23:27:28 tuek <ffffffff8017381b>{vfs_read+187} <ffffffff80173ce0>{sys_read+80}
> Oct  3 23:27:28 tuek <ffffffff8010afbe>{system_call+134} <ffffffff8010af38>{system_call+0}
>
> Oct  3 23:27:52 tuek Bad page state in process 'bash'
> Oct  3 23:27:52 tuek page:ffff880001c72bc8 flags:0x0000000000000000 mapping:0000000000000000 mapcount:1 count:1
> Oct  3 23:27:52 tuek Trying to fix it up, but a reboot is needed
> Oct  3 23:27:52 tuek Backtrace:
> Oct  3 23:27:52 tuek
> Oct  3 23:27:52 tuek Call Trace: <ffffffff801512ad>{bad_page+93} <ffffffff80151d57>{get_page_from_freelist+775}
> Oct  3 23:27:52 tuek <ffffffff80151f1d>{__alloc_pages+157} <ffffffff80152249>{get_zeroed_page+73}
> Oct  3 23:27:52 tuek <ffffffff80158cf4>{__pmd_alloc+36} <ffffffff8015e55e>{copy_page_range+1262}
> Oct  3 23:27:52 tuek <ffffffff802a6bea>{rb_insert_color+250} <ffffffff80127cb7>{copy_process+3079}
> Oct  3 23:27:52 tuek <ffffffff80128c8e>{do_fork+238} <ffffffff801710d6>{fd_install+54}
> Oct  3 23:27:52 tuek <ffffffff80134e8c>{sigprocmask+220} <ffffffff8010afbe>{system_call+134}
> Oct  3 23:27:52 tuek <ffffffff801094b3>{sys_clone+35} <ffffffff8010b3e9>{ptregscall_common+61}
>
> Oct  3 23:27:52 tuek ----------- [cut here ] --------- [please bite here ] ---------
> Oct  3 23:27:52 tuek Kernel BUG at arch/x86_64/mm/../../i386/mm/hypervisor.c:198
> Oct  3 23:27:52 tuek invalid opcode: 0000 [1] SMP
> Oct  3 23:27:52 tuek CPU 3
> Oct  3 23:27:52 tuek Modules linked in: nfsd exportfs
> Oct  3 23:27:52 tuek Pid: 4617, comm: bash Tainted: G    B 2.6.16.29-xen-xenU #2
> Oct  3 23:27:52 tuek RIP: e030:[<ffffffff80117cb5>] <ffffffff80117cb5>{xen_pgd_pin+85}
> Oct  3 23:27:52 tuek RSP: e02b:ffff880038ed9d58  EFLAGS: 00010282
> Oct  3 23:27:52 tuek RAX: 00000000ffffffea RBX: ffff880000e098c0 RCX: 000000000001dc48
> Oct  3 23:27:52 tuek RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff880038ed9d58
> Oct  3 23:27:52 tuek RBP: ffff880038ed9d78 R08: ffff880038e7fff8 R09: ffff880038e7fff8
> Oct  3 23:27:52 tuek R10: 0000000000007ff0 R11: ffff880002d39008 R12: 0000000000000000
> Oct  3 23:27:52 tuek R13: ffff8800006383c0 R14: 0000000001200011 R15: ffff8800006383c0
> Oct  3 23:27:52 tuek FS:  00002afecc63ae60(0000) GS:ffffffff804bf180(0000) knlGS:0000000000000000
> Oct  3 23:27:52 tuek CS:  e033 DS: 0000 ES: 0000
> Oct  3 23:27:52 tuek Process bash (pid: 4617, threadinfo ffff880038ed8000, task ffff88003f9e0180)
> Oct  3 23:27:52 tuek Stack: 0000000000000003 00000000001b3aa7 0000000001200011 ffff880002d39008
> Oct  3 23:27:52 tuek ffff880038ed9d98 ffffffff80117543 0000000000000000 ffff88003ca4ea28
> Oct  3 23:27:52 tuek ffff880038ed9da8 ffffffff801175f2
> Oct  3 23:27:52 tuek Call Trace: <ffffffff80117543>{mm_pin+387} <ffffffff801175f2>{_arch_dup_mmap+18}
> Oct  3 23:27:52 tuek <ffffffff80127cf6>{copy_process+3142} <ffffffff80128c8e>{do_fork+238}
> Oct  3 23:27:52 tuek <ffffffff801710d6>{fd_install+54} <ffffffff80134e8c>{sigprocmask+220}
> Oct  3 23:27:52 tuek <ffffffff8010afbe>{system_call+134} <ffffffff801094b3>{sys_clone+35}
> Oct  3 23:27:52 tuek <ffffffff8010b3e9>{ptregscall_common+61}
> Oct  3 23:27:52 tuek
> Oct  3 23:27:52 tuek Code: 0f 0b 68 38 d7 3f 80 c2 c6 00 90 c9 c3 0f 1f 80 00 00 00 00
> Oct  3 23:27:52 tuek RIP <ffffffff80117cb5>{xen_pgd_pin+85} RSP <ffff880038ed9d58>
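
One observation that may help: in the final oops, RAX is 00000000ffffffea,
which looks like -22 (-EINVAL) returned by the pin hypercall, and the 0f 0b
(ud2) at the start of the Code: line is a BUG_ON firing on that return value.
As a rough sketch of what xen_pgd_pin() does in this tree (reconstructed from
the 2.6.16 Xen patches, so the exact code at hypervisor.c:198 may differ):

  /* arch/i386/mm/hypervisor.c (shared with x86_64), simplified */
  void xen_pgd_pin(unsigned long ptr)
  {
          struct mmuext_op op;

          op.cmd = MMUEXT_PIN_L4_TABLE;   /* top-level pagetable on x86_64 */
          op.arg1.mfn = pfn_to_mfn(ptr >> PAGE_SHIFT);

          /* The hypervisor refuses to pin the new pgd during fork(),
           * and this BUG_ON produces the invalid-opcode oops above. */
          BUG_ON(HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF) < 0);
  }

So the question seems to be why the hypervisor rejects the pin as soon as the
machine runs with more than 4GB of memory.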

Attachment: signature.asc
Description: This is a digitally signed message part

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel