
RE: [Xen-devel] crash with xen/stable-2.6.32.x



On Fri, 2 Apr 2010, Yu, Ke wrote:

BTW, does the xen/master branch or bare metal work on this machine? And on the i686 box that works, can 'xenpm get-cpuidle-state' show the correct Cx information?

Running the same kernel without the xen hypervisor gives the warning below, though the system appears to function normally. I can run a xen kernel built from xen/stable from about a week ago as a dom0, and on that kernel xenpm get-cpuidle-state reports:

Max C-state: C7

cpu id               : 0
total C-states       : 0
idle time(ms)        : 0

cpu id               : 1
total C-states       : 0
idle time(ms)        : 0

        Michael Young



Apr  2 15:17:39 localhost kernel: ACPI: SSDT 00000000df66e4b4 002C8 (v01 PmRef Cpu0Ist 00003000 INTL 20050624)
Apr  2 15:17:39 localhost kernel: ACPI: SSDT 00000000df66de4a 005E5 (v01 PmRef Cpu0Cst 00003001 INTL 20050624)
Apr  2 15:17:39 localhost kernel: Marking TSC unstable due to TSC halts in idle
Apr  2 15:17:39 localhost kernel: processor LNXCPU:00: registered as cooling_device0
Apr  2 15:17:39 localhost kernel: ------------[ cut here ]------------
Apr  2 15:17:39 localhost kernel: WARNING: at mm/page_alloc.c:1820 __alloc_pages_nodemask+0x174/0x631()
Apr  2 15:17:39 localhost kernel: Hardware name: Inspiron 1525
Apr  2 15:17:39 localhost kernel: Modules linked in:
Apr  2 15:17:39 localhost kernel: Pid: 1, comm: swapper Not tainted 2.6.32.10-1.2.96.xendom0.fc12.x86_64 #1
Apr  2 15:17:39 localhost kernel: Call Trace:
Apr  2 15:17:39 localhost kernel: [<ffffffff81057320>] warn_slowpath_common+0x7c/0x94
Apr  2 15:17:39 localhost kernel: [<ffffffff8105734c>] warn_slowpath_null+0x14/0x16
Apr  2 15:17:39 localhost kernel: [<ffffffff810dd59b>] __alloc_pages_nodemask+0x174/0x631
Apr  2 15:17:39 localhost kernel: [<ffffffff81038005>] ? __ioremap_caller+0x294/0x2fa
Apr  2 15:17:39 localhost kernel: [<ffffffff81105f47>] alloc_page_interleave+0x39/0x86
Apr  2 15:17:39 localhost kernel: [<ffffffff81106046>] alloc_pages_current+0x6c/0x9e
Apr  2 15:17:39 localhost kernel: [<ffffffff810dc26e>] __get_free_pages+0xe/0x4b
Apr  2 15:17:39 localhost kernel: [<ffffffff8110f114>] __kmalloc+0x47/0x15e
Apr  2 15:17:39 localhost kernel: [<ffffffff8127dead>] ? acpi_ex_load_op+0xc2/0x265
Apr  2 15:17:39 localhost kernel: [<ffffffff8127dd51>] acpi_os_allocate+0x2a/0x2c
Apr  2 15:17:39 localhost kernel: [<ffffffff8127dec8>] acpi_ex_load_op+0xdd/0x265
Apr  2 15:17:39 localhost kernel: [<ffffffff81280945>] acpi_ex_opcode_1A_1T_0R+0x2a/0x50
Apr  2 15:17:39 localhost kernel: [<ffffffff81277d1b>] acpi_ds_exec_end_op+0xef/0x3dc
Apr  2 15:17:39 localhost kernel: [<ffffffff8128a49e>] acpi_ps_parse_loop+0x7c0/0x946
Apr  2 15:17:39 localhost kernel: [<ffffffff81278611>] ? acpi_ds_call_control_method+0x16b/0x1da
Apr  2 15:17:39 localhost kernel: [<ffffffff81289588>] acpi_ps_parse_aml+0x9f/0x2de
Apr  2 15:17:39 localhost kernel: [<ffffffff8128ad2c>] acpi_ps_execute_method+0x1e9/0x2b9
Apr  2 15:17:39 localhost kernel: [<ffffffff8128628a>] acpi_ns_evaluate+0xe6/0x1ad
Apr  2 15:17:39 localhost kernel: [<ffffffff81285cb0>] acpi_evaluate_object+0xfe/0x1f7
Apr  2 15:17:39 localhost kernel: [<ffffffff81026b34>] ? init_intel_pdc+0xd6/0x17d
Apr  2 15:17:39 localhost kernel: [<ffffffff81292983>] acpi_processor_set_pdc+0x41/0x43
Apr  2 15:17:39 localhost kernel: [<ffffffff8145b0fb>] acpi_processor_add+0x579/0x6a4
Apr  2 15:17:39 localhost kernel: [<ffffffff8126e72d>] acpi_device_probe+0x50/0x122
Apr  2 15:17:39 localhost kernel: [<ffffffff812e48a2>] driver_probe_device+0xea/0x217
Apr  2 15:17:39 localhost kernel: [<ffffffff812e4a2c>] __driver_attach+0x5d/0x81
Apr  2 15:17:39 localhost kernel: [<ffffffff812e49cf>] ? __driver_attach+0x0/0x81
Apr  2 15:17:39 localhost kernel: [<ffffffff812e3cb4>] bus_for_each_dev+0x53/0x88
Apr  2 15:17:39 localhost kernel: [<ffffffff812e4632>] driver_attach+0x1e/0x20
Apr  2 15:17:39 localhost kernel: [<ffffffff812e4272>] bus_add_driver+0xf7/0x25d
Apr  2 15:17:39 localhost kernel: [<ffffffff812e4d2c>] driver_register+0x9d/0x10e
Apr  2 15:17:39 localhost kernel: [<ffffffff81852666>] ? acpi_processor_init+0x0/0x136
Apr  2 15:17:39 localhost kernel: [<ffffffff8126fe18>] acpi_bus_register_driver+0x43/0x47
Apr  2 15:17:39 localhost kernel: [<ffffffff81852726>] acpi_processor_init+0xc0/0x136
Apr  2 15:17:39 localhost kernel: [<ffffffff81852662>] ? acpi_pci_slot_init+0x1c/0x20
Apr  2 15:17:39 localhost kernel: [<ffffffff8100a069>] do_one_initcall+0x5e/0x159
Apr  2 15:17:39 localhost kernel: [<ffffffff81824766>] kernel_init+0x20f/0x269
Apr  2 15:17:39 localhost kernel: [<ffffffff81013d6a>] child_rip+0xa/0x20
Apr  2 15:17:39 localhost kernel: [<ffffffff81824557>] ? kernel_init+0x0/0x269
Apr  2 15:17:39 localhost kernel: [<ffffffff81013d60>] ? child_rip+0x0/0x20
Apr  2 15:17:39 localhost kernel: ---[ end trace 5a5d197966b56a2e ]---
Apr  2 15:17:39 localhost kernel: ACPI Error (psparse-0537):
Apr  2 15:17:39 localhost kernel: Switching to clocksource hpet
Apr  2 15:17:39 localhost kernel: Method parse/execution failed [\_PR_.CPU1._OSC] (Node ffff88011ba6e000), AE_NO_MEMORY
Apr  2 15:17:39 localhost kernel: ACPI Error (psparse-0537): Method parse/execution failed [\_PR_.CPU1._PDC] (Node ffff88011ba63fe0), AE_NO_MEMORY




 

