[Xen-users] [3.4.2 Dom0 crash] (XEN) mm.c: Error getting mfn from L1 entry

To: xen-users@xxxxxxxxxxxxxxxxxxx, Andrew Lyon <andrew.lyon@xxxxxxxxx>
Subject: [Xen-users] [3.4.2 Dom0 crash] (XEN) mm.c: Error getting mfn from L1 entry
From: Christian Fischer <Christian.Fischer@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 19 Mar 2010 14:30:58 +0100
Delivery-date: Fri, 19 Mar 2010 06:32:21 -0700
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.10
Hi all,

I'm seeing dom0 crashes when running a 64-bit Windows HVM guest.

system: HP ProLiant DL380 G6, 2x Xeon X5550, 20 GB RAM
OS: Gentoo amd64
kernel: Gentoo xen-sources-2.6.29-r4
Xen: 3.4.2

This is not a PAE build of Xen, but I must turn on PAE for the HVM guest to 
run. That is a bit confusing to me; I thought PAE was only for 32-bit highmem 
support.
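
For reference, a minimal HVM config of the kind I'm using looks roughly like 
this (a sketch only; the guest name, disk path and sizes are placeholders, not 
my exact file). If I understand correctly, pae=1 is needed even for 64-bit 
guests because x86-64 long mode is built on top of PAE paging, so the option 
is not only about 32-bit highmem:

  # /etc/xen/win64.cfg -- placeholder name
  kernel       = "/usr/lib/xen/boot/hvmloader"
  builder      = "hvm"
  memory       = 4096
  vcpus        = 2
  pae          = 1    # the 64-bit guest refuses to start without this
  acpi         = 1
  apic         = 1
  disk         = [ "phy:/dev/vg0/win64,hda,w" ]   # placeholder disk
  device_model = "/usr/lib/xen/bin/qemu-dm"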

Can you please help me track down the problem?
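
If it helps, I can try to resolve the faulting RIP from the oops below to a 
source line. Roughly what I'd run (the vmlinux path is a placeholder for my 
build tree):

  # map the RIP from the oops to file:line
  addr2line -e /usr/src/linux-2.6.29-xen-r4/vmlinux ffffffff803ca88b
  # the same via gdb
  gdb -batch -ex 'list *0xffffffff803ca88b' /usr/src/linux-2.6.29-xen-r4/vmlinux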

Christian


brlan: port 6(vif4.0) entering forwarding state
(XEN) mm.c:806:d0 Error getting mfn a6ad (pfn 3c3c4) from L1 entry 801000000a6adc07 for l1e_owner=0, pg_owner=0
(XEN) mm.c:4203:d0 ptwr_emulate: could not get_page_from_l1e()
BUG: unable to handle kernel paging request at ffff8804e4f94013
IP: [<ffffffff803ca88b>] swiotlb_bounce+0x35/0x3a
PGD 916067 PUD 5674067 PMD 579c067 PTE 80100004e4f94065
Oops: 0003 [#1] SMP
last sysfs file: /sys/devices/xen-backend/vbd-4-768/statistics/wr_sect
CPU 0
Pid: 0, comm: swapper Not tainted 2.6.29-xen-r400 #5 ProLiant DL380 G6
RIP: e030:[<ffffffff803ca88b>]  [<ffffffff803ca88b>] swiotlb_bounce+0x35/0x3a
RSP: e02b:ffffffff8085dec0  EFLAGS: 00010202
RAX: 0000000000001000 RBX: 0000000000000058 RCX: 0000000000000fed
RDX: 0000000000001000 RSI: ffff88001a27f013 RDI: ffff8804e4f94013
RBP: 0000000000000058 R08: ffff88001a27f000 R09: 00000004e4f94000
R10: 00001c1c00001000 R11: 0000000000000002 R12: 0000000000001000
R13: 0000000000000002 R14: ffff8804e7200000 R15: ffff8804e587ef70
FS:  00007f8431b556f0(0000) GS:ffffffff8085f040(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff8804e4f94013 CR3: 00000001de97d000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffffffff80802000, task ffffffff8076d320)
Stack:
 0000000000001000 ffffffff803cad5a ffff8804e7fa0000 ffff8804e7200020
 0000000000000002 ffffffff80479f2a 0000000000000002 ffffffff8085df30
 0000000000000020 0000000000000001 ffffffff80856ab0 000000000000000a
Call Trace:
 <IRQ> <0> [<ffffffff803cad5a>] ? unmap_single+0x40/0xd2
 [<ffffffff80479f2a>] ? cciss_softirq_done+0x87/0x1b4
 [<ffffffff803b2034>] ? blk_done_softirq+0x9c/0xae
 [<ffffffff80230bf1>] ? __do_softirq+0xa0/0x153
 [<ffffffff8020b3ec>] ? call_softirq+0x1c/0x28
 [<ffffffff8020c907>] ? do_softirq+0x4b/0xce
 [<ffffffff8020ae5e>] ? do_hypervisor_callback+0x1e/0x30
 <EOI> <0> [<ffffffff8020d226>] ? xen_safe_halt+0xa2/0xb7
 [<ffffffff802105c3>] ? xen_idle+0x2e/0x67
 [<ffffffff80209a06>] ? cpu_idle+0x57/0x93
Code: 48 89 d0 ff c9 75 13 48 be 00 00 00 00 00 88 ff ff 48 8d 34 37 4c 89 c7 
eb 0e 48 bf 00 00 00 00 00 88 ff ff 49 8d 3c 39 48 89 c1 <f3> a4 41 58 c3 49 
89 d0 48 89 f0 48 8b 15 63 5b 4f 00 48 2b 05
RIP  [<ffffffff803ca88b>] swiotlb_bounce+0x35/0x3a
 RSP <ffffffff8085dec0>
CR2: ffff8804e4f94013
---[ end trace 0787a6f026a147e9 ]---
Kernel panic - not syncing: Fatal exception in interrupt
------------[ cut here ]------------
WARNING: at kernel/smp.c:329 smp_call_function_many+0x4c/0x1f3()
Hardware name: ProLiant DL380 G6
Pid: 0, comm: swapper Tainted: G      D    2.6.29-xen-r400 #5
Call Trace:
 <IRQ>  [<ffffffff8022b854>] warn_slowpath+0xd3/0x10d
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff8022c49f>] vprintk+0x2be/0x319
 [<ffffffff8063498d>] printk+0x4e/0x56
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff8024834f>] smp_call_function_many+0x4c/0x1f3
 [<ffffffff80210558>] stop_this_cpu+0x0/0x3d
 [<ffffffff80248516>] smp_call_function+0x20/0x25
 [<ffffffff80215249>] xen_smp_send_stop+0x11/0x7b
 [<ffffffff80634884>] panic+0x8d/0x148
 [<ffffffff80241180>] up+0xe/0x36
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff8020e78d>] oops_end+0xc2/0xcf
 [<ffffffff80218283>] do_page_fault+0xed7/0xf6c
 [<ffffffff802251a4>] enqueue_task_fair+0xb1/0x148
 [<ffffffff803baaaa>] cpumask_next_and+0x2a/0x3a
 [<ffffffff80223bba>] find_busiest_group+0x302/0x6ac
 [<ffffffff804b853f>] evtchn_get_xen_pirq+0x46/0x66
 [<ffffffff804b8639>] pirq_unmask_and_notify+0xda/0xe4
 [<ffffffff80637678>] page_fault+0x28/0x30
 [<ffffffff803ca88b>] swiotlb_bounce+0x35/0x3a
 [<ffffffff803cad5a>] unmap_single+0x40/0xd2
 [<ffffffff80479f2a>] cciss_softirq_done+0x87/0x1b4
 [<ffffffff803b2034>] blk_done_softirq+0x9c/0xae
 [<ffffffff80230bf1>] __do_softirq+0xa0/0x153
 [<ffffffff8020b3ec>] call_softirq+0x1c/0x28
 [<ffffffff8020c907>] do_softirq+0x4b/0xce
 [<ffffffff8020ae5e>] do_hypervisor_callback+0x1e/0x30
 <EOI>  [<ffffffff8020d226>] xen_safe_halt+0xa2/0xb7
 [<ffffffff802105c3>] xen_idle+0x2e/0x67
 [<ffffffff80209a06>] cpu_idle+0x57/0x93
---[ end trace 0787a6f026a147ea ]---
------------[ cut here ]------------
WARNING: at kernel/smp.c:226 smp_call_function_single+0x48/0x15c()
Hardware name: ProLiant DL380 G6
Pid: 0, comm: swapper Tainted: G      D W  2.6.29-xen-r400 #5
Call Trace:
 <IRQ>  [<ffffffff8022b854>] warn_slowpath+0xd3/0x10d
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff8022c49f>] vprintk+0x2be/0x319
 [<ffffffff8063498d>] printk+0x4e/0x56
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff802481ef>] smp_call_function_single+0x48/0x15c
 [<ffffffff8024839a>] smp_call_function_many+0x97/0x1f3
 [<ffffffff80210558>] stop_this_cpu+0x0/0x3d
 [<ffffffff80248516>] smp_call_function+0x20/0x25
 [<ffffffff80215249>] xen_smp_send_stop+0x11/0x7b
 [<ffffffff80634884>] panic+0x8d/0x148
 [<ffffffff80241180>] up+0xe/0x36
 [<ffffffff8022bfbd>] release_console_sem+0x1e9/0x21b
 [<ffffffff8020e78d>] oops_end+0xc2/0xcf
 [<ffffffff80218283>] do_page_fault+0xed7/0xf6c
 [<ffffffff802251a4>] enqueue_task_fair+0xb1/0x148
 [<ffffffff803baaaa>] cpumask_next_and+0x2a/0x3a
 [<ffffffff80223bba>] find_busiest_group+0x302/0x6ac
 [<ffffffff804b853f>] evtchn_get_xen_pirq+0x46/0x66
 [<ffffffff804b8639>] pirq_unmask_and_notify+0xda/0xe4
 [<ffffffff80637678>] page_fault+0x28/0x30
 [<ffffffff803ca88b>] swiotlb_bounce+0x35/0x3a
 [<ffffffff803cad5a>] unmap_single+0x40/0xd2
 [<ffffffff80479f2a>] cciss_softirq_done+0x87/0x1b4
 [<ffffffff803b2034>] blk_done_softirq+0x9c/0xae
 [<ffffffff80230bf1>] __do_softirq+0xa0/0x153
 [<ffffffff8020b3ec>] call_softirq+0x1c/0x28
 [<ffffffff8020c907>] do_softirq+0x4b/0xce
 [<ffffffff8020ae5e>] do_hypervisor_callback+0x1e/0x30
 <EOI>  [<ffffffff8020d226>] xen_safe_halt+0xa2/0xb7
 [<ffffffff802105c3>] xen_idle+0x2e/0x67
 [<ffffffff80209a06>] cpu_idle+0x57/0x93
---[ end trace 0787a6f026a147eb ]---




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
