
[Xen-devel] BUG - last sysfs file: /sys/devices/vif-0/net/eth0/broadcast


  • To: "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Nathan Stratton <nathan@xxxxxxxxxxxx>
  • Date: Tue, 11 Aug 2009 18:15:56 -0500 (CDT)
  • Delivery-date: Tue, 11 Aug 2009 16:16:24 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>


The domU is stock Fedora 11 (where we see the crash), running our video conferencing application at about 8-12 Mbps of network I/O.

We have tried two dom0s:
CentOS 5.3 -- 2.6.18-128.2.1.el5xen -- Xen 3.4
Fedora 11 -- 2.6.31-rc5 -- Xen 3.4.1

BUG: unable to handle kernel paging request at 0000000000100100
IP: [<ffffffff810a4931>] get_page_from_freelist+0x2f9/0x673
PGD 6c85b067 PUD 6e063067 PMD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/devices/vif-0/net/eth0/broadcast
CPU 3
Modules linked in: ipv6 pcspkr xen_netfront joydev xen_blkfront [last unloaded: scsi_wait_scan]
Pid: 14784, comm: texgrid Tainted: G W 2.6.29.6-217.2.3.fc11.x86_64 #1
RIP: e030:[<ffffffff810a4931>]  [<ffffffff810a4931>] get_page_from_freelist+0x2f9/0x673
RSP: e02b:ffff88006c4d1b68  EFLAGS: 00010093
RAX: ffff88006e84e890 RBX: ffff88006e84e880 RCX: ffffe20001205b50
RDX: 0000000000100100 RSI: 0000000000000002 RDI: 0000000000000002
RBP: ffff88006c4d1c48 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000681 R11: 0000000000044fb1 R12: 00000000001000d8
R13: ffff880000008880 R14: 0000000000000002 R15: 0000000000000000
FS: 00007fe7f9c62910(0000) GS:ffff88006e84ea00(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000100100 CR3: 000000006dd4d000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process texgrid (pid: 14784, threadinfo ffff88006c4d0000, task ffff88006d6fc500)
Stack:
 00007fe7f5c62000 800088006c4d1c38 0000000100000040 0000000000000001
 000000006c4d1ba8 ffff880000008880 0000000000000002 ffff880000008ad0
 00000000f5c61fff ffffffff81780860 000000036d411d40 0000000000000000
Call Trace:
 [<ffffffff810a4d9d>] __alloc_pages_internal+0xf2/0x429
 [<ffffffff8100e619>] ? __spin_time_accum+0x21/0x37
 [<ffffffff810c7959>] alloc_page_vma+0xd1/0xd3
 [<ffffffff810bd36a>] ? anon_vma_prepare+0x2d/0xd7
 [<ffffffff810b43c8>] handle_mm_fault+0x1a0/0x7c5
 [<ffffffff813af045>] do_page_fault+0x5b5/0x9e9
 [<ffffffff81043b7c>] ? update_curr_rt+0x186/0x1a9
 [<ffffffff8103ac88>] ? pick_next_task+0x39/0x49
 [<ffffffff8100be86>] ? xen_mc_flush+0x191/0x1e8
 [<ffffffff8100ac7d>] ? xen_mc_issue+0x47/0x67
 [<ffffffff8100aec1>] ? xen_clts+0x3d/0x3f
 [<ffffffff8100e5dc>] ? xen_spin_unlock+0x11/0x2d
 [<ffffffff813ac4dd>] ? trace_hardirqs_off_thunk+0x3a/0x6c
 [<ffffffff813acba5>] page_fault+0x25/0x30
Code: 49 8d 4c 24 28 48 39 c8 0f 18 0a 75 dc eb 2e 4c 8b 63 10 49 83 ec 28 eb 12 48 8b b5 50 ff ff ff 49 39 74 24 10 74 16 4c 8d 62 d8 <49> 8b 54 24 28 49 8d 4c 24 28 48 39 c8 0f 18 0a 75 dc 49 8d 54
RIP  [<ffffffff810a4931>] get_page_from_freelist+0x2f9/0x673
 RSP <ffff88006c4d1b68>
CR2: 0000000000100100
---[ end trace d94e87927507a16f ]---
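
One note that may help whoever triages this: the faulting address
0000000000100100 is the kernel's LIST_POISON1 value, which list_del()
writes into a removed entry's ->next pointer. So this smells like
corruption of a list walked by the page allocator (e.g. a double free
or a clobbered struct page) rather than a bad mapping. A minimal
sketch of the relevant definitions (mirroring include/linux/poison.h
and include/linux/list.h; simplified, not the exact 2.6.29.6 code):

    /* list_del() poisons the entry it removes, so any later walk
     * through a stale entry faults at a recognizable address. */
    #define LIST_POISON1 ((void *) 0x00100100)
    #define LIST_POISON2 ((void *) 0x00200200)

    struct list_head {
            struct list_head *next, *prev;
    };

    static void list_del(struct list_head *entry)
    {
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
            entry->next = LIST_POISON1; /* later deref faults at 0x100100 */
            entry->prev = LIST_POISON2;
    }

Consistent with that, the faulting bytes <49> 8b 54 24 28 decode to
mov 0x28(%r12),%rdx with R12 = 00000000001000d8, i.e. a load from
0x1000d8 + 0x28 = 0x100100 (the CR2 above), and the preceding
lea -0x28(%rdx),%r12 looks like the container_of() step of a list
walk, assuming page->lru sits at offset 0x28 in this kernel. If that
reading is right, get_page_from_freelist() reached a page whose
lru.next had already been poisoned, and CONFIG_DEBUG_LIST might catch
the corrupting list_del() earlier.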

Nathan Stratton                                CTO, BlinkMind, Inc.
nathan at robotics.net                         nathan at blinkmind.com
http://www.robotics.net                        http://www.blinkmind.com

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

