Re: [Xen-devel] xen-unstable pvops 2.6.32.21 kernel/lockdep.c:2323 trace
On 09/16/2010 04:06 PM, Bruce Edge wrote:
> On Thu, Sep 16, 2010 at 3:27 PM, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
>> On 09/15/2010 04:37 PM, Bruce Edge wrote:
>>> With the tip of xen-unstable and pvops 2.6.32.x I get this on the dom0
>>> serial console when I start a Linux pvops domU (using the same kernel
>>> as the dom0):
>> Do you know what the actual git revision is?
> b297cdac0373625d3cd0e6f2b393570dcf2edba6. It's a current 2.6.32.x tree.
>
>> Change 04cc1e6a6a85c4 fixes this problem.
> I checked the source against that checkin and it's definitely applied
> to the kernel, both dom0 and domU (I have the same kernel for both).
That's odd.
> It only happens the first time on each dom0 boot. After the first domU
> is created, it's not reported again.
I think the first report disables the lockdep machinery from then on,
because after the first hit it can no longer (at least potentially) report
problems accurately.
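
For context, that one-shot behaviour is by design. A toy userspace sketch of
the general pattern (an illustration of the debug_locks_off() idea; the
check_irq_state() helper is invented for illustration, and none of this is
the actual lockdep source):

#include <stdio.h>

/* Toy model: all lockdep-style checks are guarded by a global flag,
 * and the first report clears it, so later problems in the same boot
 * go unreported rather than risking inaccurate reports from a
 * possibly-corrupted state. */
static int debug_locks = 1;             /* checking enabled until first report */

static int debug_locks_off(void)
{
	if (debug_locks) {
		debug_locks = 0;        /* disable all further checking */
		return 1;               /* caller prints the one report */
	}
	return 0;                       /* already tripped: stay silent */
}

static void check_irq_state(int inconsistent)
{
	if (!debug_locks)
		return;                 /* machinery is off after the first hit */
	if (inconsistent && debug_locks_off())
		printf("WARNING: inconsistent lock/IRQ state\n");
}

int main(void)
{
	check_irq_state(1);             /* first inconsistency: reported */
	check_irq_state(1);             /* second one: silently ignored */
	return 0;
}
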
> Here's another occurrence on a completely different HW platform from
> the first one I reported. The original was on an HP ProLiant with 2
> CPUs; this is on a SuperMicro with 1 CPU.
Yeah, it shouldn't be hardware-dependent at all.
J
> [ 88.876635] WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0x12f/0x190()
> [ 88.876669] Hardware name: X8ST3
> [ 88.876716] Modules linked in: xt_physdev ipmi_msghandler ipv6 osa_mfgdom0 xenfs xen_gntdev xen_evtchn xen_pciback tun bridge stp llc serio_raw lp joydev ppdev ioatdma parport_pc dca parport usb_storage e1000e usbhid hid
> [ 88.876930] Pid: 11, comm: xenwatch Not tainted 2.6.32.21-1 #1
> [ 88.876972] Call Trace:
> [ 88.876985] <IRQ> [<ffffffff810aa22f>] ? trace_hardirqs_on_caller+0x12f/0x190
> [ 88.877027] [<ffffffff8106bf70>] warn_slowpath_common+0x80/0xd0
> [ 88.877059] [<ffffffff815f3f50>] ? _spin_unlock_irq+0x30/0x40
> [ 88.877090] [<ffffffff8106bfd4>] warn_slowpath_null+0x14/0x20
> [ 88.877120] [<ffffffff810aa22f>] trace_hardirqs_on_caller+0x12f/0x190
> [ 88.877151] [<ffffffff810aa29d>] trace_hardirqs_on+0xd/0x10
> [ 88.877181] [<ffffffff815f3f50>] _spin_unlock_irq+0x30/0x40
> [ 88.877212] [<ffffffff813c5055>] add_to_net_schedule_list_tail+0x85/0xd0
> [ 88.877243] [<ffffffff813c62a6>] netif_be_int+0x36/0x160
> [ 88.877269] [<ffffffff810e1150>] handle_IRQ_event+0x70/0x180
> [ 88.877300] [<ffffffff810e3769>] handle_edge_irq+0xc9/0x170
> [ 88.877330] [<ffffffff813b8dff>] __xen_evtchn_do_upcall+0x1bf/0x1f0
> [ 88.877361] [<ffffffff813b937d>] xen_evtchn_do_upcall+0x3d/0x60
> [ 88.877393] [<ffffffff8101647e>] xen_do_hypervisor_callback+0x1e/0x30
> [ 88.877422] <EOI> [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
> [ 88.877463] [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
> [ 88.877500] [<ffffffff813bcee4>] ? xb_write+0x1e4/0x290
> [ 88.877538] [<ffffffff813bd95a>] ? xs_talkv+0x6a/0x1f0
> [ 88.877575] [<ffffffff813bd968>] ? xs_talkv+0x78/0x1f0
> [ 88.877604] [<ffffffff813bdc5d>] ? xs_single+0x4d/0x60
> [ 88.877629] [<ffffffff813be592>] ? xenbus_read+0x52/0x80
> [ 88.877655] [<ffffffff813c888c>] ? frontend_changed+0x48c/0x770
> [ 88.877686] [<ffffffff813bf7fd>] ? xenbus_otherend_changed+0xdd/0x1b0
> [ 88.877717] [<ffffffff810111ef>] ? xen_restore_fl_direct_end+0x0/0x1
> [ 88.877748] [<ffffffff810ac8c0>] ? lock_release+0xb0/0x230
> [ 88.877774] [<ffffffff813bfb70>] ? frontend_changed+0x10/0x20
> [ 88.877804] [<ffffffff813bd585>] ? xenwatch_thread+0x55/0x160
> [ 88.877836] [<ffffffff810934a0>] ? autoremove_wake_function+0x0/0x40
> [ 88.877876] [<ffffffff813bd530>] ? xenwatch_thread+0x0/0x160
> [ 88.877922] [<ffffffff81093126>] ? kthread+0x96/0xb0
> [ 88.877960] [<ffffffff8101632a>] ? child_rip+0xa/0x20
> [ 88.877988] [<ffffffff81015c90>] ? restore_args+0x0/0x30
> [ 88.878043] [<ffffffff81016320>] ? child_rip+0x0/0x20
> [ 88.878067] ---[ end trace 159d41648bcc43b4 ]---
> [ 95.254217] vif1.0: no IPv6 routers present
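
The trace itself points at the likely culprit: _spin_unlock_irq() is reached
from add_to_net_schedule_list_tail() while still inside the netif_be_int()
interrupt handler, so interrupts get re-enabled in hardirq context, which is
exactly the inconsistency the check at kernel/lockdep.c:2323 flags. Here is a
toy userspace model of why the irqsave/irqrestore spinlock variants are the
usual fix in code reachable from interrupt context (the function names mirror
the kernel's, but this is only a sketch, not the actual netback code):

#include <stdio.h>

static int irqs_enabled;                  /* models the CPU interrupt flag */

/* The _irq variants disable and re-enable unconditionally... */
static void spin_lock_irq(void)   { irqs_enabled = 0; }
static void spin_unlock_irq(void) { irqs_enabled = 1; }  /* unconditional enable! */

/* ...while the irqsave variants save and restore the caller's state. */
static void spin_lock_irqsave(int *flags)
{
	*flags = irqs_enabled;
	irqs_enabled = 0;
}

static void spin_unlock_irqrestore(int flags)
{
	irqs_enabled = flags;             /* leaves IRQs as the caller had them */
}

static void schedule_list_tail_buggy(void)
{
	spin_lock_irq();
	/* ... manipulate the list under the lock ... */
	spin_unlock_irq();                /* re-enables IRQs even inside a handler */
}

static void schedule_list_tail_fixed(void)
{
	int flags;

	spin_lock_irqsave(&flags);
	/* ... manipulate the list under the lock ... */
	spin_unlock_irqrestore(flags);
}

int main(void)
{
	irqs_enabled = 0;                 /* as if entering an interrupt handler */
	schedule_list_tail_buggy();
	printf("buggy variant: irqs_enabled=%d (should still be 0)\n", irqs_enabled);

	irqs_enabled = 0;
	schedule_list_tail_fixed();
	printf("fixed variant: irqs_enabled=%d\n", irqs_enabled);
	return 0;
}
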
>
> -Bruce
>
>> J
>>
>>> 0 kaan-18 ~ #> (XEN) tmem: all pools frozen for all domains
>>> (XEN) tmem: all pools thawed for all domains
>>> (XEN) tmem: all pools frozen for all domains
>>> (XEN) tmem: all pools thawed for all domains
>>>
>>> mapping kernel into physical memory
>>> about to get started...
>>> [ 188.652747] ------------[ cut here ]------------
>>> WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0x12f/0x190()
>>> [ 188.652826] Hardware name: ProLiant DL380 G6
>>> [ 188.652844] Modules linked in: xt_physdev osa_mfgdom0 xenfs xen_gntdev xen_evtchn ipv6 fbcon tileblit font bitblit softcursor xen_pciback radeon tun ttm drm_kms_helper serio_raw ipmi_si bridge drm i2c_algo_bit stp ipmi_msghandler joydev hpilo hpwdt i2c_core llc lp parport usbhid hid cciss usb_storage
>>> [ 188.653063] Pid: 11, comm: xenwatch Not tainted 2.6.32.21-1 #1
>>> [ 188.653084] Call Trace:
>>> [ 188.653094] <IRQ> [<ffffffff810aa22f>] ? trace_hardirqs_on_caller+0x12f/0x190
>>> [ 188.653127] [<ffffffff8106bf70>] warn_slowpath_common+0x80/0xd0
>>> [ 188.653153] [<ffffffff815f3f50>] ? _spin_unlock_irq+0x30/0x40
>>> [ 188.653177] [<ffffffff8106bfd4>] warn_slowpath_null+0x14/0x20
>>> [ 188.653200] [<ffffffff810aa22f>] trace_hardirqs_on_caller+0x12f/0x190
>>> [ 188.653224] [<ffffffff810aa29d>] trace_hardirqs_on+0xd/0x10
>>> [ 188.653246] [<ffffffff815f3f50>] _spin_unlock_irq+0x30/0x40
>>> [ 188.653270] [<ffffffff813c5055>] add_to_net_schedule_list_tail+0x85/0xd0
>>> [ 188.653294] [<ffffffff813c62a6>] netif_be_int+0x36/0x160
>>> [ 188.653314] [<ffffffff810e1150>] handle_IRQ_event+0x70/0x180
>>> [ 188.653338] [<ffffffff810e3769>] handle_edge_irq+0xc9/0x170
>>> [ 188.653362] [<ffffffff813b8dff>] __xen_evtchn_do_upcall+0x1bf/0x1f0
>>> [ 188.653385] [<ffffffff813b937d>] xen_evtchn_do_upcall+0x3d/0x60
>>> [ 188.653409] [<ffffffff8101647e>] xen_do_hypervisor_callback+0x1e/0x30
>>> [ 188.653431] <EOI> [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
>>> [ 188.653464] [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
>>> [ 188.653487] [<ffffffff813bcee4>] ? xb_write+0x1e4/0x290
>>> [ 188.653507] [<ffffffff813bd95a>] ? xs_talkv+0x6a/0x1f0
>>> [ 188.653526] [<ffffffff813bd968>] ? xs_talkv+0x78/0x1f0
>>> [ 188.653546] [<ffffffff813bdc5d>] ? xs_single+0x4d/0x60
>>> [ 188.653565] [<ffffffff813be592>] ? xenbus_read+0x52/0x80
>>> [ 188.653585] [<ffffffff813c888c>] ? frontend_changed+0x48c/0x770
>>> [ 188.653609] [<ffffffff813bf7fd>] ? xenbus_otherend_changed+0xdd/0x1b0
>>> [ 188.653633] [<ffffffff810111ef>] ? xen_restore_fl_direct_end+0x0/0x1
>>> [ 188.653656] [<ffffffff810ac8c0>] ? lock_release+0xb0/0x230
>>> [ 188.653676] [<ffffffff813bfb70>] ? frontend_changed+0x10/0x20
>>> [ 188.653699] [<ffffffff813bd585>] ? xenwatch_thread+0x55/0x160
>>> [ 188.653723] [<ffffffff810934a0>] ? autoremove_wake_function+0x0/0x40
>>> [ 188.653747] [<ffffffff813bd530>] ? xenwatch_thread+0x0/0x160
>>> [ 188.653770] [<ffffffff81093126>] ? kthread+0x96/0xb0
>>> [ 188.653790] [<ffffffff8101632a>] ? child_rip+0xa/0x20
>>> [ 188.653809] [<ffffffff81015c90>] ? restore_args+0x0/0x30
>>> [ 188.653828] [<ffffffff81016320>] ? child_rip+0x0/0x20
>>> [ 188.653846] ---[ end trace ab2eaae7afa5acdb ]---
>>> [ 195.184915] vif1.0: no IPv6 routers present
>>>
>>> The domU does start and is functional, complete with PCI passthrough.
>>>
>>> kern.log repeats the same information, but precedes it with this:
>>>
>>>
>>> ==> kern.log <==
>>> 2010-09-15T16:29:40.921979-07:00 kaan-18 [ 188.562552] blkback: ring-ref 8, event-channel 73, protocol 1 (x86_64-abi)
>>> 2010-09-15T16:29:40.922202-07:00 kaan-18 [ 188.562647] alloc irq_desc for 481 on node 0
>>> 2010-09-15T16:29:40.922310-07:00 kaan-18 [ 188.562652] alloc kstat_irqs on node 0
>>> 2010-09-15T16:29:40.971943-07:00 kaan-18 [ 188.608807] alloc irq_desc for 480 on node 0
>>> 2010-09-15T16:29:40.972387-07:00 kaan-18 [ 188.608813] alloc kstat_irqs on node 0
>>> 2010-09-15T16:29:41.013027-07:00 kaan-18 [ 188.652651] alloc irq_desc for 479 on node 0
>>> 2010-09-15T16:29:41.013286-07:00 kaan-18 [ 188.652657] alloc kstat_irqs on node 0
>>>
>>> Let me know if there's anything else I can provide.
>>>
>>> -Bruce
>>>
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel