xen-devel

Re: [Xen-devel] xen-unstable pvops 2.6.32.21 kernel/lockdep.c:2323 trace

To: Bruce Edge <bruce.edge@xxxxxxxxx>
Subject: Re: [Xen-devel] xen-unstable pvops 2.6.32.21 kernel/lockdep.c:2323 trace_hardirqs_on_caller+0x12f/0x190()
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Thu, 16 Sep 2010 15:27:34 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 16 Sep 2010 15:28:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTikRHHKppqy-DfM5TJWTSuX4yAAU=nO1FeMMKyNc@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTikRHHKppqy-DfM5TJWTSuX4yAAU=nO1FeMMKyNc@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.9) Gecko/20100907 Fedora/3.1.3-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.3
On 09/15/2010 04:37 PM, Bruce Edge wrote:
> With top of xen-unstable and pvops 2.6.32.x I get this on the dom0
> serial console when I start a linux pvops domU (using the same kernel
> as the dom0):

Do you know what the actual git revision is?  Change 04cc1e6a6a85c4
fixes this problem.

    J
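
The trace shows spin_unlock_irq() being reached from netback's interrupt
path (netif_be_int -> add_to_net_schedule_list_tail); lockdep warns in
trace_hardirqs_on_caller() because interrupts are being unconditionally
re-enabled while still inside a hardirq context. A minimal sketch of that
pattern and the usual irqsave remedy follows; the lock name is illustrative,
and it is an assumption (not verified here) that the change referenced above
takes exactly this approach.

#include <linux/spinlock.h>

/* Sketch only: the names mirror the trace, not the actual netback source. */
static DEFINE_SPINLOCK(net_schedule_list_lock);

/* Problematic pattern: this helper is also reached from the netif_be_int
 * interrupt handler, yet spin_unlock_irq() re-enables interrupts
 * unconditionally, which trips lockdep's hardirq-context check. */
static void add_to_net_schedule_list_tail_buggy(void)
{
	spin_lock_irq(&net_schedule_list_lock);
	/* ... queue the vif for transmit scheduling ... */
	spin_unlock_irq(&net_schedule_list_lock);
}

/* Usual remedy: save and restore the caller's interrupt state instead,
 * so the helper is safe from both process and interrupt context. */
static void add_to_net_schedule_list_tail_fixed(void)
{
	unsigned long flags;

	spin_lock_irqsave(&net_schedule_list_lock, flags);
	/* ... queue the vif for transmit scheduling ... */
	spin_unlock_irqrestore(&net_schedule_list_lock, flags);
}

With the irqsave variants, the unlock path restores whatever interrupt state
the caller had, so being called from the event-channel interrupt no longer
turns interrupts back on in the middle of the handler.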

> 0 kaan-18  ~ #> (XEN) tmem: all pools frozen for all domains
> (XEN) tmem: all pools thawed for all domains
> (XEN) tmem: all pools frozen for all domains
> (XEN) tmem: all pools thawed for all domains
>
> mapping kernel into physical memory
> about to get started...
> [  188.652747] ------------[ cut here ]------------
> [  188.652800] WARNING: at kernel/lockdep.c:2323
> trace_hardirqs_on_caller+0x12f/0x190()
> [  188.652826] Hardware name: ProLiant DL380 G6
> [  188.652844] Modules linked in: xt_physdev osa_mfgdom0 xenfs
> xen_gntdev xen_evtchn ipv6 fbcon tileblit font bitblit softcursor
> xen_pciback radeon tun ttm drm_kms_helper serio_raw ipmi_si bridge drm
> i2c_algo_bit stp ipmi_msghandler joydev hpilo hpwdt i2c_core llc lp
> parport usbhid hid cciss usb_storage
> [  188.653063] Pid: 11, comm: xenwatch Not tainted 2.6.32.21-1 #1
> [  188.653084] Call Trace:
> [  188.653094]  <IRQ>  [<ffffffff810aa22f>] ?
> trace_hardirqs_on_caller+0x12f/0x190
> [  188.653127]  [<ffffffff8106bf70>] warn_slowpath_common+0x80/0xd0
> [  188.653153]  [<ffffffff815f3f50>] ? _spin_unlock_irq+0x30/0x40
> [  188.653177]  [<ffffffff8106bfd4>] warn_slowpath_null+0x14/0x20
> [  188.653200]  [<ffffffff810aa22f>] trace_hardirqs_on_caller+0x12f/0x190
> [  188.653224]  [<ffffffff810aa29d>] trace_hardirqs_on+0xd/0x10
> [  188.653246]  [<ffffffff815f3f50>] _spin_unlock_irq+0x30/0x40
> [  188.653270]  [<ffffffff813c5055>] add_to_net_schedule_list_tail+0x85/0xd0
> [  188.653294]  [<ffffffff813c62a6>] netif_be_int+0x36/0x160
> [  188.653314]  [<ffffffff810e1150>] handle_IRQ_event+0x70/0x180
> [  188.653338]  [<ffffffff810e3769>] handle_edge_irq+0xc9/0x170
> [  188.653362]  [<ffffffff813b8dff>] __xen_evtchn_do_upcall+0x1bf/0x1f0
> [  188.653385]  [<ffffffff813b937d>] xen_evtchn_do_upcall+0x3d/0x60
> [  188.653409]  [<ffffffff8101647e>] xen_do_hypervisor_callback+0x1e/0x30
> [  188.653431]  <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
> [  188.653464]  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
> [  188.653487]  [<ffffffff813bcee4>] ? xb_write+0x1e4/0x290
> [  188.653507]  [<ffffffff813bd95a>] ? xs_talkv+0x6a/0x1f0
> [  188.653526]  [<ffffffff813bd968>] ? xs_talkv+0x78/0x1f0
> [  188.653546]  [<ffffffff813bdc5d>] ? xs_single+0x4d/0x60
> [  188.653565]  [<ffffffff813be592>] ? xenbus_read+0x52/0x80
> [  188.653585]  [<ffffffff813c888c>] ? frontend_changed+0x48c/0x770
> [  188.653609]  [<ffffffff813bf7fd>] ? xenbus_otherend_changed+0xdd/0x1b0
> [  188.653633]  [<ffffffff810111ef>] ? xen_restore_fl_direct_end+0x0/0x1
> [  188.653656]  [<ffffffff810ac8c0>] ? lock_release+0xb0/0x230
> [  188.653676]  [<ffffffff813bfb70>] ? frontend_changed+0x10/0x20
> [  188.653699]  [<ffffffff813bd585>] ? xenwatch_thread+0x55/0x160
> [  188.653723]  [<ffffffff810934a0>] ? autoremove_wake_function+0x0/0x40
> [  188.653747]  [<ffffffff813bd530>] ? xenwatch_thread+0x0/0x160
> [  188.653770]  [<ffffffff81093126>] ? kthread+0x96/0xb0
> [  188.653790]  [<ffffffff8101632a>] ? child_rip+0xa/0x20
> [  188.653809]  [<ffffffff81015c90>] ? restore_args+0x0/0x30
> [  188.653828]  [<ffffffff81016320>] ? child_rip+0x0/0x20
> [  188.653846] ---[ end trace ab2eaae7afa5acdb ]---
> [  195.184915] vif1.0: no IPv6 routers present
>
> The domU does start and is functional, complete with PCI passthrough.
>
> kern.log repeats the same information, but precedes it with this:
>
>
> ==> kern.log <==
> 2010-09-15T16:29:40.921979-07:00 kaan-18 [  188.562552] blkback:
> ring-ref 8, event-channel 73, protocol 1 (x86_64-abi)
> 2010-09-15T16:29:40.922202-07:00 kaan-18 [  188.562647]   alloc
> irq_desc for 481 on node 0
> 2010-09-15T16:29:40.922310-07:00 kaan-18 [  188.562652]   alloc
> kstat_irqs on node 0
> 2010-09-15T16:29:40.971943-07:00 kaan-18 [  188.608807]   alloc
> irq_desc for 480 on node 0
> 2010-09-15T16:29:40.972387-07:00 kaan-18 [  188.608813]   alloc
> kstat_irqs on node 0
> 2010-09-15T16:29:41.013027-07:00 kaan-18 [  188.652651]   alloc
> irq_desc for 479 on node 0
> 2010-09-15T16:29:41.013286-07:00 kaan-18 [  188.652657]   alloc
> kstat_irqs on node 0
>
> Let me know if there's anything else I can provide.
>
> -Bruce
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel