Re: [Xen-devel] v3.10-rc0 regressions. HELP!
On Wed, May 08, 2013 at 04:26:08PM -0400, Konrad Rzeszutek Wilk wrote:
> I am not able to see these with v3.9, but with v3.10 I can easily see them.
>
> And I can only see them when I build the kernel with these options:
>
> CONFIG_DEBUG_MUTEXES=y
> CONFIG_DEBUG_LOCK_ALLOC=y
> CONFIG_PROVE_LOCKING=y
> CONFIG_DEBUG_SPINLOCK_SLEEP=y
>
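For what it's worth, CONFIG_DEBUG_SPINLOCK_SLEEP (renamed
CONFIG_DEBUG_ATOMIC_SLEEP in later kernels, if I remember right) is the
option that arms the might_sleep() checks behind the "scheduling while
atomic" splats below, and CONFIG_PROVE_LOCKING/CONFIG_DEBUG_LOCK_ALLOC
give the "N locks held by ..." reports. As a minimal sketch of what the
checker catches -- a hypothetical test module, not anything from this
thread -- calling a sleeping primitive such as usleep_range() while a
spinlock is held is enough to trigger the same BUG:

/*
 * demo.c: hypothetical module that trips the atomic-sleep checker.
 * usleep_range() may schedule, and spin_lock_irqsave() puts us in
 * atomic context, so loading this with the debug options above
 * enabled prints "BUG: scheduling while atomic".
 */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/delay.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init demo_init(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	usleep_range(100, 200);		/* sleeps while atomic -> BUG */
	spin_unlock_irqrestore(&demo_lock, flags);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
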
> Attached is the full serial log, but here are the excerpts:
>
> (XEN) HVM1: 130MB medium detected
> (XEN) HVM1: Booting from 0000:7c00
> [ 182.836965] BUG: scheduling while atomic: qemu-dm/3621/0x00000101
> [ 182.863930] no locks held by qemu-dm/3621.
> [ 182.888475] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> libcrc32c crc32c nouveau mxm_wmi radeon ttm sg sr_mod sd_mod cdrom ahci
> libahci mperf crc32c_intel libata scsi_mod fbcon tileblit xen_blkfront
> xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xenfs xen_privcmd
> [ 183.012005] CPU: 0 PID: 3621 Comm: qemu-dm Not tainted
> 3.9.0upstream-10936-g51a26ae #1
> [ 183.042583] Hardware name: LENOVO ThinkServer TS130/ , BIOS
> 9HKT47AUS 01/10/2012
> [ 183.073531] 0000000000000000 ffff88007fa03c38 ffffffff8169d092
> ffff88007fa03c58
> [ 183.104037] ffffffff810c23d5 ffff88007fa14b00 ffff88007fa14b00
> ffff88007fa03ce8
> [ 183.134392] ffffffff8169f16f 000000010e4341c0 ffff880012405fd8
> ffff880012404000
> [ 183.164498] Call Trace:
> [ 183.189376] <IRQ> [<ffffffff8169d092>] dump_stack+0x19/0x1b
> [ 183.217888] [<ffffffff810c23d5>] __schedule_bug+0x65/0x90
> [ 183.246280] [<ffffffff8169f16f>] __schedule+0x81f/0x840
> [ 183.274147] [<ffffffff8169f254>] schedule+0x24/0x70
> [ 183.301306] [<ffffffff8169dfb0>] schedule_hrtimeout_range_clock+0xc0/0x160
> [ 183.330515] [<ffffffff810b98f0>] ? update_rmtp+0x80/0x80
> [ 183.357663] [<ffffffff810baaff>] ? hrtimer_start_range_ns+0xf/0x20
> [ 183.385601] [<ffffffff8169e05e>] schedule_hrtimeout_range+0xe/0x10
> [ 183.413258] [<ffffffff8109e18b>] usleep_range+0x3b/0x40
> [ 183.439494] [<ffffffffa007fc6d>] e1000_irq_enable+0x1ad/0x1e0 [e1000e]
> [ 183.467222] [<ffffffffa007fe18>] e1000e_poll+0x178/0x2e0 [e1000e]
> [ 183.494288] [<ffffffff81540b78>] ? net_rx_action+0xd8/0x280
> [ 183.520433] [<ffffffff81540bd5>] net_rx_action+0x135/0x280
> [ 183.546316] [<ffffffff81096bd9>] __do_softirq+0x119/0x2d0
> [ 183.571792] [<ffffffff81096efd>] irq_exit+0xed/0x100
> [ 183.596388] [<ffffffff813b742f>] xen_evtchn_do_upcall+0x2f/0x40
> [ 183.621833] [<ffffffff816aac1e>] xen_do_hypervisor_callback+0x1e/0x30
> [ 183.647781] <EOI> [<ffffffff8100122a>] ?
> xen_hypercall_xen_version+0xa/0x20
> [ 183.674269] [<ffffffff8100122a>] ? xen_hypercall_xen_version+0xa/0x20
> [ 183.699930] [<ffffffff810420ed>] ? xen_force_evtchn_callback+0xd/0x10
> [ 183.725964] [<ffffffff81042a22>] ? check_events+0x12/0x20
> [ 183.750676] [<ffffffff810429c9>] ? xen_irq_enable_direct_rel[
> 183.776451] [<ffffffff816a970c>] ? system_call_after_swapgs+0x19/0x60
> [ 183.802194] NOHZ: local_softirq_pending 282
> [ 183.827712] sh (3751) used greatest stack depth: 2344 [ 184.035913] BUG:
> scheduling while atomic: qemu-dm/3621/0x00000101
> [ 184.035916] BUG: scheduling while atomic: sshd/3582/0x00000604
> [ 184.035918] 7 locks held by sshd/3582:
> [ 184.035924] #0: (sk_lock-AF_INET){+.+.+.}, at: [<ffffffff8159de57>]
> tcp_sendmsg[ 184.035927] #1: (rcu_read_lock){.+.+..}, at:
> [<ffffffff815916d0>] ip_queue_xmit+0x0/0x510
> [ 184.035930] #2: (rcu_read_lock_bh){.+....}, at: [<ffffffff81590ecb>]
> ip_finish_output2+0x7b/0x3e0
> [ 184.035933] #3: (r..}, at: [<ffffffff815418b0>] dev_queue_xmit+0x0/0x690
> [ 184.035937] #4: (rcu_read_lock){.+.+..}, at: [<ffffffff81649640>]
> br_dev_xmit+0x0/0x1b0
> [ 184.035939] #5: (rcu_read_lock_bh){.+....}, at: [<ffffffff815418b0>]
> dev_queue_xmit+0x0/0x690
> [ 184.035943] #6: (_xmit_ETHER#2){+.-...}, at: [<ffffffff815607b7>]
> sch_direct_xmit+0xb7/0x280
>
> And so on. It keeps happening while QEMU runs, and at some point the
> kernel crashes due to corruption:
And this patch https://lkml.org/lkml/2013/5/8/374 fixes it!
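
For anyone reading the trace: the splat fires because e1000e_poll()
(NAPI, i.e. softirq context -- note the __do_softirq/net_rx_action
frames, and the softirq bits set in the 0x00000101 preempt_count)
reaches e1000_irq_enable(), which calls usleep_range(), which
schedules. Sleeping in softirq context is forbidden, and that is
exactly what the debug options above flag. The usual fix pattern for
this class of bug -- a sketch only, with a hypothetical helper name;
see the lkml link for the actual change -- is to busy-wait or defer
instead of sleeping when the code can run in atomic context:

#include <linux/delay.h>
#include <linux/hardirq.h>

/*
 * Hypothetical helper illustrating the common fix pattern: a short
 * delay that is safe from both process and atomic (e.g. NAPI poll)
 * context. Not the actual e1000e change -- see the lkml link above.
 */
static void safe_short_delay(unsigned int usecs)
{
	if (in_atomic())
		udelay(usecs);		/* busy-wait: never sleeps */
	else
		usleep_range(usecs, usecs * 2);	/* may sleep: process context only */
}
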
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel