
[Xen-devel] Re: 2.6.28 domU panics in blk_invoke_request_fn()



Christopher S. Aker wrote:
Below are two similar panics we've captured thus far with some light testing of 2.6.28 domUs across our Xen cluster. The kernel binary and associated files can be found here:

How frequently does this occur?

Did it just start appearing in 2.6.28, while .27.x is OK? Does it happen often enough that you could practically bisect?

Could you compile the kernel with CONFIG_DEBUG_BUGVERBOSE enabled so that it prints some source/line info? The location of the BUG isn't obvious.
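(Concretely, that means rebuilding the domU kernel with the following set; the option name and menu location are as in the mainline Kconfig, a sketch rather than your exact .config:)

```
# Kernel hacking  --->  [*] Verbose BUG() reporting (adds 70K)
CONFIG_DEBUG_BUGVERBOSE=y
```

With that enabled, the "verbose debug info unavailable" line in the oops is replaced by the file and line of the BUG_ON that fired.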

What's your dom0?

   J

http://theshore.net/~caker/xen/BUGS/2.6.28/

------------[ cut here ]------------
Kernel BUG at c03eed00 [verbose debug info unavailable]
invalid opcode: 0000 [#1] SMP
last sysfs file:
Modules linked in:

Pid: 0, comm: swapper Not tainted (2.6.28-linode15 #1)
EIP: 0061:[<c03eed00>] EFLAGS: 00010046 CPU: 0
EIP is at do_blkif_request+0x2e0/0x360
EAX: 00000001 EBX: 00000000 ECX: d51650c0 EDX: d529c3f0
ESI: d5984288 EDI: d59842c8 EBP: 000003b8 ESP: c06cde20
 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
Process swapper (pid: 0, ti=c06cc000 task=c0671340 task.ti=c06cc000)
Stack:
 00000005 d5984288 00000288 d5874ae0 d5940000 d53bf3c4 00000000 00000001
 d5940000 00000002 00000006 d5984000 00000000 d51650c0 d529c3cc ffffffff
 d5874ae0 d5940000 00000003 00000001 c03a7165 d5940000 c03eed96 00000000
Call Trace:
 [<c03a7165>] blk_invoke_request_fn+0x95/0x100
 [<c03eed96>] kick_pending_request_queues+0x16/0x30
 [<c03eef3d>] blkif_interrupt+0x18d/0x1d0
 [<c0159500>] handle_IRQ_event+0x30/0x60
 [<c015b418>] handle_level_irq+0x78/0xf0
 [<c010ab17>] do_IRQ+0x77/0x90
 [<c0105d0a>] check_events+0x8/0xe
 [<c03c8e08>] xen_evtchn_do_upcall+0xe8/0x150
 [<c01091c7>] xen_do_upcall+0x7/0xc
 [<c01013a7>] _stext+0x3a7/0x1000
 [<c010547f>] xen_safe_halt+0xf/0x20
 [<c0103c50>] xen_idle+0x20/0x40
 [<c0106d2f>] cpu_idle+0x5f/0xb0
Code: 2c 8d 54 03 40 8d 44 0e 54 b9 6c 00 00 00 e8 68 a5 fc ff 8b 44 24 3c e8 cf 92 fd ff 83 44 24 18 01 e9 40 fd ff ff 0f 0b eb fe 90 <0f> 0b eb fe 8b 44 24 20 ba 10 ea 3e c0 8b 4c 24 20 c7 04 24 0b
EIP: [<c03eed00>] do_blkif_request+0x2e0/0x360 SS:ESP 0069:c06cde20
Kernel panic - not syncing: Fatal exception in interrupt
------------[ cut here ]------------
WARNING: at kernel/smp.c:333 smp_call_function_mask+0x1cb/0x1d0()
Modules linked in:
Pid: 0, comm: swapper Tainted: G      D    2.6.28-linode15 #1
Call Trace:
 [<c0128aaf>] warn_on_slowpath+0x5f/0x90
 [<c03b92c6>] memmove+0x36/0x40
 [<c03dd10a>] scrup+0x7a/0xe0
 [<c0140967>] atomic_notifier_call_chain+0x17/0x20
 [<c03dd18f>] notify_update+0x1f/0x30
 [<c03dd41a>] vt_console_print+0x20a/0x2d0
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0105d0a>] check_events+0x8/0xe
 [<c0105c73>] xen_restore_fl_direct_end+0x0/0x1
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0105d0a>] check_events+0x8/0xe
 [<c0105c73>] xen_restore_fl_direct_end+0x0/0x1
 [<c01295b0>] vprintk+0x170/0x350
 [<c014a45b>] smp_call_function_mask+0x1cb/0x1d0
 [<c0106000>] stop_self+0x0/0x30
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0105d0a>] check_events+0x8/0xe
 [<c0105c73>] xen_restore_fl_direct_end+0x0/0x1
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c03df146>] do_unblank_screen+0x16/0x130
 [<c014a474>] smp_call_function+0x14/0x20
 [<c0128b3e>] panic+0x4e/0x100
 [<c010ac6c>] oops_end+0x8c/0xa0
 [<c0109b80>] do_invalid_op+0x0/0xa0
 [<c0109bff>] do_invalid_op+0x7f/0xa0
 [<c03eed00>] do_blkif_request+0x2e0/0x360
 [<c011689e>] pvclock_clocksource_read+0x4e/0xe0
 [<c01059b3>] get_abs_timeout+0x13/0x30
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0105d0a>] check_events+0x8/0xe
 [<c0105c73>] xen_restore_fl_direct_end+0x0/0x1
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c056245a>] error_code+0x72/0x78
 [<c03eed00>] do_blkif_request+0x2e0/0x360
 [<c03a7165>] blk_invoke_request_fn+0x95/0x100
 [<c03eed96>] kick_pending_request_queues+0x16/0x30
 [<c03eef3d>] blkif_interrupt+0x18d/0x1d0
 [<c0159500>] handle_IRQ_event+0x30/0x60
 [<c015b418>] handle_level_irq+0x78/0xf0
 [<c010ab17>] do_IRQ+0x77/0x90
 [<c0105d0a>] check_events+0x8/0xe
 [<c03c8e08>] xen_evtchn_do_upcall+0xe8/0x150
 [<c01091c7>] xen_do_upcall+0x7/0xc
 [<c01013a7>] _stext+0x3a7/0x1000
 [<c010547f>] xen_safe_halt+0xf/0x20
 [<c0103c50>] xen_idle+0x20/0x40
 [<c0106d2f>] cpu_idle+0x5f/0xb0
---[ end trace d5f11e988eae6396 ]---



And this one is from another user on a different host:

------------[ cut here ]------------
Kernel BUG at c03eed00 [verbose debug info unavailable]
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/class/net/tun1/type
Modules linked in: dahdi_dummy dahdi

Pid: 157, comm: kswapd0 Not tainted (2.6.28-linode15 #1)
EIP: 0061:[<c03eed00>] EFLAGS: 00010046 CPU: 0
EIP is at do_blkif_request+0x2e0/0x360
EAX: 00000001 EBX: 00000000 ECX: c1bc7240 EDX: d6132d20
ESI: d5966000 EDI: d5966040 EBP: 00000165 ESP: d61e1b90
 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
Process kswapd0 (pid: 157, ti=d61e0000 task=d6106ef0 task.ti=d61e0000)
Stack:
 00000005 d5966000 00000000 d61da9e0 d596e000 d5488d0c 00000000 00000018
 d596e000 00000002 00000000 d5966000 00000000 c1bc7240 d613294c ffffffff
 d61da9e0 d596e000 00000006 00000018 c03a7165 d596e000 c03eed96 00000000
Call Trace:
 [<c03a7165>] blk_invoke_request_fn+0x95/0x100
 [<c03eed96>] kick_pending_request_queues+0x16/0x30
 [<c03eef3d>] blkif_interrupt+0x18d/0x1d0
 [<c03a514d>] elv_next_request+0x1d/0x170
 [<c0159500>] handle_IRQ_event+0x30/0x60
 [<c015b418>] handle_level_irq+0x78/0xf0
 [<c010ab17>] do_IRQ+0x77/0x90
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c03c8e08>] xen_evtchn_do_upcall+0xe8/0x150
 [<c01091c7>] xen_do_upcall+0x7/0xc
 [<c0101227>] _stext+0x227/0x1000
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0182687>] kmem_cache_alloc+0x57/0xb0
 [<c016140d>] mempool_alloc+0x2d/0xe0
 [<c016140d>] mempool_alloc+0x2d/0xe0
 [<c01774dc>] try_to_unmap_one+0x8c/0x240
 [<c01a737b>] bvec_alloc_bs+0x7b/0x140
 [<c01a7641>] bio_alloc_bioset+0x51/0xe0
 [<c01a773b>] bio_alloc+0xb/0x20
 [<c017ae20>] get_swap_bio+0x20/0xa0
 [<c017afdb>] swap_writepage+0x4b/0xc0
 [<c017b050>] end_swap_bio_write+0x0/0x80
 [<c01692cd>] shrink_page_list+0x51d/0x650
 [<c016550f>] determine_dirtyable_memory+0x1f/0x90
 [<c0165598>] get_dirty_limits+0x18/0x2e0
 [<c0169837>] shrink_zone+0x437/0x860
 [<c0142117>] getnstimeofday+0x37/0xe0
 [<c0562025>] _spin_lock+0x5/0x10
 [<c0280215>] nfs_access_cache_shrinker+0xa5/0x1f0
 [<c01b9e59>] mb_cache_shrink_fn+0x59/0x100
 [<c016a7f8>] kswapd+0x4e8/0x500
 [<c0168220>] isolate_pages_global+0x0/0x220
 [<c013c3c0>] autoremove_wake_function+0x0/0x40
 [<c011ec00>] complete+0x40/0x60
 [<c016a310>] kswapd+0x0/0x500
 [<c013c0a2>] kthread+0x42/0x70
 [<c013c060>] kthread+0x0/0x70
 [<c0109177>] kernel_thread_helper+0x7/0x10
Code: 2c 8d 54 03 40 8d 44 0e 54 b9 6c 00 00 00 e8 68 a5 fc ff 8b 44 24 3c e8 cf 92 fd ff 83 44 24 18 01 e9 40 fd ff ff 0f 0b eb fe 90 <0f> 0b eb fe 8b 44 24 20 ba 10 ea 3e c0 8b 4c 24 20 c7 04 24 0b
EIP: [<c03eed00>] do_blkif_request+0x2e0/0x360 SS:ESP 0069:d61e1b90
Kernel panic - not syncing: Fatal exception in interrupt
------------[ cut here ]------------
WARNING: at kernel/smp.c:333 smp_call_function_mask+0x1cb/0x1d0()
Modules linked in: dahdi_dummy dahdi
Pid: 157, comm: kswapd0 Tainted: G      D    2.6.28-linode15 #1
Call Trace:
 [<c0128aaf>] warn_on_slowpath+0x5f/0x90
 [<c03b92c6>] memmove+0x36/0x40
 [<c03dd10a>] scrup+0x7a/0xe0
 [<c0140967>] atomic_notifier_call_chain+0x17/0x20
 [<c03dd18f>] notify_update+0x1f/0x30
 [<c03dd41a>] vt_console_print+0x20a/0x2d0
 [<c0562183>] _spin_lock_irqsave+0x33/0x50
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c01291dc>] release_console_sem+0x19c/0x1e0
 [<c01295b0>] vprintk+0x170/0x350
 [<c014a45b>] smp_call_function_mask+0x1cb/0x1d0
 [<c0106000>] stop_self+0x0/0x30
 [<c0562183>] _spin_lock_irqsave+0x33/0x50
 [<c01324a7>] lock_timer_base+0x27/0x60
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c03df146>] do_unblank_screen+0x16/0x130
 [<c014a474>] smp_call_function+0x14/0x20
 [<c0128b3e>] panic+0x4e/0x100
 [<c010ac6c>] oops_end+0x8c/0xa0
 [<c0109b80>] do_invalid_op+0x0/0xa0
 [<c0109bff>] do_invalid_op+0x7f/0xa0
 [<c03eed00>] do_blkif_request+0x2e0/0x360
 [<c01055d5>] get_runstate_snapshot+0x75/0x90
 [<c0105a2f>] xen_sched_clock+0x1f/0x80
 [<c014135b>] __update_sched_clock+0x2b/0x140
 [<c011689e>] pvclock_clocksource_read+0x4e/0xe0
 [<c0562183>] _spin_lock_irqsave+0x33/0x50
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c056245a>] error_code+0x72/0x78
 [<c03eed00>] do_blkif_request+0x2e0/0x360
 [<c03a7165>] blk_invoke_request_fn+0x95/0x100
 [<c03eed96>] kick_pending_request_queues+0x16/0x30
 [<c03eef3d>] blkif_interrupt+0x18d/0x1d0
 [<c03a514d>] elv_next_request+0x1d/0x170
 [<c0159500>] handle_IRQ_event+0x30/0x60
 [<c015b418>] handle_level_irq+0x78/0xf0
 [<c010ab17>] do_IRQ+0x77/0x90
 [<c05621b3>] _spin_unlock_irqrestore+0x13/0x20
 [<c03c8e08>] xen_evtchn_do_upcall+0xe8/0x150
 [<c01091c7>] xen_do_upcall+0x7/0xc
 [<c0101227>] _stext+0x227/0x1000
 [<c0105427>] xen_force_evtchn_callback+0x17/0x30
 [<c0182687>] kmem_cache_alloc+0x57/0xb0
 [<c016140d>] mempool_alloc+0x2d/0xe0
 [<c016140d>] mempool_alloc+0x2d/0xe0
 [<c01774dc>] try_to_unmap_one+0x8c/0x240
 [<c01a737b>] bvec_alloc_bs+0x7b/0x140
 [<c01a7641>] bio_alloc_bioset+0x51/0xe0
 [<c01a773b>] bio_alloc+0xb/0x20
 [<c017ae20>] get_swap_bio+0x20/0xa0
 [<c017afdb>] swap_writepage+0x4b/0xc0
 [<c017b050>] end_swap_bio_write+0x0/0x80
 [<c01692cd>] shrink_page_list+0x51d/0x650
 [<c016550f>] determine_dirtyable_memory+0x1f/0x90
 [<c0165598>] get_dirty_limits+0x18/0x2e0
 [<c0169837>] shrink_zone+0x437/0x860
 [<c0142117>] getnstimeofday+0x37/0xe0
 [<c0562025>] _spin_lock+0x5/0x10
 [<c0280215>] nfs_access_cache_shrinker+0xa5/0x1f0
 [<c01b9e59>] mb_cache_shrink_fn+0x59/0x100
 [<c016a7f8>] kswapd+0x4e8/0x500
 [<c0168220>] isolate_pages_global+0x0/0x220
 [<c013c3c0>] autoremove_wake_function+0x0/0x40
 [<c011ec00>] complete+0x40/0x60
 [<c016a310>] kswapd+0x0/0x500
 [<c013c0a2>] kthread+0x42/0x70
 [<c013c060>] kthread+0x0/0x70
 [<c0109177>] kernel_thread_helper+0x7/0x10
---[ end trace 505b292c57e00b05 ]---

-Chris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
