
[Xen-devel] xen: migration: guest kernel gets stuck because of too-early swapping



Hi all:
  We found that guests such as RHEL6 occasionally get stuck after 
migration.
  The stack of the stuck guest kernel is as follows:
PID: 18 TASK: ffff88007de61500 CPU: 1 COMMAND: "xenwatch"
#0 [ffff88007de62e40] schedule at ffffffff8150d692
#1 [ffff88007de62f08] io_schedule at ffffffff8150de73
#2 [ffff88007de62f28] get_request_wait at ffffffff8125e4c8
#3 [ffff88007de62fb8] blk_queue_bio at ffffffff8125e60d
#4 [ffff88007de63038] generic_make_request at ffffffff8125ccce
#5 [ffff88007de63108] submit_bio at ffffffff8125d02d
#6 [ffff88007de63158] swap_writepage at ffffffff81154374
#7 [ffff88007de63188] pageout.clone.2 at ffffffff8113205b
#8 [ffff88007de63238] shrink_page_list.clone.3 at ffffffff811326e5
#9 [ffff88007de63388] shrink_inactive_list at ffffffff81133263
#10 [ffff88007de63538] shrink_mem_cgroup_zone at ffffffff81133afe
#11 [ffff88007de63608] shrink_zone at ffffffff81133dc3
#12 [ffff88007de63678] do_try_to_free_pages at ffffffff81133f25
#13 [ffff88007de63718] try_to_free_pages at ffffffff811345f2
#14 [ffff88007de637b8] __alloc_pages_nodemask at ffffffff8112be48
#15 [ffff88007de638f8] kmem_getpages at ffffffff811669d2
#16 [ffff88007de63928] fallback_alloc at ffffffff811675ea
#17 [ffff88007de639a8] ____cache_alloc_node at ffffffff81167369
#18 [ffff88007de63a08] kmem_cache_alloc at ffffffff811682eb
#19 [ffff88007de63a48] idr_pre_get at ffffffff812786c0
#20 [ffff88007de63a78] ida_pre_get at ffffffff8127870c
#21 [ffff88007de63a98] proc_register at ffffffff811efc71
#22 [ffff88007de63ae8] proc_mkdir_mode at ffffffff811f0082
#23 [ffff88007de63b18] proc_mkdir at ffffffff811f00b6
#24 [ffff88007de63b28] register_handler_proc at ffffffff810e54fb
#25 [ffff88007de63bf8] __setup_irq at ffffffff810e2594
#26 [ffff88007de63c48] request_threaded_irq at ffffffff810e2e43
#27 [ffff88007de63ca8] serial8250_startup at ffffffff81356fac
#28 [ffff88007de63cf8] uart_resume_port at ffffffff813547be
#29 [ffff88007de63d78] serial8250_resume_port at ffffffff813567b6
#30 [ffff88007de63d98] serial_pnp_resume at ffffffff81358a58
#31 [ffff88007de63da8] pnp_bus_resume at ffffffff81311853
#32 [ffff88007de63dc8] dpm_resume_end at ffffffff813648a8
#33 [ffff88007de63e28] shutdown_handler at ffffffff81319351
#34 [ffff88007de63e68] xenwatch_thread at ffffffff8131ab1a
#35 [ffff88007de63ee8] kthread at ffffffff81096916
#36 [ffff88007de63f48] kernel_thread at ffffffff8100c0ca

  We suspect the reason is the following:
  1. Guests with 3.x kernels, such as RHEL6, that are NOT configured with 
CONFIG_PREEMPT do not freeze/thaw processes before resuming their disks, so 
kernel threads may already be active before the disks are available. Kernel 
threads may allocate memory, and such allocations can occasionally trigger 
swapping. Swapping before the disks are ready can leave the kernel stuck. 
This problem is fixed by: 
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?h=linux-3.16.y&id=2edbf3c6af0f5f1f9d2ef00a15339c10beaff405
(a rough sketch of what that fix does is included after point 2 below)
  2. However, even the xenwatch kernel thread itself needs to allocate memory, 
and any attempt to allocate memory before the disk is resumed may cause the 
deadlock shown above.
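
  For reference, here is a minimal sketch in C of what the fix linked in 
point 1 appears to do, based on the description above: call 
freeze_processes()/thaw_processes() unconditionally around the Xen 
suspend/resume path (drivers/xen/manage.c) rather than only under 
CONFIG_PREEMPT. The function name do_suspend_sketch and the exact control 
flow are illustrative assumptions, not the upstream code:

/*
 * Rough sketch only, not the exact upstream patch: freeze processes
 * before suspending devices and thaw them only after dpm_resume_end()
 * has brought the disks back, so frozen tasks cannot allocate memory
 * (and therefore swap) while the block devices are still unavailable.
 */
#include <linux/freezer.h>
#include <linux/pm.h>
#include <linux/printk.h>

static void do_suspend_sketch(void)
{
	int err;

	/* Previously guarded by #ifdef CONFIG_PREEMPT; the fix drops that guard. */
	err = freeze_processes();
	if (err) {
		pr_err("%s: freeze_processes failed %d\n", __func__, err);
		return;
	}

	err = dpm_suspend_start(PMSG_FREEZE);
	if (err) {
		pr_err("%s: dpm_suspend_start failed %d\n", __func__, err);
		goto out_thaw;
	}

	/* ... hypervisor suspend/resume happens here (omitted) ... */

	/* Resume devices, including xen-blkfront, before anything can swap again. */
	dpm_resume_end(PMSG_RESUME);

out_thaw:
	/* Only now let the frozen tasks run (and possibly swap) again. */
	thaw_processes();
}

  Note that this only keeps userspace and freezable kernel threads from 
swapping; as point 2 says, xenwatch itself still allocates memory during 
resume, so a sketch like this does not by itself rule out the deadlock above.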

  So, how can the kernel-stuck problem caused by swapping before the disks are 
resumed be fixed? Thanks in advance.


ZhangBo(Oscar)
