
Re: [Xen-devel] pv guests die after failed migration



On Sat, 2011-10-15 at 02:18 +0100, Andreas Olsowski wrote:
> It seems this still has not made it into 4.1-testing.

I'm afraid I've not had time to "figure out how to automatically select
which guests are capable of a cooperative resume and which are not." so
it hasn't been fixed in xen-unstable either AFAIK.

I'm also still interested in an answer to the question I asked in the
mail you just replied to.

> 
> 
> 
> root@memoryana:~# xl info |grep xen_extra
> xen_extra              : .2-rc3
> 
> root@memoryana:~# xl -vv migrate testmig netcatarina
> migration target: Ready to receive domain.
> Saving to migration stream new xl format (info 0x0/0x0/365)
> Loading new save file incoming migration stream (new xl fmt info 0x0/0x0/365)
>   Savefile contains xl domain config
> xc: detail: Had 0 unexplained entries in p2m table
> xc: Saving memory: iter 0 (last sent 0 skipped 0): 133120/133120  100%
> xc: detail: delta 8283ms, dom0 86%, target 0%, sent 516Mb/s, dirtied 2Mb/s 508 pages
> xc: Saving memory: iter 1 (last sent 130590 skipped 482): 133120/133120  100%
> xc: detail: delta 25ms, dom0 60%, target 0%, sent 665Mb/s, dirtied 44Mb/s 34 pages
> xc: Saving memory: iter 2 (last sent 508 skipped 0): 133120/133120  100%
> xc: detail: Start last iteration
> xc: detail: SUSPEND shinfo 000bee3c
> xc: detail: delta 204ms, dom0 3%, target 0%, sent 5Mb/s, dirtied 26Mb/s 162 pages
> xc: Saving memory: iter 3 (last sent 34 skipped 0): 133120/133120  100%
> xc: detail: delta 1ms, dom0 0%, target 0%, sent 5308Mb/s, dirtied 5308Mb/s 162 pages
> xc: detail: Total pages sent= 131294 (0.99x)
> xc: detail: (of which 0 were fixups)
> xc: detail: All memory is saved
> xc: detail: Save exit rc=0
> libxl: error: libxl.c:900:validate_virtual_disk failed to stat /dev/xen-data/testmig-root: No such file or directory
> cannot add disk 0 to domain: -6
> migration target: Domain creation failed (code -3).
> libxl: error: libxl_utils.c:408:libxl_read_exactly file/stream truncated reading ready message from migration receiver stream
> libxl: info: libxl_exec.c:72:libxl_report_child_exitstatus migration target process [13420] exited with error status 3
> Migration failed, resuming at sender.
> root@memoryana:~# xl console testmig
> PM: freeze of devices complete after 0.099 msecs
> PM: late freeze of devices complete after 0.025 msecs
> ------------[ cut here ]------------
> kernel BUG at drivers/xen/events.c:1466!
> invalid opcode: 0000 [#1] SMP
> CPU 0
> Modules linked in:
> 
> Pid: 6, comm: migration/0 Not tainted 3.0.4-xenU #6
> RIP: e030:[<ffffffff8140d574>]  [<ffffffff8140d574>] xen_irq_resume+0x224/0x370
> RSP: e02b:ffff88001f9fbce0  EFLAGS: 00010082
> RAX: ffffffffffffffef RBX: 0000000000000000 RCX: 0000000000000000
> RDX: ffff88001f809ea8 RSI: ffff88001f9fbd00 RDI: 0000000000000001
> RBP: 0000000000000010 R08: ffffffff81859a00 R09: 0000000000000000
> R10: 0000000000000000 R11: 09f911029d74e35b R12: 0000000000000000
> R13: 000000000000f0a0 R14: 0000000000000000 R15: ffff88001f9fbd00
> FS:  00007f49f928b700(0000) GS:ffff88001fec6000(0000) knlGS:0000000000000000
> CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 00007f89fb1a89f0 CR3: 000000001e4cf000 CR4: 0000000000002660
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process migration/0 (pid: 6, threadinfo ffff88001f9fa000, task ffff88001f9f7170)
> Stack:
>   ffff88001f9fbd34 ffff88001f9fbd54 0000000000000003 000000000000f100
>   0000000000000000 0000000000000003 0000000000000000 0000000000000003
>   ffff88001fa6fdb0 ffffffff8140aa20 ffffffff81859a08 0000000000000000
> Call Trace:
>   [<ffffffff8140aa20>] ? gnttab_map+0x100/0x130
>   [<ffffffff815c2765>] ? _raw_spin_lock+0x5/0x10
>   [<ffffffff81083e01>] ? cpu_stopper_thread+0x101/0x190
>   [<ffffffff8140e1f5>] ? xen_suspend+0x75/0xa0
>   [<ffffffff81083f1b>] ? stop_machine_cpu_stop+0x8b/0xd0
>   [<ffffffff81083e90>] ? cpu_stopper_thread+0x190/0x190
>   [<ffffffff81083dd0>] ? cpu_stopper_thread+0xd0/0x190
>   [<ffffffff815c0870>] ? schedule+0x270/0x6c0
>   [<ffffffff81083d00>] ? copy_pid_ns+0x2a0/0x2a0
>   [<ffffffff81065846>] ? kthread+0x96/0xa0
>   [<ffffffff815c4024>] ? kernel_thread_helper+0x4/0x10
>   [<ffffffff815c3436>] ? int_ret_from_sys_call+0x7/0x1b
>   [<ffffffff815c2be1>] ? retint_restore_args+0x5/0x6
>   [<ffffffff815c4020>] ? gs_change+0x13/0x13
> Code: e8 f2 e9 ff ff 8b 44 24 10 44 89 e6 89 c7 e8 64 e8 ff ff ff c3 83 fb 04 0f 84 95 fe ff ff 4a 8b 14 f5 20 95 85 81 e9 68 ff ff ff <0f> 0b eb fe 0f 0b eb fe 48 8b 1d fd 00 42 00 4c 8d 6c 24 20 eb
> RIP  [<ffffffff8140d574>] xen_irq_resume+0x224/0x370
>   RSP <ffff88001f9fbce0>
> ---[ end trace 67ddba38000aae42 ]---
> 
> 
> 
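For what it's worth, the immediate trigger in the log above is libxl's
stat of the disk backend on the receiver (libxl.c:900,
validate_virtual_disk), which only fails after the guest has already
been suspended on the sender. Checking the target's disks before
invoking "xl migrate" avoids entering the broken resume path at all.
A rough sketch; validate_disk is a hypothetical helper, not part of
xl/libxl:

```shell
#!/bin/sh
# Sketch of the check libxl performs on the receiving host: stat the
# disk backend and fail early if it is absent. Running this against
# the target before "xl migrate" would have caught the missing
# /dev/xen-data/testmig-root before the guest was suspended.
# (validate_disk is a hypothetical helper, not part of xl/libxl.)

validate_disk() {
    # Mirrors libxl's validate_virtual_disk: the backing store must exist
    if [ ! -e "$1" ]; then
        echo "validate_disk: $1: No such file or directory" >&2
        return 1
    fi
    return 0
}

# Example use: run the same check on the target over ssh, then migrate:
#   ssh netcatarina test -e /dev/xen-data/testmig-root \
#       && xl -vv migrate testmig netcatarina
```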



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

