
Re: [Xen-devel] [libvirt test] 92667: regressions - FAIL



On 04/27/2016 04:22 PM, Andrew Cooper wrote:
> On 27/04/2016 22:58, Jim Fehlig wrote:
>> On 04/25/2016 05:26 AM, osstest service owner wrote:
>>> flight 92667 libvirt real [real]
>>> http://logs.test-lab.xenproject.org/osstest/logs/92667/
>>>
>>> Regressions :-(
>>>
>>> Tests which did not succeed and are blocking,
>>> including tests which could not be run:
>>>  test-amd64-i386-libvirt      14 guest-saverestore         fail REGR. vs. 91479
>>>  test-amd64-amd64-libvirt-xsm 14 guest-saverestore         fail REGR. vs. 91479
>>>  test-amd64-amd64-libvirt-pair 21 guest-migrate/src_host/dst_host fail REGR. vs. 91479
>>>  test-amd64-i386-libvirt-pair 21 guest-migrate/src_host/dst_host fail REGR. vs. 91479
>>>  test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 guest-saverestore fail REGR. vs. 91479
>>>  test-amd64-i386-libvirt-xsm  14 guest-saverestore         fail REGR. vs. 91479
>>>  test-amd64-amd64-libvirt-vhd 13 guest-saverestore         fail REGR. vs. 91479
>>>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 guest-saverestore fail REGR. vs. 91479
>>>  test-amd64-amd64-libvirt     14 guest-saverestore         fail REGR. vs. 91479
>> All of these save/restore and migration failures show the following error on the
>> restore side:
>>
>> 2016-04-25 10:16:18 UTC libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: conversion helper [26771] exited with error status 1
>> 2016-04-25 10:16:18 UTC libxl: error: libxl_utils.c:507:libxl_read_exactly: file/stream truncated reading ipc msg header from domain 1 save/restore helper stdout pipe
>> 2016-04-25 10:16:18 UTC libxl: error: libxl_exec.c:129:libxl_report_child_exitstatus: domain 1 save/restore helper [26772] died due to fatal signal Terminated
>>
>> I'm not sure if this problem has already been addressed by recent
>> migration-related fixes.
> This is testing two different versions of libvirt against the same
> version of libxl.
>
> Looking at
> http://logs.test-lab.xenproject.org/osstest/logs/92667/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm/italia0---var-log-libvirt-libxl-libxl-driver.log
>
> 2016-04-25 08:36:03 UTC xc: progress: End of stream: 0/0    0%
>
> indicates that the save side is in v2 format (which is expected).  (I
> should add at least an info print in libxl_stream_write() indicating the
> pertinent details).
>
> On the restore side,
>
> 2016-04-25 08:36:20 UTC libxl: debug: libxl_stream_read.c:358:stream_header_done: Stream v2 (from legacy)
> 2016-04-25 08:36:20 UTC libxl: debug: libxl_stream_read.c:574:process_record: Record: 1, length 0
> 2016-04-25 08:36:20 UTC libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: conversion helper [3909] exited with error status 1
>
> which means that the restore code was told that the stream was in legacy
> format.  The legacy conversion script was forked and found that the
> stream wasn't legacy.  (I have no idea where the real error message from
> that went - it should be plumbed through into an info message, and it
> definitely does work when running `xl` on the command line).
>
> I suspect this is breakage from the LIBXL_ABI_VERSION changes.
>
> Because of the short-sighted mess that legacy migration was, it is not
> possible for libxl to distinguish a legacy stream from a v2 stream in
> libxl_domain_create_restore().  The caller (i.e. libvirt) must provide
> the correct stream version in libxl_domain_restore_params.

How do I handle the case of a libvirt+Xen migV2 host migrating a domain to a
libvirt+Xen migV1 host? Do you know how that scenario is handled in xl? Or is
migrating a domain from a migV2 host to a migV1 host not supported?

Thanks for the help.

Regards,
Jim
>
> ~Andrew
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
