
[Xen-devel] [xen-unstable test] 94050: regressions - trouble: blocked/broken/fail/pass



flight 94050 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/94050/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl            3 host-install(3)         broken REGR. vs. 94021
 test-amd64-i386-xl-xsm        3 host-install(3)         broken REGR. vs. 94021
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 3 host-install(3) broken REGR. vs. 94021
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 host-install(3) broken REGR. vs. 94021
 test-armhf-armhf-xl-credit2  15 guest-start/debian.repeat fail REGR. vs. 94021

Regressions which are regarded as allowable (not blocking):
 build-i386-rumpuserxen        6 xen-build                    fail   like 94021
 build-amd64-rumpuserxen       6 xen-build                    fail   like 94021
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop              fail like 94021
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail like 94021
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail like 94021
 test-amd64-amd64-xl-rtds      9 debian-install               fail   like 94021
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail like 94021

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumpuserxen-amd64  1 build-check(1)               blocked n/a
 test-amd64-i386-rumpuserxen-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 12 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 13 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore            fail never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c5ed88110cd1b72af643d7d9e255d587f2c90d3d
baseline version:
 xen                  c79fc6c4bee28b40948838a760b4aaadf6b5cd47

Last test of basis    94021  2016-05-11 07:32:40 Z    1 days
Testing same since    94050  2016-05-12 02:19:56 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  Olaf Hering <olaf@xxxxxxxxx>
  Paul Durrant <paul.durrant@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumpuserxen                                      fail    
 build-i386-rumpuserxen                                       fail    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           broken  
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           broken  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        broken  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       broken  
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumpuserxen-amd64                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumpuserxen-i386                             blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-amd64-xl-qemut-winxpsp3                           pass    
 test-amd64-i386-xl-qemut-winxpsp3                            pass    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-amd64-i386-xl host-install(3)
broken-step test-amd64-i386-xl-xsm host-install(3)
broken-step test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm host-install(3)
broken-step test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm host-install(3)

Not pushing.

------------------------------------------------------------
commit c5ed88110cd1b72af643d7d9e255d587f2c90d3d
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Wed May 11 09:59:08 2016 -0400

    xsplice: Unmask (aka reinstall NMI handler) if we need to abort.
    
    If we have to abort in xsplice_spin() we end up following
    the goto abort. But unfortunately we neglected to unmask.
    This patch fixes that.
    
    Reported-by: Martin Pohlack <mpohlack@xxxxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
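
For context, the control flow being fixed looks roughly like the minimal
sketch below; every name in it is hypothetical and illustrative, not the
actual Xen xsplice code:

    #include <errno.h>
    #include <stdbool.h>

    /* Hypothetical stand-ins for the real hypervisor helpers; these
     * names do not exist in Xen and only make the sketch self-contained. */
    static void arch_mask_nmis(void)   { /* install stub NMI handler */ }
    static void arch_unmask_nmis(void) { /* reinstall normal handler */ }
    static bool all_cpus_rendezvoused(void) { return true; }
    static bool timed_out(void)       { return false; }
    static void apply_payload(void)   { }

    /* Every exit path, including the abort path, must pass the unmask. */
    static int xsplice_spin_sketch(void)
    {
        int rc = 0;

        arch_mask_nmis();

        while ( !all_cpus_rendezvoused() )
        {
            if ( timed_out() )
            {
                rc = -EBUSY;
                goto abort;          /* previously left NMIs masked */
            }
        }

        apply_payload();

     abort:
        arch_unmask_nmis();          /* the fix: unmask on abort as well */
        return rc;
    }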

commit 1c7fa3dc039487d18ad0c6fb6b773c831dca5e5d
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Wed May 11 12:14:45 2016 +0100

    tools/xendomains: Create lockfile on start unconditionally
    
    At the moment, the xendomains init script will only create a lockfile
    if, when started, it actually does something -- either it tries to
    restore a previously saved domain as a result of XENDOMAINS_RESTORE,
    or it tries to create a domain as a result of XENDOMAINS_AUTO.
    
    RedHat-based SYSV init systems try to call "${SERVICE} shutdown" only
    for services which actually have an actively running component, and
    they use the existence of /var/lock/subsys/${SERVICE} to determine
    which services are running.
    
    This means that at the moment, on RedHat-based SYSV systems (such as
    CentOS 6), if you enable xendomains, and have XENDOMAINS_RESTORE set
    to "true", but don't happen to start a VM, then your running VMs will
    not be suspended on shutdown.
    
    Since the lockfile doesn't really have any effect other than to
    prevent duplicate starting, just create it unconditionally every time
    we start the xendomains script.
    
    The other option would have been to touch the lockfile if
    XENDOMAINS_RESTORE was true regardless of whether there were any
    domains to be restored.  But this would mean that if you started with
    the xendomains script active but XENDOMAINS_RESTORE set to "false",
    and then changed it to "true", then xendomains would still not run the
    next time you shut down.  This seems to me to violate the principle of
    least surprise.
    
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Olaf Hering <olaf@xxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit 1209ba4218ae03067c4d42392229263750efe814
Author: George Dunlap <george.dunlap@xxxxxxxxxx>
Date:   Wed May 11 12:14:44 2016 +0100

    hotplug: Fix xendomains lock path for RHEL-based systems
    
    Commit c996572 changed the LOCKFILE path from a check between two
    hardcoded paths (/var/lock/subsys/ or /var/lock) to using the
    XEN_LOCK_DIR variable designated at configure time.  Since
    XEN_LOCK_DIR doesn't (and shouldn't) have the 'subsys' suffix, this
    effectively moves all the lock files to /var/lock by default.
    
    Unfortunately, this breaks xendomains on RedHat-based SYSV init
    systems.  Such systems try to call "${SERVICE} shutdown" only for
    services which actually have an actively running component, and they
    use the existence of /var/lock/subsys/${SERVICE} to determine which
    services are running.
    
    Changing XEN_LOCK_DIR to /var/lock/subsys is not suitable, as only
    system services like xendomains should create lockfiles there; other
    locks (such as the console locks) should be created in /var/lock
    instead.
    
    Instead, re-instate the check for the subsys/ subdirectory of the lock
    directory in the xendomains script.
    
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Olaf Hering <olaf@xxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit 46ed6a814c2867260c0ebd9a7399466c801637be
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Mon May 9 17:43:14 2016 +0100

    tools: configure correct trace backend for QEMU
    
    Newer versions of the QEMU source have replaced the 'stderr' trace
    backend with 'log'. This patch adjusts the tools Makefile to test for
    the 'log' backend and specify it if it is available.
    
    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

commit a6abcd8f758d968f6eb4d93ab37db4388eb9df7e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:47:21 2016 +0200

    x86: correct remaining extended CPUID level checks
    
    We should consistently check that the upper 16 bits equal 0x8000, and
    only then check that the full value is >= the desired level.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
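
The described pattern amounts to something like the sketch below (a
hedged illustration; the helper is hypothetical, not the actual Xen
code). Comparing the raw CPUID 0x80000000 EAX value directly against a
level can wrongly pass when a CPU without extended leaves returns some
other large value, hence the two-step check:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * max_ext: EAX returned by CPUID leaf 0x80000000 (maximum extended
     * leaf supported); wanted: the extended leaf we need, e.g. 0x80000008.
     */
    static bool ext_cpuid_level_ok(uint32_t max_ext, uint32_t wanted)
    {
        /* The upper 16 bits must be exactly 0x8000, i.e. the value is
         * a plausible extended-leaf number at all... */
        if ( (max_ext >> 16) != 0x8000 )
            return false;

        /* ...and only then is comparing the full value meaningful. */
        return max_ext >= wanted;
    }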

commit a24edf49f5195fc3ec54584e42a6cdef6d248221
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:46:43 2016 +0200

    x86: cap address bits CPUID output
    
    Don't use, or report to guests, more address bits than we are capable
    of handling.
    
    At once
    - correct the involved extended CPUID level checks,
    - simplify the code in hvm_cpuid() and mtrr_top_of_ram().
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
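
"Capping" here means clamping the physical-address-width byte (EAX[7:0]
of CPUID leaf 0x80000008) before it reaches the guest. A hedged sketch,
with illustrative names rather than Xen's actual implementation:

    #include <stdint.h>

    /* Clamp the address-bits byte of the leaf 0x80000008 EAX value to
     * what the host/hypervisor can actually handle. */
    static uint32_t cap_addr_bits(uint32_t leaf_eax, uint8_t host_max_bits)
    {
        uint8_t bits = leaf_eax & 0xff;     /* physical address bits */

        if ( bits > host_max_bits )
            bits = host_max_bits;           /* never advertise more */

        return (leaf_eax & ~0xffu) | bits;  /* splice the capped value back */
    }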

commit 5590bd17c474b3cff4a86216b17349a3045f6158
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed May 11 09:46:02 2016 +0200

    XSA-77: widen scope again
    
    As discussed at the hackathon, this avoids us having to issue security
    advisories for issues affecting only heavily disaggregated tool stack
    setups, which no-one appears to use (or else those who do should step
    up to get things into shape).
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel