
[Xen-devel] [xen-unstable baseline-only test] 67721: regressions - FAIL



This run is configured for baseline tests only.

flight 67721 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/67721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm      15 guest-start/debian.repeat fail REGR. vs. 67718

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-i386-pvgrub 10 guest-start                  fail   like 67718
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail like 67718
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail like 67718
 test-amd64-amd64-xl-qemut-debianhvm-amd64 9 debian-hvm-install fail like 67718
 test-amd64-amd64-xl-qemut-winxpsp3  9 windows-install          fail like 67718
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop              fail like 67718
 test-amd64-i386-xl-qemut-debianhvm-amd64  9 debian-hvm-install fail like 67718
 test-amd64-i386-qemut-rhel6hvm-intel  9 redhat-install         fail like 67718
 test-amd64-amd64-qemuu-nested-intel 16 debian-hvm-install/l1/l2 fail like 67718
 test-amd64-amd64-amd64-pvgrub 10 guest-start                  fail  like 67718
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 9 debian-hvm-install fail like 67718
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  9 windows-install    fail like 67718
 test-amd64-i386-xl-qemut-winxpsp3  9 windows-install           fail like 67718
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail like 67718

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)               blocked  n/a
 build-i386-rumprun            5 rumprun-build                fail   never pass
 build-amd64-rumprun           5 rumprun-build                fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 11 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore            fail never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 16 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      11 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-stop              fail never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass

version targeted for testing:
 xen                  115e4c5e52c14c126cd8ae0dfe0322c95b65e3c8
baseline version:
 xen                  3ab5fb9a9eeb2b610d5d74419e0b1ffaf18484f2

Last test of basis    67718  2016-09-15 16:14:16 Z    1 days
Testing same since    67721  2016-09-16 06:17:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dario.faggioli@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Juergen Gross <jgross@xxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 build-amd64-rumprun                                          fail    
 build-i386-rumprun                                           fail    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-armhf-armhf-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvh-amd                                  fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-rumprun-amd64                               blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-i386-rumprun-i386                                 blocked 
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvh-intel                                fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-armhf-armhf-xl-midway                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-pygrub                                      pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-i386-xl-qemut-winxpsp3                            fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass    
 test-amd64-i386-xl-qemuu-winxpsp3                            pass    


------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

------------------------------------------------------------
commit 115e4c5e52c14c126cd8ae0dfe0322c95b65e3c8
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Thu Sep 15 12:35:04 2016 +0100

    xen: credit2: properly schedule migration of a running vcpu.
    
    When migrating a vcpu that is currently running, we need
    to ask the scheduler to chime in as soon as possible, so
    that the vcpu is actually stopped and moved.
    
    Make sure this happens by, after setting all the relevant
    flags, raising the scheduler softirq.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>

commit f83fc393b2bb0a8b97bca07d810684a2c709aaa8
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Thu Sep 15 12:35:03 2016 +0100

    xen: credit1: fix mask to be used for tickling in Credit1
    
    If there are idle pcpus inside the waking vcpu's
    soft-affinity mask, we should really tickle one
    of them (this is one of the purposes of the
    __runq_tickle() function itself!), not just
    any idle pcpu.
    
    The issue was introduced in 02ea5031825d
    ("credit1: properly deal with pCPUs not in any cpupool"),
    which changed how idle_mask is used without updating the
    bottom of the function, where it is also referenced.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
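
    As an illustrative model of the fixed behaviour (plain Python, not
    Xen code; the function name and the deterministic pick are made up
    for the sketch), tickling should prefer idlers inside the waking
    vcpu's soft-affinity before falling back to its hard-affinity:

    ```python
    def pick_pcpu_to_tickle(idle_pcpus, soft_affinity, hard_affinity):
        """Prefer an idle pcpu inside the waking vcpu's soft-affinity;
        only fall back to idle pcpus in its hard-affinity when none
        exist.  (The bug consulted the full idle mask at the bottom of
        __runq_tickle(), so "just any idle pcpu" could be picked.)"""
        soft_idlers = idle_pcpus & soft_affinity
        if soft_idlers:
            return min(soft_idlers)  # deterministic choice for the sketch
        hard_idlers = idle_pcpus & hard_affinity
        if hard_idlers:
            return min(hard_idlers)
        return None  # no suitable idler to tickle

    # pcpus 5 and 7 are idle; 5 is in the soft-affinity, so it is picked
    print(pick_pcpu_to_tickle({5, 7}, {4, 5}, set(range(8))))  # -> 5
    ```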

commit b19da0ee4f751ff628662a11b7f5d05ff4038977
Author: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Date:   Thu Sep 15 12:35:03 2016 +0100

    xen: credit1: small optimization in Credit1's tickling logic.
    
    If, when vcpu x wakes up, there are no idle pcpus in x's
    soft-affinity, we just go ahead and look at its hard
    affinity. This means that, in __runq_tickle(), whenever
    new_idlers_empty is true, balance_step is equal to
    CSCHED_BALANCE_HARD_AFFINITY, and calling
    csched_balance_cpumask() for any vcpu would just
    return that vcpu's cpu_hard_affinity.
    
    Therefore, don't bother calling it (it's just pure
    overhead) and use cpu_hard_affinity directly.
    
    For this very reason, this patch should only be
    a (slight) optimization, and entail no functional
    change.
    
    As a side note, it would make sense to do what the
    patch does even if we reached the
    [[ new_idlers_empty && new->pri > cur->pri ]] branch
    with balance_step equal to CSCHED_BALANCE_SOFT_AFFINITY.
    In fact, what is actually happening is:
     - vcpu x is waking up, and (since there aren't suitable
       idlers, and it's entitled for it) it is preempting
       vcpu y;
     - vcpu y's hard-affinity is a superset of its
       soft-affinity mask.
    
    Therefore, it makes sense to use the widest possible
    mask: doing so maximizes the probability of finding
    an idle pcpu in it, to which we can send vcpu y,
    which will then be able to run.
    
    While there, also fix the comment, which had
    awkwardly nested parentheses.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
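
    As a toy model (plain Python; the names echo the commit text but
    are not the Xen implementation), the optimization rests on the
    fact that, on the hard-affinity balance step, the computed mask
    is the hard affinity itself, so the call can be skipped:

    ```python
    BALANCE_SOFT_AFFINITY = 0
    BALANCE_HARD_AFFINITY = 1

    def balance_cpumask(soft_affinity, hard_affinity, step):
        """Sketch of csched_balance_cpumask(): the soft step intersects
        the two affinities; the hard step returns hard affinity as-is."""
        if step == BALANCE_SOFT_AFFINITY:
            return soft_affinity & hard_affinity
        return hard_affinity

    hard = {0, 1, 2, 3}
    soft = {1, 2}
    # On the hard-affinity step the result is always the hard affinity
    # itself, so using cpu_hard_affinity directly changes nothing.
    assert balance_cpumask(soft, hard, BALANCE_HARD_AFFINITY) == hard
    ```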

commit c496f489f02515b8f687dc4effe259180a49a279
Author: Juergen Gross <jgross@xxxxxxxx>
Date:   Tue Sep 6 12:51:06 2016 +0200

    libxl: add "xl qemu-monitor-command"
    
    Add a new xl command "qemu-monitor-command" to issue arbitrary commands
    to a domain's device model. Syntax is:
    
    xl qemu-monitor-command <domain> <command>
    
    The command is issued via the QMP human-monitor-command command.
    Any information the command returns is printed to stdout.
    
    Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
    Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
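
    Since the command is routed through QMP's human-monitor-command,
    the request sent to the domain's device model looks roughly like
    the following (a sketch of the message construction only; the
    helper name and the "info block" monitor command are illustrative):

    ```python
    import json

    def build_hmc_request(command_line):
        """Build the QMP 'human-monitor-command' request wrapping an
        arbitrary human-monitor command, which is what
        `xl qemu-monitor-command <domain> <command>` forwards to the
        domain's device model."""
        return {
            "execute": "human-monitor-command",
            "arguments": {"command-line": command_line},
        }

    # e.g. `xl qemu-monitor-command guest0 "info block"` (domain name
    # and monitor command are illustrative) issues roughly:
    print(json.dumps(build_hmc_request("info block")))
    ```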

commit 167291f0d6b87ccb1f1f4c4f73e9231b811ead03
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 15 10:07:48 2016 +0200

    x86: fold code in load_segments()
    
    No need to have the same logic twice. (Note that the type change does
    not affect the put_user() instances, as they derive their access size
    from the second [pointer] argument.)
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

commit 5ae7811c5c2b94c43930858d2e2880bc10cbf242
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Sep 15 10:06:56 2016 +0200

    x86/EFI: don't accept 64-bit base relocations on page tables
    
    Page tables get pre-populated with physical addresses which, due to
    living inside the Xen image, will never exceed 32 bits in width.
    That in turn means the relocation-generating tool produces 32-bit
    relocations for them, instead of the 64-bit ones needed for
    relocating virtual addresses. Hence, instead of special-casing page
    tables in the processing of 64-bit relocations, let's be more rigid
    and refuse them (as being indicative of something having gone
    wrong elsewhere in the build process).
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
