
[Xen-devel] [xen-unstable test] 6663: regressions - FAIL



flight 6663 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/6663/

Regressions :-(

Tests which did not succeed and are blocking:
 test-amd64-amd64-pair        16 guest-start                fail REGR. vs. 6658
 test-amd64-amd64-pv           9 guest-start                fail REGR. vs. 6658
 test-amd64-amd64-win          7 windows-install            fail REGR. vs. 6658
 test-amd64-amd64-xl           4 xen-install                fail REGR. vs. 6658
 test-amd64-i386-pair         16 guest-start                fail REGR. vs. 6658
 test-amd64-i386-pv            9 guest-start                fail REGR. vs. 6658
 test-amd64-i386-win-vcpus1    7 windows-install            fail REGR. vs. 6658
 test-amd64-i386-win           7 windows-install            fail REGR. vs. 6658
 test-amd64-i386-xl-credit2   10 guest-saverestore          fail REGR. vs. 6658
 test-amd64-xcpkern-i386-pair 16 guest-start                fail REGR. vs. 6658
 test-amd64-xcpkern-i386-pv    9 guest-start                fail REGR. vs. 6658
 test-amd64-xcpkern-i386-win   7 windows-install            fail REGR. vs. 6658
 test-i386-i386-pair          16 guest-start                fail REGR. vs. 6658
 test-i386-i386-pv             9 guest-start                fail REGR. vs. 6658
 test-i386-i386-win            7 windows-install            fail REGR. vs. 6658
 test-i386-i386-xl-win         7 windows-install            fail REGR. vs. 6658
 test-i386-xcpkern-i386-pair  16 guest-start                fail REGR. vs. 6658
 test-i386-xcpkern-i386-pv     9 guest-start                fail REGR. vs. 6658
 test-i386-xcpkern-i386-win    4 xen-install                fail REGR. vs. 6658

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-xl-win                13 guest-stop        fail never pass
 test-amd64-i386-rhel6hvm-amd            8 guest-saverestore fail never pass
 test-amd64-i386-rhel6hvm-intel          8 guest-saverestore fail never pass
 test-amd64-i386-xl-win-vcpus1          13 guest-stop        fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-amd    8 guest-saverestore fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-intel  8 guest-saverestore fail never pass
 test-amd64-xcpkern-i386-xl-win         13 guest-stop        fail never pass

version targeted for testing:
 xen                  c9f745c153ec
baseline version:
 xen                  a65612bcbb92

------------------------------------------------------------
People who touched revisions under test:
  Gang Wei <gang.wei@xxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
------------------------------------------------------------

jobs:
 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            pass     
 build-i386-pvops                                             pass     
 test-amd64-amd64-xl                                          fail     
 test-amd64-i386-xl                                           pass     
 test-i386-i386-xl                                            pass     
 test-amd64-xcpkern-i386-xl                                   pass     
 test-i386-xcpkern-i386-xl                                    pass     
 test-amd64-i386-rhel6hvm-amd                                 fail     
 test-amd64-xcpkern-i386-rhel6hvm-amd                         fail     
 test-amd64-i386-xl-credit2                                   fail     
 test-amd64-xcpkern-i386-xl-credit2                           pass     
 test-amd64-i386-rhel6hvm-intel                               fail     
 test-amd64-xcpkern-i386-rhel6hvm-intel                       fail     
 test-amd64-i386-xl-multivcpu                                 pass     
 test-amd64-xcpkern-i386-xl-multivcpu                         pass     
 test-amd64-amd64-pair                                        fail     
 test-amd64-i386-pair                                         fail     
 test-i386-i386-pair                                          fail     
 test-amd64-xcpkern-i386-pair                                 fail     
 test-i386-xcpkern-i386-pair                                  fail     
 test-amd64-amd64-pv                                          fail     
 test-amd64-i386-pv                                           fail     
 test-i386-i386-pv                                            fail     
 test-amd64-xcpkern-i386-pv                                   fail     
 test-i386-xcpkern-i386-pv                                    fail     
 test-amd64-i386-win-vcpus1                                   fail     
 test-amd64-i386-xl-win-vcpus1                                fail     
 test-amd64-amd64-win                                         fail     
 test-amd64-i386-win                                          fail     
 test-i386-i386-win                                           fail     
 test-amd64-xcpkern-i386-win                                  fail     
 test-i386-xcpkern-i386-win                                   fail     
 test-amd64-amd64-xl-win                                      fail     
 test-i386-i386-xl-win                                        fail     
 test-amd64-xcpkern-i386-xl-win                               fail     


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   23098:c9f745c153ec
tag:         tip
user:        Keir Fraser <keir@xxxxxxx>
date:        Fri Mar 25 21:59:20 2011 +0000
    
    tools: vnet: Remove
    
    Build has been broken since at least 18969:d6889b3b6423 (early
    2009) and it has been unhooked from the top level build since forever
    AFAICT. The last actual development (as opposed to tree wide
    cleanups and build fixes) appears to have been 11594:6d7bba6443ef in
    2006. The functionality of vnet has apparently been superseded by
    VLANs, ebtables, Ethernet-over-IP, etc., all of which are well integrated
    with upstream kernels and distros.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
    
    
changeset:   23097:2aeebd5cbbad
user:        Keir Fraser <keir@xxxxxxx>
date:        Fri Mar 25 21:47:57 2011 +0000
    
    Remove unmaintained Access Control Module (ACM) from hypervisor.
    
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
    
    
changeset:   23096:a65612bcbb92
user:        Jan Beulich <jbeulich@xxxxxxxxxx>
date:        Fri Mar 25 09:03:17 2011 +0000
    
    x86/hpet: eliminate cpumask_lock
    
    According to the (now getting removed) comment in struct
    hpet_event_channel, this was to prevent accessing a CPU's
    timer_deadline after it got cleared from cpumask. This can be done
    without a lock altogether - hpet_broadcast_exit() can simply clear
    the bit, and handle_hpet_broadcast() can read timer_deadline before
    looking at the mask a second time (the cpumask bit was already
    found set by the surrounding loop).
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
    Acked-by: Gang Wei <gang.wei@xxxxxxxxx>
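    
    [Editor's note: for readers unfamiliar with the ordering argument
    above, the sketch below illustrates why reading timer_deadline
    before re-checking the cpumask bit makes the lock unnecessary. It
    is a simplified stand-in, not the actual Xen code: NR_CPUS,
    cpu_mask, timer_deadline and wake_cpu() are illustrative names,
    and C11 atomics replace Xen's own primitives.]
    
    /*
     * Illustrative sketch only (NOT the real Xen implementation).
     * The key invariant: the handler reads a CPU's deadline BEFORE
     * looking at the mask a second time, so a CPU that clears its
     * bit concurrently is skipped and its stale deadline is unused.
     */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    
    #define NR_CPUS 4
    
    static _Atomic uint64_t cpu_mask;         /* CPUs awaiting broadcast */
    static uint64_t timer_deadline[NR_CPUS];  /* per-CPU wakeup deadlines */
    
    static void wake_cpu(unsigned int cpu)
    {
        printf("waking CPU %u\n", cpu);
    }
    
    /* Entry path: record the deadline, then advertise the bit. */
    static void hpet_broadcast_enter(unsigned int cpu, uint64_t deadline)
    {
        timer_deadline[cpu] = deadline;
        atomic_fetch_or(&cpu_mask, 1ULL << cpu);
    }
    
    /* Exit path: with the lock gone, simply clear this CPU's bit. */
    static void hpet_broadcast_exit(unsigned int cpu)
    {
        atomic_fetch_and(&cpu_mask, ~(1ULL << cpu));
    }
    
    static void handle_hpet_broadcast(uint64_t now)
    {
        uint64_t mask = atomic_load(&cpu_mask);
    
        for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
            if (!(mask & (1ULL << cpu)))
                continue;                            /* surrounding loop */
    
            uint64_t deadline = timer_deadline[cpu]; /* read first ...   */
    
            if (!(atomic_load(&cpu_mask) & (1ULL << cpu)))
                continue;                            /* ... re-check bit */
    
            if (deadline <= now)
                wake_cpu(cpu);
        }
    }
    
    int main(void)
    {
        hpet_broadcast_enter(1, 100);
        hpet_broadcast_enter(2, 300);
        hpet_broadcast_exit(2);     /* CPU 2 left before its deadline */
        handle_hpet_broadcast(200); /* wakes CPU 1 only */
        return 0;
    }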
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel