[Xen-devel] [xen-unstable test] 5762: regressions - FAIL

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [xen-unstable test] 5762: regressions - FAIL
From: xen.org <ian.jackson@xxxxxxxxxxxxx>
Date: Tue, 15 Feb 2011 03:04:59 +0000
Cc: ian.jackson@xxxxxxxxxxxxx
Delivery-date: Mon, 14 Feb 2011 19:05:59 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
flight 5762 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/5762/

Regressions :-(

Tests which did not succeed and are blocking:
 test-i386-xcpkern-i386-pair  16 guest-start                fail REGR. vs. 5740

Tests which did not succeed, but are not blocking,
including regressions (tests previously passed) regarded as allowable:
 test-amd64-amd64-win         16 leak-check/check             fail   never pass
 test-amd64-amd64-xl-win      13 guest-stop                   fail   never pass
 test-amd64-i386-rhel6hvm-amd  8 guest-saverestore            fail   never pass
 test-amd64-i386-rhel6hvm-intel  8 guest-saverestore            fail never pass
 test-amd64-i386-win-vcpus1   16 leak-check/check             fail   never pass
 test-amd64-i386-win          16 leak-check/check             fail   never pass
 test-amd64-i386-xl-win-vcpus1 13 guest-stop                   fail  never pass
 test-amd64-xcpkern-i386-rhel6hvm-amd  8 guest-saverestore      fail never pass
 test-amd64-xcpkern-i386-rhel6hvm-intel  8 guest-saverestore    fail never pass
 test-amd64-xcpkern-i386-win  16 leak-check/check             fail   never pass
 test-amd64-xcpkern-i386-xl-win 13 guest-stop                   fail never pass
 test-i386-i386-win           16 leak-check/check             fail   never pass
 test-i386-i386-xl-win        13 guest-stop                   fail   never pass
 test-i386-xcpkern-i386-win   16 leak-check/check             fail   never pass

version targeted for testing:
 xen                  137ad3347504
baseline version:
 xen                  c64dcc4d2eca

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxxxx>
  Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Patrick Scharrenberg <pittipatti@xxxxxx>
  Shriram Rajagopalan <rshriram@xxxxxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
  Tim Deegan <Tim.Deegan@xxxxxxxxxx>
  Wei Gang <gang.wei@xxxxxxxxx>
------------------------------------------------------------

jobs:
 build-i386-xcpkern                                           pass     
 build-amd64                                                  pass     
 build-i386                                                   pass     
 build-amd64-oldkern                                          pass     
 build-i386-oldkern                                           pass     
 build-amd64-pvops                                            pass     
 build-i386-pvops                                             pass     
 test-amd64-amd64-xl                                          pass     
 test-amd64-i386-xl                                           pass     
 test-i386-i386-xl                                            pass     
 test-amd64-xcpkern-i386-xl                                   pass     
 test-i386-xcpkern-i386-xl                                    pass     
 test-amd64-i386-rhel6hvm-amd                                 fail     
 test-amd64-xcpkern-i386-rhel6hvm-amd                         fail     
 test-amd64-i386-xl-credit2                                   pass     
 test-amd64-xcpkern-i386-xl-credit2                           pass     
 test-amd64-i386-rhel6hvm-intel                               fail     
 test-amd64-xcpkern-i386-rhel6hvm-intel                       fail     
 test-amd64-i386-xl-multivcpu                                 pass     
 test-amd64-xcpkern-i386-xl-multivcpu                         pass     
 test-amd64-amd64-pair                                        pass     
 test-amd64-i386-pair                                         pass     
 test-i386-i386-pair                                          pass     
 test-amd64-xcpkern-i386-pair                                 pass     
 test-i386-xcpkern-i386-pair                                  fail     
 test-amd64-amd64-pv                                          pass     
 test-amd64-i386-pv                                           pass     
 test-i386-i386-pv                                            pass     
 test-amd64-xcpkern-i386-pv                                   pass     
 test-i386-xcpkern-i386-pv                                    pass     
 test-amd64-i386-win-vcpus1                                   fail     
 test-amd64-i386-xl-win-vcpus1                                fail     
 test-amd64-amd64-win                                         fail     
 test-amd64-i386-win                                          fail     
 test-i386-i386-win                                           fail     
 test-amd64-xcpkern-i386-win                                  fail     
 test-i386-xcpkern-i386-win                                   fail     
 test-amd64-amd64-xl-win                                      fail     
 test-i386-i386-xl-win                                        fail     
 test-amd64-xcpkern-i386-xl-win                               fail     


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
changeset:   22919:137ad3347504
tag:         tip
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Mon Feb 14 17:02:55 2011 +0000
    
    libxl: fix migrate for HVM guests
    
    Prior to 22909:6868f7f3ab3f libxl would loop waiting simultaneously
    for the domain to acknowledge a PV suspend request (by clearing the
    XenStore node) and for the domain to actually suspend. For HVM guests
    without PV drivers this same loop was simply waiting for the domain to
    suspend.
    
    In 22909:6868f7f3ab3f the original loop was split into two loops
    (first waiting for the acknowledgement and then for the actual
    suspend). This caused libxl to incorrectly wait for an HVM guest
    without PV drivers to acknowledge the XenStore request, which is not
    something it would ever do.
    
    Fix this by only waiting for an acknowledgement from a guest which
    contains PV drivers.
    
    Previously we were also making the request regardless of whether the
    guest had PV drivers; change that to only make the request if the
    guest has PV drivers.
    
    Lastly, there is no need to sample HVM_PARAM_ACPI_S_STATE twice, and
    not doing so simplifies the test for PVHVM vs. normal HVM guests.
    
    Tested with:
        Windows with GPL PV drivers (event channel suspend mode)
        Windows without PV drivers (xc_domain_shutdown mode)
        Linux PV (PV with XenBus control node mode)
        Linux HVM (PVHVM with XenBus control node mode (*))
        Linux HVM (xc_domain_shutdown mode)
    
    (*) In this case the kernel didn't actually suspend, due to:
        PM: Device input1 failed to suspend: error -22
        xen suspend: dpm_suspend_start -22
        which may be a misconfiguration in my setup or may be a kernel
        bug, but the libxl side dealt with this as gracefully as it could.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
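
A rough sketch of the suspend flow this commit describes, with hypothetical helper names standing in for the actual libxl internals (illustrative only, not the patch itself):

    /* Sketch only: guest_has_pv_drivers(), send_suspend_request() etc. are
     * hypothetical stand-ins, not real libxl symbols. */
    #include <stdbool.h>

    bool guest_has_pv_drivers(int domid);     /* e.g. PVHVM vs. plain HVM */
    void send_suspend_request(int domid);     /* write the xenstore control node */
    bool wait_for_acknowledgement(int domid); /* guest cleared the node? */
    bool wait_for_suspend(int domid);         /* domain actually suspended? */

    int suspend_guest(int domid)
    {
        /* Only make the xenstore request, and only wait for it to be
         * acknowledged, when the guest has PV drivers; a plain HVM guest
         * will never touch the control node. */
        if (guest_has_pv_drivers(domid)) {
            send_suspend_request(domid);
            if (!wait_for_acknowledgement(domid))
                return -1;
        }

        /* In every case, wait for the domain to actually suspend. */
        return wait_for_suspend(domid) ? 0 : -1;
    }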
    
    
changeset:   22918:f8097fe3cf05
user:        Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
date:        Mon Feb 14 16:56:20 2011 +0000
    
    xl: Support more than 32 vcpus for xl vcpu-set
    
    xl vcpu-set currently uses a 32-bit mask for specifying which cpus are
    to be set online. This restricts the number of cpus supported by this
    command.
    
    The patch switches to libxl_cpumap; the interface of
    libxl_set_vcpuonline() is changed accordingly.
    
    Signed-off-by: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
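
For illustration, a minimal sketch of why moving from a fixed 32-bit mask to a dynamically sized bitmap lifts the VCPU limit; this is a generic bitmap, not the libxl_cpumap API itself:

    #include <stdint.h>
    #include <stdlib.h>

    /* Generic bitmap sketch; libxl_cpumap is the real type used by the patch. */
    struct vcpu_map {
        uint8_t *bits;
        unsigned int nbits;
    };

    static int vcpu_map_alloc(struct vcpu_map *m, unsigned int nbits)
    {
        m->bits = calloc((nbits + 7) / 8, 1);
        m->nbits = nbits;
        return m->bits ? 0 : -1;
    }

    static void vcpu_map_set(struct vcpu_map *m, unsigned int vcpu)
    {
        /* A uint32_t mask tops out at VCPU 31; this scales with nbits. */
        if (vcpu < m->nbits)
            m->bits[vcpu / 8] |= 1u << (vcpu % 8);
    }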
    
    
changeset:   22917:0e7a10dc7617
user:        Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
date:        Mon Feb 14 16:55:00 2011 +0000
    
    xl: correct xl cpupool-create with extra parameters
    
    xl cpupool-create doesn't always accept extra parameters specified on
    the command line, because a 0-byte is missing at the end of the
    configuration file contents.
    
    Signed-off-by: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
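
The bug is the classic missing NUL terminator on a buffer later treated as a string. A hedged sketch of the pattern (illustrative names, not the actual xl code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read a config file into memory, leaving room for the terminating
     * 0-byte so string-based parsing of the extra parameters cannot run
     * off the end of the buffer. */
    static char *read_config(const char *path, long *len_out)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return NULL;
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        rewind(f);
        char *buf = malloc(len + 1);              /* +1 for the 0-byte */
        if (buf && fread(buf, 1, len, f) == (size_t)len) {
            buf[len] = '\0';                      /* the byte the bug omitted */
        } else {
            free(buf);
            buf = NULL;
        }
        fclose(f);
        if (len_out)
            *len_out = len;
        return buf;
    }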
    
    
changeset:   22916:51bd89ca047d
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Mon Feb 14 16:49:03 2011 +0000
    
    libxl: implement trigger s3resume
    
    This is the equivalent of xm trigger s3resume and it is implemented the
    same way: using an ACPI state change.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Tested-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
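
A hedged sketch of an ACPI-state-change based resume using the libxc call of that era (xc_set_hvm_param); treat the exact call and parameter value as an assumption rather than a quote from the patch:

    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Assumption: setting the ACPI S-state parameter back to 0 (working)
     * wakes an HVM guest that has put itself into S3. */
    int trigger_s3resume_sketch(xc_interface *xch, uint32_t domid)
    {
        return xc_set_hvm_param(xch, domid, HVM_PARAM_ACPI_S_STATE, 0);
    }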
    
    
changeset:   22915:af84691a6cf9
user:        Wei Gang <gang.wei@xxxxxxxxx>
date:        Mon Feb 14 10:41:12 2011 +0000
    
    x86: Fix S3 resume for HPET MSI IRQ case
    
    Jan Beulich found that for S3 resume on platforms without the ARAT
    feature but with an MSI-capable HPET, request_irq() is called in
    hpet_setup_msi_irq() for an IRQ that is already set up (no
    release_irq() is called during S3 suspend), so we always fall back to
    using legacy_hpet_event.
    
    Fix it by conditionally calling request_irq() for 4.1. The plan is to
    split the S3 resume path from the boot path post 4.1, as Jan
    suggested.
    
    Signed-off-by: Wei Gang <gang.wei@xxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxxxx>
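
A minimal sketch of the conditional described above; the structure and the already-bound flag are hypothetical stand-ins for the Xen HPET code:

    /* Hypothetical types/helpers; not the real xen/arch/x86/hpet.c code. */
    struct hpet_channel {
        int irq;
        int msi_bound;   /* set once request_irq() has succeeded */
    };

    int request_irq_stub(int irq);   /* stand-in for request_irq() */

    static int hpet_setup_msi_irq_sketch(struct hpet_channel *ch)
    {
        /* On S3 resume the IRQ was never released, so only request it on
         * first setup; otherwise reuse the existing binding instead of
         * failing and falling back to legacy_hpet_event. */
        if (!ch->msi_bound) {
            if (request_irq_stub(ch->irq) != 0)
                return -1;
            ch->msi_bound = 1;
        }
        return 0;
    }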
    
    
changeset:   22914:218b5fa834aa
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Mon Feb 14 10:39:34 2011 +0000
    
    MAINTAINERS: Update Remus paths
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Shriram Rajagopalan <rshriram@xxxxxxxxx>
    
    
changeset:   22913:4ea36cce2519
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Feb 14 09:10:22 2011 +0000
    
    MAINTAINERS: Add Remus maintainer.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    
    
changeset:   22912:157874138879
user:        Keir Fraser <keir@xxxxxxx>
date:        Mon Feb 14 09:05:14 2011 +0000
    
    Revert 22900:a0ef80c99264
    
    The check it adds is already present in the function's sole caller.
    
    Signed-off-by: Keir Fraser <keir@xxxxxxx>
    
    
changeset:   22911:67f2fed57034
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 18:22:37 2011 +0000
    
    QEMU_TAG update
    
    
changeset:   22910:d4bc41a8cecb
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 18:21:35 2011 +0000
    
    tools/hotplug/Linux: Use correct device name for vifs in setup scripts
    
    In vif-common.sh, set the shell variable "dev" to the new interface
    name when interfaces are renamed, and consistently use this variable
    in all the vif scripts.
    
    This fixes hotplug of renamed interfaces.
    
    From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    From: Patrick Scharrenberg <pittipatti@xxxxxx>
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Signed-off-by: Patrick Scharrenberg <pittipatti@xxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    
    
changeset:   22909:6868f7f3ab3f
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Fri Feb 11 17:57:32 2011 +0000
    
    libxl/xl: improve behaviour when guest fails to suspend itself.
    
    The PV suspend protocol requires guest co-operation: the guest must
    respond to a suspend request written to the xenstore control node by
    clearing the node and then making a suspend hypercall.
    
    Currently when a guest fails to do this libxl times out and returns
    a generic failure code to the caller.
    
    In response to this failure xl attempts to resume the guest. However,
    if the guest has not responded to the suspend request then there is no
    guarantee that the guest has made the suspend hypercall (in fact it is
    quite unlikely). Since the resume process attempts to modify the
    return value of the hypercall (to indicate a cancelled suspend) this
    results in the guest eax/rax register being corrupted!
    
    To fix this, change libxl to do the following:
       * Wait for the guest to acknowledge the suspend request.
         - on timeout cancel the suspend request.
           - if cancellation is successful then return a new error code to
             indicate that the guest is not responding.
           - if the cancel does not succeed then we raced with the guest
             which actually did acknowledge at the last minute, so
             continue.
       * Wait for the guest to suspend.
         - on timeout return the standard error code as before
       * Guest successfully suspended, return success.
    
    Lastly, in xl, do not attempt to resume a guest if it has not
    responded to the suspend request.
    
    Tested by live migration of PVops kernels which either ignore the
    suspend request, have already crashed, or suspend/resume correctly. In
    the first two cases the source domain is left alone (and continues to
    function in the first case), and in the third the migration is
    successful.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
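
A rough sketch of the flow in the list above; the helper names and error codes are illustrative, not the real libxl ones:

    #include <stdbool.h>

    /* Hypothetical helpers wrapping the xenstore/timeout plumbing. */
    void write_suspend_request(int domid);
    bool wait_for_ack(int domid);          /* guest cleared the control node? */
    bool cancel_suspend_request(int domid);
    bool wait_for_suspended(int domid);

    enum { SUSPEND_OK = 0, ERR_GUEST_NOT_RESPONDING = -1, ERR_TIMED_OUT = -2 };

    int pv_suspend_sketch(int domid)
    {
        write_suspend_request(domid);

        if (!wait_for_ack(domid)) {
            /* Try to withdraw the request.  If that succeeds the guest
             * never saw it, so the caller must not attempt a "resume"
             * (which would corrupt eax/rax). */
            if (cancel_suspend_request(domid))
                return ERR_GUEST_NOT_RESPONDING;
            /* Otherwise the guest acknowledged at the last minute; continue. */
        }

        if (!wait_for_suspended(domid))
            return ERR_TIMED_OUT;

        return SUSPEND_OK;
    }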
    
    
changeset:   22908:c4b843d0b5f4
user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
date:        Fri Feb 11 17:56:24 2011 +0000
    
    libxl: allow guest to write "control/shutdown" xenstore node.
    
    The PV shutdown/reboot/suspend protocol requires that the guest
    acknowledge a request by clearing the node; therefore it is necessary
    to allow the guest to write to the node.
    
    Currently libxl is quite relaxed about this protocol and doesn't
    really seem to mind that the guest is unable to write the node to
    perform the acknowledgement. However, in a follow-up patch libxl needs
    to be able to detect that a guest has acknowledged a suspend request.
    
    A side effect of this change is that an empty "control/shutdown" node
    is created upon domain creation instead of only being created when a
    shutdown/reboot/suspend is requested. This should not (and does not
    in my tests) have any negative impact on the guest.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
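
For illustration, a hedged sketch using the public libxenstore API to create the node with guest-write permission; the header name, path construction and permission choice are assumptions, not taken from the patch:

    #include <stdio.h>
    #include <xs.h>    /* libxenstore header of this era; later <xenstore.h> */

    int make_control_shutdown_writable(struct xs_handle *xsh, unsigned int domid)
    {
        char path[64];
        snprintf(path, sizeof(path), "/local/domain/%u/control/shutdown", domid);

        /* Create the (empty) node... */
        if (!xs_write(xsh, XBT_NULL, path, "", 0))
            return -1;

        /* ...and let the guest read and write it, so it can clear the
         * node to acknowledge a shutdown/reboot/suspend request. */
        struct xs_permissions perms[] = {
            { .id = 0,     .perms = XS_PERM_NONE },
            { .id = domid, .perms = XS_PERM_READ | XS_PERM_WRITE },
        };
        return xs_set_permissions(xsh, XBT_NULL, path, perms, 2) ? 0 : -1;
    }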
    
    
changeset:   22907:9280f1674705
user:        Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
date:        Fri Feb 11 17:53:08 2011 +0000
    
    libxl: do not call libxl__file_reference_unmap twice
    
    Fix a double free due to libxl__file_reference_unmap(&info->kernel)
    being called multiple times: first at the end of libxl__domain_build
    and then in libxl_domain_build_info_destroy.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    Committed-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
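
A minimal sketch of the general pattern for making an unmap safe to reach from two cleanup paths; it shows the idea only and is not the actual libxl fix:

    #include <stdlib.h>

    /* Hypothetical stand-in for libxl's file reference type. */
    struct file_ref {
        void *data;
        size_t size;
    };

    static void file_ref_unmap(struct file_ref *f)
    {
        if (!f || !f->data)
            return;        /* already unmapped: the second call is a no-op */
        free(f->data);     /* stand-in for munmap() in the real code */
        f->data = NULL;
        f->size = 0;
    }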
    
    
changeset:   22906:4376c4f0196f
user:        Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
date:        Fri Feb 11 17:49:13 2011 +0000
    
    libxc: increase lzma max memory constant to 128MB
    
    According to lzma's configure.ac (!) the minimum memory limit to cope
    with arbitrary input is 128MB (!).
    
    This is obviously an unreasonable amount of memory for this kind of
    task, but we need to increase the constant limit for it not to
    randomly fail.  So do so.
    
    Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
    
    
changeset:   22905:6c22ae0f6540
user:        Tim Deegan <Tim.Deegan@xxxxxxxxxx>
date:        Fri Feb 11 16:51:44 2011 +0000
    
    x86/mm: fix typo in 22897:21df67ee7040
    that caused the wrong page to be freed.
    
    Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
    
    
changeset:   22904:c64dcc4d2eca
user:        Keir Fraser <keir@xxxxxxx>
date:        Thu Feb 10 17:24:41 2011 +0000
    
    Update Xen version to 4.1.0-rc5-pre
    
    
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
