Re: [Xen-devel] [XTF PATCH v2 4/4] xtf-runner: regularise runner exit code
On Fri, Jul 22, 2016 at 10:49:18AM +0100, Andrew Cooper wrote:
> On 22/07/16 10:43, Wei Liu wrote:
>
> >The script now returns the most severe result. Document the exit code in
> >help string.
> >
> >Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> >---
> > xtf-runner | 22 ++++++++++++++++++----
> > 1 file changed, 18 insertions(+), 4 deletions(-)
> >
> >diff --git a/xtf-runner b/xtf-runner
> >index 1c96750..15b98c6 100755
> >--- a/xtf-runner
> >+++ b/xtf-runner
> >@@ -251,23 +251,30 @@ def run_tests(args):
> >     if not len(tests):
> >         raise RunnerError("No tests to run")
> >
> >-    rc = 0
> >+    rc = all_results.index('SUCCESS')
> >     results = []
> >
> >     for test in tests:
> >         res = run_test(test)
> >
> >-        if res != "SUCCESS":
> >-            rc = 1
> >+        res_idx = all_results.index(res);
> >+        if res_idx > rc:
> >+            rc = res_idx
> >
> >         results.append(res)
> >
> >+    if rc == exit_code('SUCCESS'):
> >+        for res in results:
> >+            if res == 'SKIP':
> >+                rc = exit_code('SKIP')
> >+                break
>
> Why is this conditional needed?  SKIP has index 1 so will automatically
> displace SUCCESS in the change above.
>

Forgot to delete that hunk. I will resend.

> >+
> >     print "\nCombined test results:"
> >     for test, res in zip(tests, results):
> >         print "%-40s %s" % (test, res)
> >
> >-    return rc
> >+    return exit_code(all_results[rc])
> >
> > def main():
> >@@ -308,6 +315,13 @@ def main():
> >              "    List all 'functional' or 'special' tests\n"
> >              "  ./xtf-runner --list hvm64\n"
> >              "    List all 'hvm64' tests\n"
> >+             "\n"
> >+             "  Exit code for this script:\n"
> >+             "    0: everythin is ok\n"
>
> everything

Fixed.

Wei.

>
> ~Andrew
>
> >+             "    1,2: reserved for python interpreter\n"
> >+             "    3: test(s) are skipped\n"
> >+             "    4: test(s) report error\n"
> >+             "    5: test(s) report failure\n"
> >              ),
> >          )

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
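
For reference, the severity logic discussed above can be sketched as follows.
The real definitions of all_results and exit_code() in xtf-runner are not
quoted in this thread, so the tuple, the mapping and the
most_severe_exit_code() helper below are illustrative assumptions only, with
the exit code values taken from the help text in the second hunk:

    # Assumed ordering, least to most severe (not quoted from xtf-runner):
    all_results = ('SUCCESS', 'SKIP', 'ERROR', 'FAILURE')

    def exit_code(result):
        """Map a result string to the runner's exit code; 1 and 2 are left
        to the Python interpreter itself (values from the help text)."""
        return {'SUCCESS': 0, 'SKIP': 3, 'ERROR': 4, 'FAILURE': 5}[result]

    def most_severe_exit_code(results):
        """Hypothetical helper: the exit code for the worst result seen.
        The loop in run_tests() computes the same thing incrementally by
        keeping the largest all_results index encountered so far."""
        worst = max(all_results.index(res) for res in results)
        return exit_code(all_results[worst])

    # SKIP (index 1) already displaces SUCCESS (index 0), which is why the
    # extra "if res == 'SKIP'" hunk is redundant:
    #   most_severe_exit_code(['SUCCESS', 'SKIP', 'SUCCESS'])  == 3
    #   most_severe_exit_code(['SUCCESS', 'FAILURE', 'SKIP'])  == 5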