[Xen-devel] [xen-4.10-testing bisection] complete build-arm64



branch xen-4.10-testing
xenbranch xen-4.10-testing
job build-arm64
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d86c9aeae6cb753e931e00f7ee020d73df9070c0
  Bug not present: 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/128610/


  commit d86c9aeae6cb753e931e00f7ee020d73df9070c0
  Author: Dario Faggioli <dfaggioli@xxxxxxxx>
  Date:   Mon Oct 8 14:39:46 2018 +0200
  
      xen: sched/Credit2: fix bug when moving CPUs between two Credit2 cpupools
      
      Whether or not a CPU is assigned to a runqueue (and, if so, to which
      one) within a Credit2 scheduler instance must be tracked as both
      per-cpu and per-scheduler-instance information.
      
      In fact, when we move a CPU between cpupools, we first set up its
      per-cpu data in the new pool, and only then clean up its per-cpu
      data from the old pool. In Credit2, where there currently is no
      per-scheduler per-cpu data (the cpu-to-runqueue map is stored on a
      per-cpu basis only), this means that the cleanup of the old per-cpu
      data can mess with the new per-cpu data, leading to crashes like
      these:
      
      https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23306.html
      https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23350.html
      
      Basically, when csched2_deinit_pdata() is called for CPU 13, to
      fully remove the CPU from Pool-0, per_cpu(13,runq_map) already
      contains the id of the runqueue to which the CPU has been assigned
      in the scheduler of Pool-1, which means wrong runqueue manipulations
      happen in Pool-0's scheduler. Furthermore, at the end of that call,
      that same runq_map is updated with -1, which is what causes the
      BUG_ON in csched2_schedule(), on CPU 13, to trigger.
      
      So, instead of reverting a2c4e5ab59d "xen: credit2: make the cpu to
      runqueue map per-cpu" (as we don't want to go back to having the
      huge array in struct csched2_private), add a per-cpu,
      scheduler-specific data structure, as, for instance, Credit1
      already has. That (for now) only contains one field: the id of the
      runqueue the CPU is assigned to.
      
      Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
      Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
      Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      master commit: 6e395f477fb854f11de83a951a070d3aacb6dc59
      master date: 2018-09-18 16:50:44 +0100
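
To make the failure mode above concrete, here is a minimal,
self-contained C sketch. This is ordinary userspace C, not Xen code:
the names pool0/pool1, struct pcpu_data, init_pdata() and
deinit_pdata() are illustrative stand-ins for the scheduler's pdata
hooks. It contrasts a single global per-cpu map, which the old pool's
cleanup clobbers, with per-scheduler per-cpu data, which survives the
move:

    /* Illustrative sketch only; all names are hypothetical, not Xen's. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NR_CPUS 16

    /* Buggy scheme: one global map shared by every scheduler instance. */
    static int global_runq_map[NR_CPUS];

    /* Fixed scheme: each scheduler instance owns its per-cpu data. */
    struct pcpu_data { int runq_id; };
    struct scheduler { struct pcpu_data *pcpu[NR_CPUS]; };

    static void init_pdata(struct scheduler *s, int cpu, int runq)
    {
        s->pcpu[cpu] = malloc(sizeof(*s->pcpu[cpu]));
        s->pcpu[cpu]->runq_id = runq;   /* private to this instance */
        global_runq_map[cpu] = runq;    /* shared: stomps on other pools */
    }

    static void deinit_pdata(struct scheduler *s, int cpu)
    {
        free(s->pcpu[cpu]);
        s->pcpu[cpu] = NULL;
        global_runq_map[cpu] = -1;      /* shared: wipes the new pool's map */
    }

    int main(void)
    {
        struct scheduler pool0 = { 0 }, pool1 = { 0 };
        int cpu = 13;

        init_pdata(&pool0, cpu, 0);     /* CPU 13 starts in Pool-0, runq 0 */

        /* Moving a CPU: the new pool is set up first, the old pool is
         * cleaned up afterwards, exactly as described above. */
        init_pdata(&pool1, cpu, 2);     /* Pool-1 assigns runq 2 */
        deinit_pdata(&pool0, cpu);      /* Pool-0 cleanup runs last */

        /* The global map now says -1 even though Pool-1 still runs the
         * CPU; this mirrors the runq_map = -1 update that made the
         * BUG_ON in csched2_schedule() trigger. */
        printf("global map: %d (clobbered)\n", global_runq_map[cpu]);
        printf("per-pool map: %d (still correct)\n", pool1.pcpu[cpu]->runq_id);
        return 0;
    }

Run as-is, the global map prints -1 while the per-pool map still
reports the runqueue Pool-1 assigned, which is why moving the map into
per-scheduler per-cpu data fixes the crash.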


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.10-testing/build-arm64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.10-testing/build-arm64.xen-build --summary-out=tmp/128610.bisection-summary --basis-template=128108 --blessings=real,real-bisect xen-4.10-testing build-arm64 xen-build
Searching for failure / basis pass:
 128524 fail [host=laxton1] / 128108 ok.
Failure / basis pass flights: 128524 / 128108
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 788948bebcecca69bfac47e5514f2dc351dabad9
Basis pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 0c1d5b68e27da167a51c2ea828636c14ff5c017b
Generating revisions with ./adhoc-revtuple-generator git://xenbits.xen.org/qemu-xen.git#6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2-6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 git://xenbits.xen.org/xen.git#0c1d5b68e27da167a51c2ea828636c14ff5c017b-788948bebcecca69bfac47e5514f2dc351dabad9
Loaded 1001 nodes in revision graph
Searching for test results:
 128055 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 0c1d5b68e27da167a51c2ea828636c14ff5c017b
 128108 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 0c1d5b68e27da167a51c2ea828636c14ff5c017b
 128505 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 61dc0159b69bd3eec109188386c8b13fbdfed7b2
 128524 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 788948bebcecca69bfac47e5514f2dc351dabad9
 128597 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
 128604 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 d86c9aeae6cb753e931e00f7ee020d73df9070c0
 128605 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
 128606 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 788948bebcecca69bfac47e5514f2dc351dabad9
 128589 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 0c1d5b68e27da167a51c2ea828636c14ff5c017b
 128608 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 d86c9aeae6cb753e931e00f7ee020d73df9070c0
 128609 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
 128590 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 61dc0159b69bd3eec109188386c8b13fbdfed7b2
 128595 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 d091a49f89e979ca4ca7dc583c1f8ef7d1312a48
 128596 pass 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 54838353189600af183ef09829276162f4b5e7f9
 128610 fail 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 d86c9aeae6cb753e931e00f7ee020d73df9070c0
Searching for interesting versions
 Result found: flight 128055 (pass), for basis pass
 Result found: flight 128524 (fail), for basis failure
 Repro found: flight 128589 (pass), for basis pass
 Repro found: flight 128606 (fail), for basis failure
 0 revisions at 6ea4cef2bd717045ac0e84b52a5b1b7716feb0c2 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
No revisions left to test, checking graph state.
 Result found: flight 128597 (pass), for last pass
 Result found: flight 128604 (fail), for first failure
 Repro found: flight 128605 (pass), for last pass
 Repro found: flight 128608 (fail), for first failure
 Repro found: flight 128609 (pass), for last pass
 Repro found: flight 128610 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  d86c9aeae6cb753e931e00f7ee020d73df9070c0
  Bug not present: 45197905fc5c2151960dfe6f039a5a2e14f0b4aa
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/128610/


  commit d86c9aeae6cb753e931e00f7ee020d73df9070c0
  Author: Dario Faggioli <dfaggioli@xxxxxxxx>
  Date:   Mon Oct 8 14:39:46 2018 +0200
  
      xen: sched/Credit2: fix bug when moving CPUs between two Credit2 cpupools
      
      Whether or not a CPU is assigned to a runqueue (and, if so, to which
      one) within a Credit2 scheduler instance must be tracked as both
      per-cpu and per-scheduler-instance information.
      
      In fact, when we move a CPU between cpupools, we first set up its
      per-cpu data in the new pool, and only then clean up its per-cpu
      data from the old pool. In Credit2, where there currently is no
      per-scheduler per-cpu data (the cpu-to-runqueue map is stored on a
      per-cpu basis only), this means that the cleanup of the old per-cpu
      data can mess with the new per-cpu data, leading to crashes like
      these:
      
      https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23306.html
      https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg23350.html
      
      Basically, when csched2_deinit_pdata() is called for CPU 13, to
      fully remove the CPU from Pool-0, per_cpu(13,runq_map) already
      contains the id of the runqueue to which the CPU has been assigned
      in the scheduler of Pool-1, which means wrong runqueue manipulations
      happen in Pool-0's scheduler. Furthermore, at the end of that call,
      that same runq_map is updated with -1, which is what causes the
      BUG_ON in csched2_schedule(), on CPU 13, to trigger.
      
      So, instead of reverting a2c4e5ab59d "xen: credit2: make the cpu to
      runqueue map per-cpu" (as we don't want to go back to having the
      huge array in struct csched2_private), add a per-cpu,
      scheduler-specific data structure, as, for instance, Credit1
      already has. That (for now) only contains one field: the id of the
      runqueue the CPU is assigned to.
      
      Signed-off-by: Dario Faggioli <dfaggioli@xxxxxxxx>
      Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
      Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      master commit: 6e395f477fb854f11de83a951a070d3aacb6dc59
      master date: 2018-09-18 16:50:44 +0100

Revision graph left in /home/logs/results/bisect/xen-4.10-testing/build-arm64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
128610: tolerable ALL FAIL

flight 128610 xen-4.10-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/128610/

Failures :-/ but no regressions.

Tests which did not succeed, including tests which could not be run:
 build-arm64                   6 xen-build               fail baseline untested


jobs:
 build-arm64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel