
[Xen-devel] [linux-4.1 bisection] complete test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm



branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  c5ad33184354260be6d05de57e46a5498692f6d6
  Bug not present: c5bcec6cbcbf520f088dc7939934bbf10c20c5a5
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/97670/


  commit c5ad33184354260be6d05de57e46a5498692f6d6
  Author: Lukasz Odzioba <lukasz.odzioba@xxxxxxxxx>
  Date:   Fri Jun 24 14:50:01 2016 -0700
  
      mm/swap.c: flush lru pvecs on compound page arrival
      
      [ Upstream commit 8f182270dfec432e93fae14f9208a6b9af01009f ]
      
      Currently we can have compound pages held on per-cpu pagevecs, which
      leaves a lot of memory unavailable for reclaim when needed.  On
      systems with hundreds of processors this can amount to GBs of memory.
      
      One of the ways to reproduce the problem is not to call munmap
      explicitly on all mapped regions (e.g. after receiving SIGTERM).  After
      that, some pages (and, with THP enabled, also huge pages) may end up on
      lru_add_pvec; an example is below.
      
        #include <string.h>
        #include <sys/mman.h>

        // requires OpenMP: build with e.g. cc -fopenmp
        int main(void)
        {
        #pragma omp parallel
                {
                        size_t size = 55 * 1000 * 1000; // smaller than MEM/CPUS
                        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                        if (p != MAP_FAILED)
                                memset(p, 0, size);
                        // munmap(p, size); // uncomment to make the problem go away
                }
                return 0;
        }
      
      When we run it with THP enabled, it will leave a significant amount of
      memory on lru_add_pvec.  This memory will not be reclaimed if we hit
      OOM, so when we run the above program in a loop:
      
        for i in `seq 100`; do ./a.out; done
      
      many processes (95% in my case) will be killed by OOM.
      
      The primary point of the LRU add cache is to reduce zone lru_lock
      contention, in the hope that more pages will belong to the same zone and
      so their addition can be batched.  A huge page is already a form of
      batched addition (it adds 512 pages' worth of memory in one go), so
      skipping the batching seems like a safer option than a potential
      excess in the caching, which can be quite large and much harder to fix
      because lru_add_drain_all is far too expensive and it is not really
      clear what a good moment to call it would be.
      
      Similarly, we can reproduce the problem on lru_deactivate_pvec by adding
      madvise(p, size, MADV_FREE); after the memset.
      
      This patch flushes lru pvecs on compound page arrival, making the problem
      less severe - after applying it, the kill rate of the above example drops
      to 0%, because the maximum amount of memory held on a pvec drops from
      28MB (with THP) to 56kB per CPU.
      
      Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
      Link: 
http://lkml.kernel.org/r/1466180198-18854-1-git-send-email-lukasz.odzioba@xxxxxxxxx
      Signed-off-by: Lukasz Odzioba <lukasz.odzioba@xxxxxxxxx>
      Acked-by: Michal Hocko <mhocko@xxxxxxxx>
      Cc: Kirill Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
      Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
      Cc: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
      Cc: Ming Li <mingli199x@xxxxxx>
      Cc: Minchan Kim <minchan@xxxxxxxxxx>
      Cc: <stable@xxxxxxxxxxxxxxx>
      Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
      Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
      Signed-off-by: Sasha Levin <sasha.levin@xxxxxxxxxx>
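
In practice, the flush described above amounts to draining the per-cpu
lru_add_pvec as soon as a compound page lands on it.  A minimal sketch of
the change in mm/swap.c, assuming the 4.1-era shape of __lru_cache_add()
(the authoritative diff is upstream commit 8f182270dfec):

    /* Sketch only -- see upstream 8f182270dfec for the real change. */
    static void __lru_cache_add(struct page *page)
    {
            struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

            get_page(page);
            /*
             * Drain when the pagevec is full OR the page is compound, so
             * huge pages never sit on the per-cpu cache where reclaim
             * cannot see them.
             */
            if (!pagevec_add(pvec, page) || PageCompound(page))
                    __pagevec_lru_add(pvec);
            put_cpu_var(lru_add_pvec);
    }

With this, a huge page reaches the LRU immediately instead of waiting for
the pagevec to fill; the zone lru_lock is then taken once per huge page,
which is already a 512-page batch, so little batching is lost.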


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-4.1/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-4.1/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install
 --summary-out=tmp/97670.bisection-summary --basis-template=96211 
--blessings=real,real-bisect linux-4.1 
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 97613 fail [host=elbling0] / 96211 [host=fiano1] 96183 [host=chardonnay1] 
96160 [host=italia0] 95848 [host=nocera1] 95818 [host=pinot1] 95591 
[host=fiano0] 95517 [host=chardonnay0] 95455 [host=pinot0] 95408 
[host=huxelrebe0] 94729 [host=chardonnay0] 94034 [host=huxelrebe1] 93220 
[host=chardonnay1] 93111 [host=rimava1] 92143 [host=merlot1] 91350 
[host=chardonnay0] 91189 [host=huxelrebe0] 91008 ok.
Failure / basis pass flights: 97613 / 91008
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 5880876e94699ce010554f483ccf0009997955ca 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
b48be35ac86cd6369124cf06ca3006d086095297
Basis pass 206f91a12c5f69c9b4dfd4e0029043794a046933 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
21f6526d1da331611ac5fe12967549d1a04e149b 
316a862e5534249a6e6d876b4e203342d3fb870e 
a6f2cdb633bf519244a16674031b8034b581ba7f
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git#206f91a12c5f69c9b4dfd4e0029043794a046933-5880876e94699ce010554f483ccf0009997955ca
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#21f6526d1da331611ac5fe12967549d1a04e149b-6e20809727261599e8527c456eb078c0e89139a1
 
git://xenbits.xen.org/qemu-xen.git#316a862e5534249a6e6d876b4e203342d3fb870e-44a072f0de0d57c95c2212bbce02888832b7b74f
 
git://xenbits.xen.org/xen.git#a6f2cdb633bf519244a16674031b8034b581ba7f-b48be35ac86cd6369124cf06ca3006d086095297
Loaded 10934 nodes in revision graph
Searching for test results:
 88639 [host=rimava0]
 88721 [host=pinot0]
 89248 [host=baroque1]
 89382 [host=elbling1]
 90845 [host=nocera1]
 91008 pass 206f91a12c5f69c9b4dfd4e0029043794a046933 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
21f6526d1da331611ac5fe12967549d1a04e149b 
316a862e5534249a6e6d876b4e203342d3fb870e 
a6f2cdb633bf519244a16674031b8034b581ba7f
 91189 [host=huxelrebe0]
 91350 [host=chardonnay0]
 92143 [host=merlot1]
 93111 [host=rimava1]
 93220 [host=chardonnay1]
 94034 [host=huxelrebe1]
 94729 [host=chardonnay0]
 95408 [host=huxelrebe0]
 95455 [host=pinot0]
 95517 [host=chardonnay0]
 95591 [host=fiano0]
 95848 [host=nocera1]
 95818 [host=pinot1]
 96211 [host=fiano1]
 96160 [host=italia0]
 96183 [host=chardonnay1]
 97279 fail irrelevant
 97434 fail irrelevant
 97394 fail irrelevant
 97496 fail 5880876e94699ce010554f483ccf0009997955ca 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
b48be35ac86cd6369124cf06ca3006d086095297
 97558 fail 5880876e94699ce010554f483ccf0009997955ca 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
b48be35ac86cd6369124cf06ca3006d086095297
 97655 fail c5ad33184354260be6d05de57e46a5498692f6d6 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97620 fail eae5f796a5de5ebc33e745126ce232f534fd0d0e 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97601 pass 206f91a12c5f69c9b4dfd4e0029043794a046933 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
21f6526d1da331611ac5fe12967549d1a04e149b 
316a862e5534249a6e6d876b4e203342d3fb870e 
a6f2cdb633bf519244a16674031b8034b581ba7f
 97642 pass eba391c749fe8a47aea9de2e78fadc02434b5417 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97626 pass 95123c0b81d9478b8155fe15093b88f57ef7d0bd 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
36c659348837dd411ad6687a76825dd30dd8a419
 97605 fail 5880876e94699ce010554f483ccf0009997955ca 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
b48be35ac86cd6369124cf06ca3006d086095297
 97630 pass 260c505e55b51645affb70a2c456b350f7e7460a 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97609 pass 54419e3efcd6677e4b0841666e2fc605d2e5df86 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
21f6526d1da331611ac5fe12967549d1a04e149b 
ae69b059498e8a563c6d64c4aa4cb95e53d76680 
75529048f4e81edf4b6af54418976f93a9b90e02
 97616 pass 888172862fa78505c4e4674c205a06586443d83f 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
44e6ba4b3376f78315cd447dc88813ba60a83b32
 97647 pass c5bcec6cbcbf520f088dc7939934bbf10c20c5a5 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97631 pass 32dc059d132c7fb4f45a7aeab70e08d2a47ed90d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97658 pass c5bcec6cbcbf520f088dc7939934bbf10c20c5a5 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97636 fail cc6fd729b8a04fbb4b88e45209c1241dd89a3fbe 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97666 pass c5bcec6cbcbf520f088dc7939934bbf10c20c5a5 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97613 fail 5880876e94699ce010554f483ccf0009997955ca 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
b48be35ac86cd6369124cf06ca3006d086095297
 97650 fail 683854270f84daa09baffe2b21d64ec88c614fa9 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97670 fail c5ad33184354260be6d05de57e46a5498692f6d6 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
 97663 fail c5ad33184354260be6d05de57e46a5498692f6d6 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
Searching for interesting versions
 Result found: flight 91008 (pass), for basis pass
 Result found: flight 97496 (fail), for basis failure
 Repro found: flight 97601 (pass), for basis pass
 Repro found: flight 97605 (fail), for basis failure
 0 revisions at c5bcec6cbcbf520f088dc7939934bbf10c20c5a5 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
6e20809727261599e8527c456eb078c0e89139a1 
44a072f0de0d57c95c2212bbce02888832b7b74f 
7da483b0236d8974cc97f81780dcf8e559a63175
No revisions left to test, checking graph state.
 Result found: flight 97647 (pass), for last pass
 Result found: flight 97655 (fail), for first failure
 Repro found: flight 97658 (pass), for last pass
 Repro found: flight 97663 (fail), for first failure
 Repro found: flight 97666 (pass), for last pass
 Repro found: flight 97670 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  Bug introduced:  c5ad33184354260be6d05de57e46a5498692f6d6
  Bug not present: c5bcec6cbcbf520f088dc7939934bbf10c20c5a5
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/97670/


  commit c5ad33184354260be6d05de57e46a5498692f6d6
  Author: Lukasz Odzioba <lukasz.odzioba@xxxxxxxxx>
  Date:   Fri Jun 24 14:50:01 2016 -0700
  
      mm/swap.c: flush lru pvecs on compound page arrival
      
      [ Upstream commit 8f182270dfec432e93fae14f9208a6b9af01009f ]
      
      Currently we can have compound pages held on per-cpu pagevecs, which
      leaves a lot of memory unavailable for reclaim when needed.  On
      systems with hundreds of processors this can amount to GBs of memory.
      
      One of the ways to reproduce the problem is not to call munmap
      explicitly on all mapped regions (e.g. after receiving SIGTERM).  After
      that, some pages (and, with THP enabled, also huge pages) may end up on
      lru_add_pvec; an example is below.
      
        #include <string.h>
        #include <sys/mman.h>

        // requires OpenMP: build with e.g. cc -fopenmp
        int main(void)
        {
        #pragma omp parallel
                {
                        size_t size = 55 * 1000 * 1000; // smaller than MEM/CPUS
                        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                        if (p != MAP_FAILED)
                                memset(p, 0, size);
                        // munmap(p, size); // uncomment to make the problem go away
                }
                return 0;
        }
      
      When we run it with THP enabled, it will leave a significant amount of
      memory on lru_add_pvec.  This memory will not be reclaimed if we hit
      OOM, so when we run the above program in a loop:
      
        for i in `seq 100`; do ./a.out; done
      
      many processes (95% in my case) will be killed by OOM.
      
      The primary point of the LRU add cache is to reduce zone lru_lock
      contention, in the hope that more pages will belong to the same zone and
      so their addition can be batched.  A huge page is already a form of
      batched addition (it adds 512 pages' worth of memory in one go), so
      skipping the batching seems like a safer option than a potential
      excess in the caching, which can be quite large and much harder to fix
      because lru_add_drain_all is far too expensive and it is not really
      clear what a good moment to call it would be.
      
      Similarly, we can reproduce the problem on lru_deactivate_pvec by adding
      madvise(p, size, MADV_FREE); after the memset.
      
      This patch flushes lru pvecs on compound page arrival, making the problem
      less severe - after applying it, the kill rate of the above example drops
      to 0%, because the maximum amount of memory held on a pvec drops from
      28MB (with THP) to 56kB per CPU.
      
      Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
      Link: 
http://lkml.kernel.org/r/1466180198-18854-1-git-send-email-lukasz.odzioba@xxxxxxxxx
      Signed-off-by: Lukasz Odzioba <lukasz.odzioba@xxxxxxxxx>
      Acked-by: Michal Hocko <mhocko@xxxxxxxx>
      Cc: Kirill Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
      Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
      Cc: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
      Cc: Ming Li <mingli199x@xxxxxx>
      Cc: Minchan Kim <minchan@xxxxxxxxxx>
      Cc: <stable@xxxxxxxxxxxxxxx>
      Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
      Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
      Signed-off-by: Sasha Levin <sasha.levin@xxxxxxxxxx>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.344394 to fit
pnmtopng: 42 colors found
Revision graph left in 
/home/logs/results/bisect/linux-4.1/test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
97670: tolerable ALL FAIL

flight 97670 linux-4.1 real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/97670/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail 
baseline untested


jobs:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

