
[Xen-devel] [xen-unstable bisection] complete test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm



branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c5b9805bc1f793177779ae342c65fcc201a15a47
  Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106107/


  commit c5b9805bc1f793177779ae342c65fcc201a15a47
  Author: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
  Date:   Wed Feb 22 14:38:06 2017 +0100
  
      efi: create new early memory allocator
      
      There is a problem with place_string(), which is used as an early
      memory allocator. It takes memory chunks starting from the start
      symbol and goes down. Sadly, this does not work when Xen is loaded
      using the multiboot2 protocol, because then start lives at the
      1 MiB address and we must not allocate memory below it. So, I tried
      to use the mem_lower address calculated by GRUB2. However, that
      works only on some machines. There are machines in the wild (e.g.
      Dell PowerEdge R820) which use the first ~640 KiB for boot services
      code or data... :-((( Hence, we need a new memory allocator for the
      Xen EFI boot code which is quite simple and generic and can be used
      by place_string() and efi_arch_allocate_mmap_buffer(). I considered
      the following solutions:
      
      1) We could use the native EFI allocation functions (e.g.
         AllocatePool() or AllocatePages()) to get a memory chunk.
         However, later (somewhere in __start_xen()) we would have to
         copy its contents to a safe place, or reserve it in the e820
         memory map and map it into the Xen virtual address space. This
         means that the code referring to the Xen command line, loaded
         modules and the EFI memory map, mostly in __start_xen(), would
         be further complicated and diverge from the legacy BIOS case.
         Additionally, both of the former items have to be placed below
         4 GiB because their addresses are stored in the multiboot_info_t
         structure, which has 32-bit members for them.
      
      2) We could statically allocate a memory area somewhere in the Xen
         image to serve as a pool for early dynamic allocations. This
         looks quite simple. Additionally, it would not depend on EFI at
         all and could be used on legacy BIOS platforms if needed.
         However, we must choose the size of this pool carefully. We do
         not want to increase the Xen binary size or waste too much
         memory, but we must fit at least the memory map on x86 EFI
         platforms. Even on a small machine, e.g. an IBM System x3550 M2
         with 8 GiB RAM, the memory map may contain more than 200
         entries. Every entry on an x86-64 platform is 40 bytes in size,
         so we need more than 8 KiB for the EFI memory map alone.
         Additionally, if we use this memory pool to store the Xen and
         module command lines (needed when xen.efi is executed as an EFI
         application), then we should add, I think, about 1 KiB. In that
         case, to be on the safe side, we should assume a pool of at
         least 64 KiB for early memory allocations, which is about 4
         times our earlier estimate. However, during discussion on
         xen-devel, Jan Beulich suggested that, just in case, we should
         use a 1 MiB memory pool, as in the original place_string()
         implementation. So, let's use 1 MiB as proposed. If we decide
         not to waste unallocated pool memory on the running system, we
         can mark this region as __initdata and move all required data
         to dynamically allocated places somewhere in __start_xen().
      
      2a) We could put the memory pool into the .bss.page_aligned section
          and allocate memory chunks starting from the lowest address.
          After the init phase we can free the unused portion of the
          pool, as is done for the .init.text and .init.data sections.
          This way we do not need to allocate any space in the image
          file, and freeing the unused area of the pool is very simple.
      
      Solution #2a is implemented here because it is quite simple and
      requires a limited number of changes, especially in __start_xen().
      
      The new allocator is quite generic and can be used on ARM platforms
      too, though it is not yet enabled on ARM due to some missing
      prerequisites; a list of them is placed before the ebmalloc code.
      
      Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
      Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
      Acked-by: Julien Grall <julien.grall@xxxxxxx>
      Reviewed-by: Doug Goldstein <cardoe@xxxxxxxxxx>
      Tested-by: Doug Goldstein <cardoe@xxxxxxxxxx>
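For readers skimming the commit message, the #2a scheme boils down to a bump
allocator over a static, page-aligned pool. The following is a minimal
illustrative sketch, not the actual Xen ebmalloc code; the names, the 1 MiB
pool size, and the power-of-two alignment assumption are mine:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the #2a scheme: a static, page-aligned pool
 * consumed from the lowest address upward. 1 MiB matches the size
 * discussed above; the real Xen code places the pool in the
 * .bss.page_aligned section so it occupies no space in the image file. */
#define EBMALLOC_SIZE (1024 * 1024)

static uint8_t ebmalloc_mem[EBMALLOC_SIZE] __attribute__((aligned(4096)));
static size_t ebmalloc_allocated;

/* Bump-allocate 'size' bytes at an 'align'-byte boundary (align must be
 * a power of two). Returns NULL when the pool is exhausted. There is no
 * free(): after the init phase, the unused tail of the pool is released
 * wholesale, like the .init.text/.init.data sections. */
static void *ebmalloc(size_t size, size_t align)
{
    size_t start = (ebmalloc_allocated + align - 1) & ~(align - 1);

    /* Check for pool exhaustion, including size_t wraparound. */
    if ( start + size > EBMALLOC_SIZE || start + size < start )
        return NULL;

    ebmalloc_allocated = start + size;
    return ebmalloc_mem + start;
}
```

Because the pool starts from its lowest address rather than growing down from
a symbol, the multiboot2 issue described above (allocations straying below
1 MiB) cannot arise.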


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot
 --summary-out=tmp/106107.bisection-summary --basis-template=105933 
--blessings=real,real-bisect xen-unstable 
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm xen-boot
Searching for failure / basis pass:
 106081 fail [host=elbling0] / 105966 [host=merlot0] 105946 [host=pinot1] 
105933 [host=italia0] 105919 [host=italia1] 105900 [host=baroque1] 105896 
[host=rimava1] 105873 [host=huxelrebe0] 105861 [host=rimava0] 105840 
[host=chardonnay0] 105821 [host=huxelrebe1] 105804 [host=elbling1] 105790 
[host=fiano0] 105784 [host=pinot0] 105766 [host=fiano1] 105756 [host=nocera1] 
105742 [host=merlot1] 105728 [host=chardonnay1] 105707 [host=nobling1] 105669 
[host=baroque0] 105659 [host=nocera0] 105640 ok.
Failure / basis pass flights: 106081 / 105640
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8b4834ee1202852ed83a9fc61268c65fb6961ea7 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
cf5e1a74b9687be3d146e59ab10c26be6da9d0d4
Basis pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
5cd2e1739763915e6b4c247eef71f948dc808bd5 
93a3fbaf16f4b66c7866f42c2699c7af636f2933
Generating revisions with ./adhoc-revtuple-generator  
git://xenbits.xen.org/linux-pvops.git#b65f2f457c49b2cfd7967c34b7a0b04c25587f13-b65f2f457c49b2cfd7967c34b7a0b04c25587f13
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#b669e922b37b8957248798a5eb7aa96a666cd3fe-8b4834ee1202852ed83a9fc61268c65fb6961ea7
 
git://xenbits.xen.org/qemu-xen.git#5cd2e1739763915e6b4c247eef71f948dc808bd5-57e8fbb2f702001a18bd81e9fe31b26d94247ac9
 
git://xenbits.xen.org/xen.git#93a3fbaf16f4b66c7866f42c2699c7af636f2933-cf5e1a74b9687be3d146e59ab10c26be6da9d0d4
From git://cache:9419/git://xenbits.xen.org/qemu-xen
   796b288..63f495b  upstream-tested -> origin/upstream-tested
Loaded 7004 nodes in revision graph
Searching for test results:
 105640 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
5cd2e1739763915e6b4c247eef71f948dc808bd5 
93a3fbaf16f4b66c7866f42c2699c7af636f2933
 105659 [host=nocera0]
 105669 [host=baroque0]
 105707 [host=nobling1]
 105728 [host=chardonnay1]
 105790 [host=fiano0]
 105756 [host=nocera1]
 105742 [host=merlot1]
 105784 [host=pinot0]
 105766 [host=fiano1]
 105804 [host=elbling1]
 105821 [host=huxelrebe1]
 105840 [host=chardonnay0]
 105896 [host=rimava1]
 105919 [host=italia1]
 105861 [host=rimava0]
 105873 [host=huxelrebe0]
 105900 [host=baroque1]
 105933 [host=italia0]
 105946 [host=pinot1]
 105966 [host=merlot0]
 105994 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
80a7d04f532ddc3500acd7988917708a536ae15f
 106081 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8b4834ee1202852ed83a9fc61268c65fb6961ea7 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
cf5e1a74b9687be3d146e59ab10c26be6da9d0d4
 106100 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
c5b9805bc1f793177779ae342c65fcc201a15a47
 106103 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
8b4834ee1202852ed83a9fc61268c65fb6961ea7 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
cf5e1a74b9687be3d146e59ab10c26be6da9d0d4
 106082 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
5cd2e1739763915e6b4c247eef71f948dc808bd5 
93a3fbaf16f4b66c7866f42c2699c7af636f2933
 106104 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
b199c44afa3a0d18d0e968e78a590eb9e69e20ad
 106085 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
80a7d04f532ddc3500acd7988917708a536ae15f
 106088 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
728e90b41d46c1c1c210ac496204efd51936db75 
d0d0bc486c46fbf11b5e79d8868d32ce14eec2a7
 106091 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
e88462aaa2f19e1238e77c1bcebbab7ef5380d7a 
fe416bf9957669e34e93a614970546b3a002f0e8
 106107 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
c5b9805bc1f793177779ae342c65fcc201a15a47
 106092 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
08c008de9c7d3ac71f71c87cc04a47819ca228dc 
2f1add6e1c8789d979daaafa3d80ddc1bc375783
 106093 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
435ae6afed876e47a8a6b12364ff1ec7a180b24f
 106096 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
b199c44afa3a0d18d0e968e78a590eb9e69e20ad
 106097 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
9180f53655245328f06c5051d3298376cb5771b1
 106098 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
c5b9805bc1f793177779ae342c65fcc201a15a47
 106099 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
b199c44afa3a0d18d0e968e78a590eb9e69e20ad
Searching for interesting versions
 Result found: flight 105640 (pass), for basis pass
 Result found: flight 106081 (fail), for basis failure
 Repro found: flight 106082 (pass), for basis pass
 Repro found: flight 106103 (fail), for basis failure
 0 revisions at b65f2f457c49b2cfd7967c34b7a0b04c25587f13 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
b669e922b37b8957248798a5eb7aa96a666cd3fe 
57e8fbb2f702001a18bd81e9fe31b26d94247ac9 
b199c44afa3a0d18d0e968e78a590eb9e69e20ad
No revisions left to test, checking graph state.
 Result found: flight 106096 (pass), for last pass
 Result found: flight 106098 (fail), for first failure
 Repro found: flight 106099 (pass), for last pass
 Repro found: flight 106100 (fail), for first failure
 Repro found: flight 106104 (pass), for last pass
 Repro found: flight 106107 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c5b9805bc1f793177779ae342c65fcc201a15a47
  Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106107/



pnmtopng: 209 colors found
Revision graph left in 
/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
106107: tolerable ALL FAIL

flight 106107 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/106107/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-boot fail baseline untested


jobs:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

