[Xen-devel] [xen-4.10-testing test] 117613: regressions - trouble: broken/fail/pass
flight 117613 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/117613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5 <job status> broken
 test-amd64-amd64-xl-qemut-win10-i386 <job status> broken
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm <job status> broken
 test-amd64-amd64-xl-qemut-win10-i386 4 host-install(4) broken REGR. vs. 117130
 test-xtf-amd64-amd64-5 4 host-install(4) broken REGR. vs. 117130
 build-amd64-pvops <job status> broken in 117549
 test-amd64-i386-qemut-rhel6hvm-intel <job status> broken in 117549
 build-amd64-xtf <job status> broken in 117549
 build-amd64-xtf 4 host-install(4) broken in 117549 REGR. vs. 117130
 build-amd64-pvops 4 host-install(4) broken in 117549 REGR. vs. 117130
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail REGR. vs. 117130

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel 4 host-install(4) broken in 117549 pass in 117613
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm 4 host-install(4) broken pass in 117549

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-5 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-qemuu-nested-intel 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-rtds 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-pair 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-multivcpu 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-amd64-pvgrub 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-libvirt 1 build-check(1) blocked in 117549 n/a
 test-xtf-amd64-amd64-2 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-pvhv2-intel 1 build-check(1) blocked in 117549 n/a
 test-xtf-amd64-amd64-3 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-libvirt-pair 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-i386-pvgrub 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-migrupgrade 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-rumprun-amd64 1 build-check(1) blocked in 117549 n/a
 test-xtf-amd64-amd64-1 1 build-check(1) blocked in 117549 n/a
 test-xtf-amd64-amd64-4 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-libvirt-vhd 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-pygrub 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-qemuu-nested-amd 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-libvirt-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemut-win10-i386 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-pvhv2-amd 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-credit2 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-qemuu-win10-i386 1 build-check(1) blocked in 117549 n/a
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail never pass
 test-arm64-arm64-xl 14 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen                  9dc5eda576bafca47abc7202f075f28d6250bf4d
baseline version:
 xen                  44ce23c0d811c08bb559c46a171b234c3ff714a2

Last test of basis   117130  2017-12-14 07:54:15 Z  21 days
Testing same since   117522  2018-01-02 16:48:28 Z   2 days   3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Daniel Kiper <daniel.kiper@xxxxxxxxxx>
  Ingo Molnar <mingo@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Kevin Tian <kevin.tian@xxxxxxxxx>
  Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
  Stefano Stabellini <sstabellini@xxxxxxxxxx>
  Thomas Gleixner <tglx@xxxxxxxxxxxxx>
  Tom Lendacky <thomas.lendacky@xxxxxxx>

jobs:
 build-amd64-xsm                                        pass
 build-arm64-xsm                                        pass
 build-armhf-xsm                                        pass
 build-i386-xsm                                         pass
 build-amd64-xtf                                        pass
 build-amd64                                            pass
 build-arm64                                            pass
 build-armhf                                            pass
 build-i386                                             pass
 build-amd64-libvirt                                    pass
 build-arm64-libvirt                                    pass
 build-armhf-libvirt                                    pass
 build-i386-libvirt                                     pass
 build-amd64-prev                                       pass
 build-i386-prev                                        pass
 build-amd64-pvops                                      pass
 build-arm64-pvops                                      pass
 build-armhf-pvops                                      pass
 build-i386-pvops                                       pass
 build-amd64-rumprun                                    pass
 build-i386-rumprun                                     pass
 test-xtf-amd64-amd64-1                                 pass
 test-xtf-amd64-amd64-2                                 pass
 test-xtf-amd64-amd64-3                                 pass
 test-xtf-amd64-amd64-4                                 pass
 test-xtf-amd64-amd64-5                                 broken
 test-amd64-amd64-xl                                    pass
 test-arm64-arm64-xl                                    pass
 test-armhf-armhf-xl                                    pass
 test-amd64-i386-xl                                     pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm          pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm           pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm     pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm      pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm          pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm           broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm  pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm   pass
 test-amd64-amd64-libvirt-xsm                           pass
 test-arm64-arm64-libvirt-xsm                           pass
 test-armhf-armhf-libvirt-xsm                           pass
 test-amd64-i386-libvirt-xsm                            pass
 test-amd64-amd64-xl-xsm                                pass
 test-arm64-arm64-xl-xsm                                pass
 test-armhf-armhf-xl-xsm                                pass
 test-amd64-i386-xl-xsm                                 pass
 test-amd64-amd64-qemuu-nested-amd                      fail
 test-amd64-amd64-xl-pvhv2-amd                          fail
 test-amd64-i386-qemut-rhel6hvm-amd                     pass
 test-amd64-i386-qemuu-rhel6hvm-amd                     pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64              pass
 test-amd64-i386-xl-qemut-debianhvm-amd64               pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64              pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64               pass
 test-amd64-i386-freebsd10-amd64                        pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                   pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                    pass
 test-amd64-amd64-rumprun-amd64                         pass
 test-amd64-amd64-xl-qemut-win7-amd64                   fail
 test-amd64-i386-xl-qemut-win7-amd64                    fail
 test-amd64-amd64-xl-qemuu-win7-amd64                   fail
 test-amd64-i386-xl-qemuu-win7-amd64                    fail
 test-amd64-amd64-xl-qemut-ws16-amd64                   fail
 test-amd64-i386-xl-qemut-ws16-amd64                    fail
 test-amd64-amd64-xl-qemuu-ws16-amd64                   fail
 test-amd64-i386-xl-qemuu-ws16-amd64                    fail
 test-armhf-armhf-xl-arndale                            pass
 test-amd64-amd64-xl-credit2                            pass
 test-arm64-arm64-xl-credit2                            pass
 test-armhf-armhf-xl-credit2                            pass
 test-armhf-armhf-xl-cubietruck                         pass
 test-amd64-i386-freebsd10-i386                         pass
 test-amd64-i386-rumprun-i386                           pass
 test-amd64-amd64-xl-qemut-win10-i386                   broken
 test-amd64-i386-xl-qemut-win10-i386                    fail
 test-amd64-amd64-xl-qemuu-win10-i386                   fail
 test-amd64-i386-xl-qemuu-win10-i386                    fail
 test-amd64-amd64-qemuu-nested-intel                    pass
 test-amd64-amd64-xl-pvhv2-intel                        fail
 test-amd64-i386-qemut-rhel6hvm-intel                   pass
 test-amd64-i386-qemuu-rhel6hvm-intel                   pass
 test-amd64-amd64-libvirt                               pass
 test-armhf-armhf-libvirt                               pass
 test-amd64-i386-libvirt                                pass
 test-amd64-amd64-migrupgrade                           pass
 test-amd64-i386-migrupgrade                            pass
 test-amd64-amd64-xl-multivcpu                          pass
 test-armhf-armhf-xl-multivcpu                          pass
 test-amd64-amd64-pair                                  pass
 test-amd64-i386-pair                                   pass
 test-amd64-amd64-libvirt-pair                          pass
 test-amd64-i386-libvirt-pair                           pass
 test-amd64-amd64-amd64-pvgrub                          pass
 test-amd64-amd64-i386-pvgrub                           pass
 test-amd64-amd64-pygrub                                pass
 test-amd64-amd64-xl-qcow2                              pass
 test-armhf-armhf-libvirt-raw                           pass
 test-amd64-i386-xl-raw                                 pass
 test-amd64-amd64-xl-rtds                               pass
 test-armhf-armhf-xl-rtds                               pass
 test-amd64-amd64-libvirt-vhd                           pass
 test-armhf-armhf-xl-vhd                                pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-xtf-amd64-amd64-5 broken
broken-job test-amd64-amd64-xl-qemut-win10-i386 broken
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm broken
broken-step test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm host-install(4)
broken-step test-amd64-amd64-xl-qemut-win10-i386 host-install(4)
broken-step test-xtf-amd64-amd64-5 host-install(4)
broken-job build-amd64-pvops broken
broken-job test-amd64-i386-qemut-rhel6hvm-intel broken
broken-job build-amd64-xtf broken

Not pushing.

------------------------------------------------------------
commit 9dc5eda576bafca47abc7202f075f28d6250bf4d
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Wed Dec 20 15:45:32 2017 +0100

    x86/vmx: Don't use hvm_inject_hw_exception() in long_mode_do_msr_write()

    Since c/s 49de10f3c1718 "x86/hvm: Don't raise #GP behind the emulators
    back for MSR accesses", returning X86EMUL_EXCEPTION has pushed the
    exception generation to the top of the call tree.

    Using hvm_inject_hw_exception() and returning X86EMUL_EXCEPTION causes a
    double #GP injection, which combines to #DF.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 896ee3980e72866b602e743396751384de301fb0
    master date: 2017-12-14 18:05:45 +0000
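The failure mode in the change above is easy to model: once the caller of long_mode_do_msr_write() injects #GP itself on seeing X86EMUL_EXCEPTION, any injection left inside the handler becomes a second fault. Below is a small, self-contained sketch of that calling convention; the helper names are made up for illustration and are not the real Xen symbols.

    #include <stdio.h>

    /* Toy model, not Xen code: a #GP that is both injected by the MSR
     * handler and re-injected by the caller on seeing X86EMUL_EXCEPTION
     * amounts to two pending faults, i.e. #DF. */
    enum rc { X86EMUL_OKAY, X86EMUL_EXCEPTION };

    static int pending_gp;

    static void inject_gp(void) { pending_gp++; }

    /* Old shape of the handler: inject *and* report the exception. */
    static enum rc msr_write_old(void) { inject_gp(); return X86EMUL_EXCEPTION; }

    /* Fixed shape: only report it; the caller performs the one injection. */
    static enum rc msr_write_fixed(void) { return X86EMUL_EXCEPTION; }

    static void emulate(enum rc (*handler)(void), const char *tag)
    {
        pending_gp = 0;
        if ( handler() == X86EMUL_EXCEPTION )
            inject_gp();                /* top of the call tree injects #GP */
        printf("%s: %d injection(s)%s\n", tag, pending_gp,
               pending_gp > 1 ? " -> #DF" : "");
    }

    int main(void)
    {
        emulate(msr_write_old, "old");
        emulate(msr_write_fixed, "fixed");
        return 0;
    }
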
commit 135b67e9bd5281084efe9fb1d3604915dac07ce8
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Wed Dec 20 15:44:57 2017 +0100

    xen/efi: Fix build with clang-5.0

    The clang-5.0 build is reliably failing with:

      Error: size of boot.o:.text is 0x01

    which is because efi_arch_flush_dcache_area() exists as a single ret
    instruction.  Mark it as __init like everything else in the files.

    Spotted by Travis.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: c4f6ad4c5fd25cb0ccc0cdbe711db97e097f0407
    master date: 2017-12-14 10:59:26 +0000
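Why an __init annotation cures a section-size error may not be obvious: such markers are essentially section attributes, so the helper (empty on x86) moves out of .text and boot.o's .text no longer consists of a lone ret. A minimal, hypothetical illustration of that mechanism follows; it is not the Xen efi-boot code, and the macro and function names are invented for the sketch.

    /* Minimal illustration (not the Xen source): an __init-style marker is
     * just a section attribute, so the otherwise-empty helper is emitted
     * into .init.text instead of being the only symbol left in .text. */
    #define my_init __attribute__((__section__(".init.text")))

    static void my_init flush_dcache_area(const void *va, unsigned long size)
    {
        (void)va;     /* x86 keeps data caches coherent; nothing to flush */
        (void)size;
    }

    int main(void)
    {
        flush_dcache_area("xen", 3);   /* call it so the toy links cleanly */
        return 0;
    }
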
commit 682a9d8d37f1141b199bc3aadf8d5d276b22baf9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Dec 20 15:44:20 2017 +0100

    gnttab: improve GNTTABOP_cache_flush locking

    Dropping the lock before returning from grant_map_exists() means handing
    possibly stale information back to the caller.  Instead, return the
    pointer to the active entry, so the caller can release the lock once
    done.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    master commit: 553ac37137c2d1c03bf1b69cfb192ffbfe29daa4
    master date: 2017-12-04 11:04:18 +0100

commit 19dcd8e47dfc81b8e9f867ee79c7ff8e15b975fb
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Dec 20 15:43:53 2017 +0100

    gnttab: correct GNTTABOP_cache_flush empty batch handling

    Jann validly points out that with a caller bogusly requesting a
    zero-element batch with non-zero high command bits (the ones used for
    continuation encoding), the assertion right before the call to
    hypercall_create_continuation() would trigger.  A similar situation
    would arise afaict for non-empty batches with op and/or length zero in
    every element.

    While we want the former to succeed (as we do elsewhere for similar
    no-op requests), the latter can clearly be converted to an error, as
    this is a state that can't be the result of a prior operation.

    Take the opportunity and also correct the order of argument checks: we
    shouldn't accept zero-length elements with unknown bits set in "op".
    Also constify cache_flush()'s first parameter.

    Reported-by: Jann Horn <jannh@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andre Przywara <andre.przywara@xxxxxxxxxx>
    Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    master commit: 9c22e4d67f5552c7c896ed83bd95d5d4c5837a9d
    master date: 2017-12-04 11:03:32 +0100
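The corrected check ordering for GNTTABOP_cache_flush is easier to see in isolation. Below is a standalone toy model of the rules described in the message above, not the actual cache_flush() implementation; the two flag names are modelled on the public grant-table interface.

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the check ordering described above, not the Xen code. */
    #define GNTTAB_CACHE_CLEAN     (1u << 0)
    #define GNTTAB_CACHE_INVAL     (1u << 1)
    #define GNTTAB_CACHE_VALID_OPS (GNTTAB_CACHE_CLEAN | GNTTAB_CACHE_INVAL)

    struct flush_op { uint32_t op; uint16_t length; };

    static int check_one(const struct flush_op *f)
    {
        /* Unknown op bits are rejected first, so a zero-length element
         * cannot smuggle invalid bits past the checks. */
        if ( f->op & ~GNTTAB_CACHE_VALID_OPS )
            return -EINVAL;

        /* An op and/or length of zero cannot result from a prior operation,
         * so treat such an element as an error rather than a no-op. */
        if ( f->op == 0 || f->length == 0 )
            return -EILSEQ;

        return 0;
    }

    static int cache_flush_batch(const struct flush_op *batch, unsigned int count)
    {
        /* A zero-element batch simply succeeds, like other no-op requests,
         * so continuation-encoded high bits in the count cannot trip an
         * assertion further down. */
        for ( unsigned int i = 0; i < count; i++ )
        {
            int rc = check_one(&batch[i]);
            if ( rc )
                return rc;
        }
        return 0;
    }

    int main(void)
    {
        const struct flush_op noop = { .op = 0, .length = 64 };
        printf("empty batch -> %d\n", cache_flush_batch(NULL, 0));
        printf("no-op entry -> %d\n", cache_flush_batch(&noop, 1));
        return 0;
    }
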
commit e5364c32c650fef60b91b9be9b10f38055ffc2cf
Author: Tom Lendacky <thomas.lendacky@xxxxxxx>
Date:   Wed Dec 20 15:43:14 2017 +0100

    x86/microcode: Add support for fam17h microcode loading

    The size for the Microcode Patch Block (MPB) for an AMD family 17h
    processor is 3200 bytes.  Add a #define for fam17h so that it does not
    default to 2048 bytes and fail a microcode load/update.

    Signed-off-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
    Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Reviewed-by: Borislav Petkov <bp@xxxxxxxxx>
    Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
    [Linux commit f4e9b7af0cd58dd039a0fb2cd67d57cea4889abf]
    Ported to Xen.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 61d458ba8c171809e8dd9abd19339c87f3f934ca
    master date: 2017-12-13 14:30:10 +0000
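The change itself amounts to one more entry in the patch-size selection. A simplified stand-in for that logic is shown below; the 2048-byte default and the 3200-byte family-17h figure come from the message above and the referenced Linux commit, while the macro and function names are illustrative only.

    #include <stdio.h>

    /* Illustrative only: the default and family-17h Microcode Patch Block
     * sizes quoted in the commit message above. */
    #define MPB_MAX_SIZE_DEFAULT  2048
    #define MPB_MAX_SIZE_FAM17H   3200

    static unsigned int max_patch_size(unsigned int family)
    {
        switch ( family )
        {
        case 0x17:
            return MPB_MAX_SIZE_FAM17H;
        default:
            /* Without the 0x17 case, a 3200-byte fam17h patch would exceed
             * this default and the microcode load would be refused. */
            return MPB_MAX_SIZE_DEFAULT;
        }
    }

    int main(void)
    {
        printf("family 0x17 MPB limit: %u bytes\n", max_patch_size(0x17));
        printf("family 0x10 MPB limit: %u bytes\n", max_patch_size(0x10));
        return 0;
    }
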
commit e2dc7b584f4c7ab6ad7ab543e5cf7ee2e6d1d569
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Dec 20 15:42:42 2017 +0100

    x86/mm: drop bogus paging mode assertion

    Olaf has observed this assertion to trigger after an aborted migration
    of a PV guest:

    (XEN) Xen call trace:
    (XEN)    [<ffff82d0802a85dc>] do_page_fault+0x39f/0x55c
    (XEN)    [<ffff82d08036b7d8>] x86_64/entry.S#handle_exception_saved+0x66/0xa4
    (XEN)    [<ffff82d0802a9274>] __copy_to_user_ll+0x22/0x30
    (XEN)    [<ffff82d0802772d4>] update_runstate_area+0x19c/0x228
    (XEN)    [<ffff82d080277371>] domain.c#_update_runstate_area+0x11/0x39
    (XEN)    [<ffff82d080277596>] context_switch+0x1fd/0xf25
    (XEN)    [<ffff82d0802395c5>] schedule.c#schedule+0x303/0x6a8
    (XEN)    [<ffff82d08023d067>] softirq.c#__do_softirq+0x6c/0x95
    (XEN)    [<ffff82d08023d0da>] do_softirq+0x13/0x15
    (XEN)    [<ffff82d08036b2f1>] x86_64/entry.S#process_softirqs+0x21/0x30

    Release builds work fine, which is a first indication that the assertion
    isn't really needed.

    What's worse though - there appears to be a timing window where the
    guest runs in shadow mode, but not in log-dirty mode, and that is what
    triggers the assertion (the same could, afaict, be achieved by
    test-enabling shadow mode on a PV guest).  This is because turning off
    log-dirty mode is being performed in two steps: first the log-dirty bit
    gets cleared (paging_log_dirty_disable() [having paused the domain] ->
    sh_disable_log_dirty() -> shadow_one_bit_disable()), followed by
    unpausing the domain and only then clearing shadow mode (via
    shadow_test_disable(), which pauses the domain a second time).

    Hence besides removing the ASSERT() here (or optionally replacing it by
    explicit translate and refcounts mode checks, but this seems rather
    pointless now that the three are tied together) I wonder whether either
    shadow_one_bit_disable() should turn off shadow mode if no other bit
    besides PG_SH_enable remains set (just like shadow_one_bit_enable()
    enables it if not already set), or the domain pausing scope should be
    extended so that both steps occur without the domain getting a chance
    to run in between.

    Reported-by: Olaf Hering <olaf@xxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: b95f7be32d668fa4b09300892ebe19636ecebe36
    master date: 2017-12-12 16:56:15 +0100

commit c8f4f45e04dd782ac5dfdf58866339ac97186324
Author: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
Date:   Wed Dec 20 15:42:13 2017 +0100

    x86/mb2: avoid Xen image when looking for module/crashkernel position

    Commit e22e1c4 (x86/EFI: avoid Xen image when looking for module/kexec
    position) added the relevant check for the EFI case.  However, since
    commit f75a304 (x86: add multiboot2 protocol support for relocatable
    images) Multiboot2-compatible bootloaders are able to relocate the Xen
    image too.  So we have to avoid the Xen image region in such cases as
    well.

    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 9589927e5bf9e123ec42b6e0b0809f153bd92732
    master date: 2017-12-12 14:30:53 +0100

commit 4150501b717e7fde77c9ab4e96dd9916d7345b55
Author: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
Date:   Wed Dec 20 15:41:33 2017 +0100

    x86/vvmx: don't enable vmcs shadowing for nested guests

    Running "./xtf_runner vvmx" in L1 Xen under L0 Xen produces the
    following result on H/W with VMCS shadowing:

        Test: vmxon
        Failure in test_vmxon_in_root_cpl0()
          Expected 0x8200000f: VMfailValid(15) VMXON_IN_ROOT
               Got 0x82004400: VMfailValid(17408) <unknown>
        Test result: FAILURE

    This happens because the SDM allows VM entries with the VMCS-shadowing
    VM-execution control enabled and a VMCS link pointer value of ~0ull,
    but the results of a nested VMREAD are undefined in such cases.

    Fix this by not copying the value of the VMCS-shadowing control from
    vmcs01 into vmcs02.

    Signed-off-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
    Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    master commit: 19fdb8e258619aea265af9c183e035e545cbc2d2
    master date: 2017-12-01 19:03:27 +0000
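The VMCS-shadowing fix boils down to masking a single bit when vmcs02's controls are derived from vmcs01. A hedged sketch of that idea follows; the bit value is the architectural "VMCS shadowing" secondary execution control (bit 14), while the function name and surrounding structure are illustrative rather than the actual vvmx.c code.

    #include <stdio.h>

    /* Bit 14 of the secondary processor-based VM-execution controls:
     * "VMCS shadowing" (Intel SDM).  Illustrative sketch, not vvmx.c. */
    #define SECONDARY_EXEC_ENABLE_VMCS_SHADOWING  0x00004000u

    /* Derive the nested guest's (vmcs02) secondary controls from the
     * vmcs01 value, but never let the nested guest run with shadowing
     * enabled while its VMCS link pointer is left at ~0ull. */
    static unsigned int vmcs02_secondary_exec(unsigned int vmcs01_ctls)
    {
        return vmcs01_ctls & ~SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
    }

    int main(void)
    {
        unsigned int vmcs01 = 0x00004000u | 0x00000002u;   /* example bits */
        printf("vmcs01 controls: %#x\n", vmcs01);
        printf("vmcs02 controls: %#x\n", vmcs02_secondary_exec(vmcs01));
        return 0;
    }
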
commit ab7be6ce4ac8cc3f32952d8c9c260412e780e939
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Wed Dec 20 15:40:58 2017 +0100

    xen/pv: Construct d0v0's GDT properly

    c/s cf6d39f8199 "x86/PV: properly populate descriptor tables" changed
    the GDT to reference zero_page for intermediate frames between the
    guest and Xen frames.

    Because dom0_construct_pv() doesn't call arch_set_info_guest(), some
    bits of initialisation are missed, including the pv_destroy_gdt() which
    initially fills the references to zero_page.

    In practice, this means there is a window between starting and the
    first call to HYPERCALL_set_gdt() where lar/lsl/verr/verw suffer
    non-architectural behaviour.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 08f27f4468eedbeccaac9fdda4ef732247efd74e
    master date: 2017-12-01 19:03:26 +0000

commit f3fb6673d89858fa522037cc9b9475c188214998
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Wed Dec 20 15:39:44 2017 +0100

    update Xen version to 4.10.1-pre

(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel