Re: [PATCH] firmware: don't build hvmloader if it is not needed
On Wed, 17 Feb 2021, Jan Beulich wrote:
> On 16.02.2021 19:31, Stefano Stabellini wrote:
> > On Mon, 15 Feb 2021, Jan Beulich wrote:
> >> On 13.02.2021 14:50, Marek Marczykowski-Górecki wrote:
> >>> On Fri, Feb 12, 2021 at 06:05:40PM -0800, Stefano Stabellini wrote:
> >>>> If rombios, seabios and ovmf are all disabled, don't attempt to build
> >>>> hvmloader.
> >>>
> >>> What if you choose to not build any of rombios, seabios, ovmf, but use
> >>> system one instead? Wouldn't that exclude hvmloader too?
> >>
> >> Even further - one can disable all firmware and have every guest
> >> config explicitly specify the firmware to use, afaict.
> >
> > I didn't realize there was a valid reason for wanting to build hvmloader
> > without rombios, seabios, and ovmf.
> >
> >
> >>> This heuristic seems like a bit too much, maybe instead add an explicit
> >>> option to skip hvmloader?
> >>
> >> +1 (If making this configurable is needed at all - is having
> >> hvmloader without needing it really a problem?)
> >
> > The hvmloader build fails on Alpine Linux x86:
> >
> > https://gitlab.com/xen-project/xen/-/jobs/1033722465
> >
> > I admit I was just trying to find the fastest way to make those Alpine
> > Linux builds succeed to unblock patchew: although the Alpine Linux
> > builds are marked as "allow_failure: true" in gitlab-ci, patchew will
> > still report the whole battery of tests as "failure". As a consequence
> > the notification emails from patchew after a build of a contributed
> > patch series always say "job failed" today, making it kind of useless.
> > See attached.
> >
> > I would love it if somebody else took over this fix as I am doing this
> > after hours, but if you have a simple suggestion on how to fix the
> > Alpine Linux hvmloader builds, or skip the build when appropriate, I can
> > try to follow up.
>
> There is an issue with the definition of uint64_t there. Initial
> errors like
>
> hvmloader.c: In function 'init_vm86_tss':
> hvmloader.c:202:39: error: left shift count >= width of type
> [-Werror=shift-count-overflow]
>   202 |   ((uint64_t)TSS_SIZE << 32) | virt_to_phys(tss));
>
> already hint at this, but then
>
> util.c: In function 'get_cpu_mhz':
> util.c:824:15: error: conversion from 'long long unsigned int' to 'uint64_t'
> {aka 'long unsigned int'} changes value from '4294967296000000' to '0'
> [-Werror=overflow]
>   824 |   cpu_khz = 1000000ull << 32;
>
> is quite explicit: "aka 'long unsigned int'"? This is a 32-bit
> environment, after all. I suspect the build picks up headers
> (stdint.h here in particular) intended for 64-bit builds only.
> Can you check whether "gcc -m32" properly sets include paths
> _different_ from those plain "gcc" sets, if the headers found in
> the latter case aren't suitable for the former? Or alternatively
> is the Alpine build environment set up incorrectly, in that it
> lacks 32-bit devel packages?
>
> As an aside I don't think it's really a good idea to have
> hvmloader depend on any external headers. Just like the
> hypervisor it's a free-standing binary, and hence ought to be
> free of any dependencies on the build/host environment.
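To illustrate the type-width problem Jan points at, here is a minimal
standalone sketch, assuming a "gcc -m32" build that picks up a stdint.h meant
for 64-bit targets; the typedefs and function names below are illustrative
only, not the actual hvmloader code:

  /* With a 64-bit host's typedef, uint64_t becomes 'unsigned long', which is
   * only 32 bits wide under -m32; shifting it by 32 triggers
   * -Wshift-count-overflow, which -Werror turns into the hvmloader.c:202
   * error quoted above. */
  typedef unsigned long bad_uint64_t;        /* 32 bits with -m32 (assumption) */
  typedef unsigned long long good_uint64_t;  /* 64 bits on any x86 target */

  bad_uint64_t broken(unsigned int size, unsigned int phys)
  {
      return ((bad_uint64_t)size << 32) | phys;  /* shift count >= type width */
  }

  good_uint64_t fixed(unsigned int size, unsigned int phys)
  {
      return ((good_uint64_t)size << 32) | phys; /* well defined */
  }

Compiling this with "gcc -m32 -c -Werror sketch.c" reproduces the same class
of error for the first function, while the variant using a genuinely 64-bit
type compiles cleanly.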
All the automation containers are available for anybody to use, so FYI you
can repro the issue by running this inside your Xen repo:

docker run -it -v `pwd`:/build registry.gitlab.com/xen-project/xen/alpine:3.12 CC=gcc bash automation/scripts/build

I did just that and ran a simple test:

~ # gcc -m32 test.c -o test
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../libssp_nonshared.a when searching for -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/libssp_nonshared.a when searching for -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../libgcc_s.so.1 when searching for libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/libgcc_s.so.1 when searching for libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find libgcc_s.so.1
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/libgcc.a when searching for -lgcc
/usr/lib/gcc/x86_64-alpine-linux-musl/9.3.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lgcc
collect2: error: ld returned 1 exit status

~ # cat test.c
#include <stdio.h>

int main() {
    printf("DEBUG\n");
}

Given this, I take it there is no 32-bit build env? A bit of Googling tells
me that gcc on Alpine Linux is compiled without multilib support.

That said, I was looking at the Alpine Linux APKBUILD script:

https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/APKBUILD

And I noticed this patch that looks suspicious:

https://gitlab.alpinelinux.org/alpine/aports/-/blob/master/main/xen/musl-hvmloader-fix-stdint.patch
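As for Jan's aside about keeping hvmloader free of external headers: one way
to do that, hypothetically, would be for hvmloader to carry its own
fixed-width type definitions suited to its 32-bit freestanding build instead
of relying on the host's stdint.h. A rough sketch, purely as an assumption
(the header name and contents below are neither the actual Xen code nor the
Alpine patch linked above):

  /* hvmloader_stdint.h - hypothetical self-contained fixed-width types for a
   * 32-bit (ILP32) freestanding build; not the real Xen or Alpine patch. */
  #ifndef HVMLOADER_STDINT_H
  #define HVMLOADER_STDINT_H

  typedef unsigned char       uint8_t;
  typedef unsigned short      uint16_t;
  typedef unsigned int        uint32_t;
  typedef unsigned long long  uint64_t;   /* 64 bits even on ILP32 */

  typedef signed char         int8_t;
  typedef short               int16_t;
  typedef int                 int32_t;
  typedef long long           int64_t;

  #endif /* HVMLOADER_STDINT_H */

With something along these lines in place, the build would not depend on
whether the container provides 32-bit development headers at all.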