
Re: [Xen-devel] [RFC PATCH v4 4/7] xen/pvh: Move Xen specific PVH VM initialization out of common code



On 02/28/2018 01:28 PM, Maran Wilson wrote:
> We need to refactor PVH entry code so that support for other hypervisors
> like Qemu/KVM can be added more easily.
>
> This patch moves the small block of code used for initializing Xen PVH
> virtual machines into the Xen specific file. This initialization is not
> going to be needed for Qemu/KVM guests. Moving it out of the common file
> is going to allow us to compile kernels in the future without CONFIG_XEN
> that are still capable of being booted as a Qemu/KVM guest via the PVH
> entry point.
>
> Signed-off-by: Maran Wilson <maran.wilson@xxxxxxxxxx>
> ---
>  arch/x86/pvh.c               | 28 ++++++++++++++++++++--------
>  arch/x86/xen/enlighten_pvh.c | 18 +++++++++++++++++-
>  2 files changed, 37 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/pvh.c b/arch/x86/pvh.c
> index b56cb5e7d6ac..2d7a7f4958cb 100644
> --- a/arch/x86/pvh.c
> +++ b/arch/x86/pvh.c
> @@ -72,26 +72,38 @@ static void __init init_pvh_bootparams(void)
>       pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
>  }
>  
> +/*
> + * If we are trying to boot a Xen PVH guest, it is expected that the kernel
> + * will have been configured to provide the required override for this routine.
> + */
> +void __init __weak xen_pvh_init(void)
> +{
> +     xen_raw_printk("Error: Missing xen PVH initialization\n");

I think this should be printk() (or, more precisely, this should not be
xen_raw_printk()): we only get here because we are *not* running as a Xen
guest, so the Xen-specific printk will not work. (The same is true for the
next patch, where the weak mem_map_via_hcall() is added.)
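
Roughly, I would expect something along these lines (an untested sketch to
illustrate the point, not the actual patch):

void __init __weak xen_pvh_init(void)
{
	/*
	 * Plain printk() works regardless of which hypervisor (if any) we
	 * are running on; xen_raw_printk() relies on the Xen console
	 * hypercall, which is unavailable here.
	 */
	printk(KERN_ERR "Error: Missing xen PVH initialization\n");
	BUG();
}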

-boris


> +     BUG();
> +}
>

