
Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session



On Mon, Jul 17, 2017 at 11:10:50AM +0100, Andrew Cooper wrote:
> On 17/07/17 10:36, Roger Pau Monné wrote:
> > Hello,
> >
> > I didn't actually take notes, so this is off the top of my head. If
> > anyone took notes or remembers something different, please feel free
> > to correct it.
> >
> > This is the output from the PVH toolstack interface session. The
> > participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
> > Legout and myself.
> >
> > We agreed on the following interface for xl configuration files:
> >
> >     type = "hvm | pv | pvh"
> >
> > This is going to supersede the "builder" option present in xl. The
> > two options are mutually exclusive. The "builder" option is going to
> > be marked as deprecated once the new "type" option is implemented.
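> >
> > For example, a minimal PVH guest would be declared as follows (the
> > name and memory values are purely illustrative, not part of the
> > agreed interface):
> >
> >     type = "pvh"
> >     name = "pvh-guest"
> >     memory = 512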
> >
> > In order to decide how to boot the guest, the following options will
> > be available. Note that they are mutually exclusive.
> 
> I presume you mean the kernel/ramdisk/cmdline are mutually exclusive
> with firmware?

Yes, sorry, that's confusing. You use either kernel, firmware or
bootloader.

> >     kernel = "<path>"
> >     ramdisk = "<path>"
> >     cmdline = "<string>"
> >
> > <path>: relative or full path in the filesystem.
> 
> Please can xl's or libxl's (not entirely sure which) path handling be
> fixed as part of this work.  As noted in
> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
> inconsistent as to whether it allows paths relative to the .cfg file.
> All paths should support being relative to the .cfg file, as that is
> the most convenient for the end user.
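>
> For example (an illustrative layout), with /etc/xen/guest.cfg
> containing:
>
>     kernel = "vmlinuz"
>
> the kernel should be looked up as /etc/xen/vmlinuz rather than
> relative to xl's current working directory.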
> 
> > Boot directly into the kernel/ramdisk provided. In this case the
> > kernel must be available somewhere in the toolstack filesystem
> > hierarchy.
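> >
> > A direct-boot configuration would look something like this (the
> > paths and command line are illustrative):
> >
> >     type = "pvh"
> >     kernel = "/boot/vmlinuz"
> >     ramdisk = "/boot/initrd.img"
> >     cmdline = "root=/dev/xvda1 console=hvc0"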
> >
> >     firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> 
> What is the purpose of having uefi and bios in there?  ovmf is the uefi
> implementation, and {rom,sea}bios are the bios implementations.
> 
> How does someone specify ovmf + seabios as a CSM?

Hm, I have no idea. How is this usually done: is ovmf built with
seabios support, or is it fetched by ovmf from the uefi partition?

> > This allows loading a firmware inside the guest and running it in
> > guest mode. Note that the firmware needs to support booting in PVH
> > mode.
> >
> > There's no plan to support any bios or pvgrub for PVH ATM; those
> > options are simply listed for completeness. Also, generic options
> > like uefi or bios would be aliases for a concrete implementation
> > chosen by the toolstack, i.e.: most likely uefi -> ovmf and
> > bios -> seabios.
> 
> Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
> let people choose options which are liable to change under their feet if
> they were to boot the same .cfg file on a newer version of Xen, as their
> VM will inevitably break.

Noted. I think not allowing bios or uefi is fine; I would rather
document in the man page that our recommended bios implementation is
seabios, and the recommended uefi one is ovmf.
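
For example, the man page would then steer users towards naming a
concrete implementation directly (an illustrative snippet):

    type = "pvh"
    firmware = "ovmf"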

> >     bootloader = "pygrub"
> >
> > Run a specific binary in the toolstack domain that's going to provide
> > a kernel, ramdisk and cmdline as output. This is mostly pygrub, which
> > accesses the guest disk image and extracts the kernel/ramdisk/cmdline
> > from it.
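> >
> > An illustrative configuration (the disk path is made up; the disk
> > spec is the usual positional target,format,vdev,access form):
> >
> >     type = "pvh"
> >     bootloader = "pygrub"
> >     disk = [ "/var/lib/xen/images/guest.img,raw,xvda,rw" ]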
> >
> > We also spoke about the libxl interface. This is going to require
> > changes to libxl_domain_build_info, which obviously need to be
> > performed in an API-compatible way.
> >
> > A new libxl_domain_type needs to be added (PVH) and the new "type"
> > config option is going to map to the "type" field in the
> > libxl_domain_create_info struct.
> >
> > While looking at the contents of libxl_domain_build_info we
> > realized that there is a bunch of duplication between the
> > domain-specific fields and the top level ones. I.e.: there's a top
> > level "kernel" field and one inside of the nested pv structure. It
> > would be interesting to avoid adding a new pvh structure, and
> > instead move all the fields to the top level structure
> > (libxl_domain_build_info).
> >
> > I think that's all of it; as said in the beginning, if anything is
> > missing feel free to add it.
> >
> > Regarding the implementation work itself, I'm currently quite busy
> > with other PVH stuff, so I would really appreciate it if someone
> > could take care of this.
> >
> > I think this should be merged in 4.10, so that the toolstack finally
> > has a stable interface to create PVH guests and we can start
> > announcing this. Without this work, even if the PVH DomU ABI is
> > stable, there's no way anyone is going to use it.
> 
> Some other questions.
> 
> Where does hvmloader fit into this mix?

Right, I wasn't planning on anyone using hvmloader, but there's no
reason to prevent it. I guess it would fit into the "firmware"
option, but then you should be able to use something like:
firmware = "hvmloader + ovmf".

What would be the purpose of using hvmloader inside a PVH guest?
Hardware initialization?

> How does firmware_override= work in this new world?

firmware_override is not documented in xl.cfg(5), and I'm not sure we
should support it for PVH. AFAICT the new firmware option should
supersede firmware_override for PVH.

> How about firmware=
> taking a <path> to allow for easy testing of custom binaries?

Yes, this is my mistake: we agreed that firmware should also accept a
path to a binary.
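
So, for testing a custom build, something like the following should
work (the path is of course hypothetical):

    firmware = "/home/user/src/ovmf/OVMF.fd"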

> Instead of kernel= and ramdisk=, it would be better to generalise to
> something like modules=[...], perhaps with kernel being an alias for
> module[0] etc.  hvmloader already takes multiple binaries using the PVH
> module system, and PV guests are perfectly capable of multiple modules
> as well.  One specific example where an extra module would be very
> helpful is for providing the cloudinit install config file.

I might prefer to keep the current kernel = "..." and convert ramdisk
into a list named modules. Do you think (this also applies to xl/libxl
maintainers) we could simply not support the ramdisk option for PVH?

IMHO that might cause some headaches for people converting from
classic PV to PVH. In that case (if we have to support ramdisk
anyway) I wouldn't make the introduction of the modules option
mandatory for this work. I'm trying to limit this to something
sensible that can hopefully be merged into 4.10.
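
To make the proposal concrete, the converted syntax would look
roughly like this (the modules option name and the paths are
tentative, per the above):

    kernel = "/boot/vmlinuz"
    modules = [ "/boot/initrd.img", "/etc/xen/cloudinit.cfg" ]
    cmdline = "root=/dev/xvda1"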

Thanks, Roger.
