
Re: [Xen-devel] [PATCH 4/5] SUPPORT.md: Move descriptions up before Status info




On 12/04/2018, 19:27, "Ian Jackson" <ian.jackson@xxxxxxxxxxxxx> wrote:

    This turns all the things which were treated as caveats, but which
    don't need to be footnoted in the matrix, into descriptions.
    
    For the benefit of the support matrix generator, this patch (or a
    version of it) should be backported to 4.10.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>

Ian: I manually checked that the 4.11 table matches SUPPORT.md with this patch
applied, against the output at
https://xenbits.xen.org/people/iwj/2018/support-matrix-example-B-v1/t.html

There were a couple of minor text changes for grammar reasons, which I noticed 
and highlighted.

I also checked the code motions. There are some things which need to be pointed 
out, but they should not prevent this series from being checked in.

However, one code motion was missed:
* ### PV Console (frontend) => the note (which is a definition) was not moved

There is also a rendering issue; see the note further down.

Reviewed-by: Lars Kurth <lars.kurth@xxxxxxxxxx>

I also spotted a few other inconsistencies, which we should probably fix, but
these would need backporting:
* ARM: 16K and 64K page granularity in guests
* ARM: Guest Device Tree support
* ARM: Guest ACPI support
All the other section headers use an x86/ or ARM/ prefix.
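Renamed to follow that convention, these would presumably read something like:

```
### ARM/16K and 64K page granularity in guests
### ARM/Guest Device Tree support
### ARM/Guest ACPI support
```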
    ---
     SUPPORT.md | 213 ++++++++++++++++++++++++++++++++-----------------------------
     1 file changed, 111 insertions(+), 102 deletions(-)
    
    diff --git a/SUPPORT.md b/SUPPORT.md
    index 264b23f..5ae84cf 100644
    --- a/SUPPORT.md
    +++ b/SUPPORT.md
    @@ -58,32 +58,29 @@ for the definitions of the support status levels etc.

[snip]
     
     ## Guest Type
     
     ### x86/PV
     
    -    Status: Supported
    -
     Traditional Xen PV guest
     
     No hardware requirements
     
    -### x86/HVM
    +    Status: Supported
     
    -    Status, domU: Supported

    +### x86/HVM
     
     Fully virtualised guest using hardware virtualisation extensions
     
     Requires hardware virtualisation support (Intel VMX / AMD SVM)
     
    -### x86/PVH
    -
         Status, domU: Supported
    -    Status, dom0: Experimental
    +
    +### x86/PVH
     
     PVH is a next-generation paravirtualized mode
     designed to take advantage of hardware virtualization support when possible.

From a pure refactoring perspective this looks correct, but it generates some
odd behaviour in
https://xenbits.xen.org/people/iwj/2018/support-matrix-example-B-v1/t.html

The underlying reason is that there were some heading renames between 4.10 and
4.11, e.g.
ARM guest => ARM

There were also some support statement changes, e.g. in x86/HVM:
Status: Supported => Status, domU: Supported

We probably need to go through some of these in 4.10 and fix them, but for
4.11 this is correct.

I have attached a small screenshot to illustrate this.

The implication is that we need to minimize unnecessary changes to
a) headings
b) the status qualifiers before the colon
or else backport such changes to older versions of SUPPORT.md; otherwise the
generated table will become confusing.
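To illustrate the failure mode: a generator of this kind presumably keys each
table row on the section heading plus the qualifier before the colon, so
renaming either one between releases produces two unrelated rows. A minimal,
hypothetical sketch of such keying (not Ian's actual tool):

```python
import re

def parse_support_md(text):
    """Map (heading, status qualifier) -> status for one SUPPORT.md version."""
    rows = {}
    heading = None
    for line in text.splitlines():
        m = re.match(r'^### (.+)$', line)
        if m:
            heading = m.group(1)
            continue
        # Status lines are indented, e.g. "    Status, domU: Supported"
        m = re.match(r'^\s+Status(?:, (.+?))?: (.+)$', line)
        if m and heading:
            rows[(heading, m.group(1) or '')] = m.group(2)
    return rows

# The 4.10 -> 4.11 change "Status: Supported" => "Status, domU: Supported"
# yields disjoint keys, so the same feature shows up as two separate rows.
v410 = parse_support_md("### x86/HVM\n\n    Status: Supported\n")
v411 = parse_support_md("### x86/HVM\n\n    Status, domU: Supported\n")
```

Under this keying, the two versions share no row key at all, even though the
feature is the same.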

    @@ -93,12 +90,15 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
     
     Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
     
    -### ARM
    +    Status, domU: Supported
    +    Status, dom0: Experimental
     
    -    Status: Supported
    +### ARM
     
     ARM only has one guest type at the moment
     
    +    Status: Supported
    +
     ## Toolstack
     
     ### xl
    @@ -107,12 +107,12 @@ ARM only has one guest type at the moment
     
     ### Direct-boot kernel image format
     
    +Format which the toolstack accepts for direct-boot kernels
    +
         Supported, x86: bzImage, ELF
         Supported, ARM32: zImage
         Supported, ARM64: Image
     
    -Format which the toolstack accepts for direct-boot kernels
    -

Note: the format here is wrong in both 4.10 and 4.11; it should be something
like

         Status, zImage (ARM32): Supported

Lars will submit a separate patch for this.
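Spelled out for the whole section, the corrected block might read as follows
(a sketch only; the exact wording is for that follow-up patch to decide):

```
### Direct-boot kernel image format

Format which the toolstack accepts for direct-boot kernels

    Status, bzImage (x86): Supported
    Status, ELF (x86): Supported
    Status, zImage (ARM32): Supported
    Status, Image (ARM64): Supported
```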

     ### Dom0 init support for xl
     
         Status, SysV: Supported
    @@ -121,10 +121,10 @@ Format which the toolstack accepts for direct-boot kernels
     
     ### JSON output support for xl
     
    -    Status: Experimental
    -
     Output of information in machine-parseable JSON format
     
    +    Status: Experimental
    +
     ### Open vSwitch integration for xl
     
         Status, Linux: Supported
    @@ -157,17 +157,18 @@ Output of information in machine-parseable JSON format
     
     ### Hypervisor 'debug keys'
     
    -    Status: Supported, not security supported
    -
     These are functions triggered either from the host serial console,
     or via the xl 'debug-keys' command,
     which cause Xen to dump various hypervisor state to the console.
     
    +    Status: Supported, not security supported
    +
     ### Hypervisor synchronous console output (sync_console)
     
    +Xen command-line flag to force synchronous console output.
    +
         Status: Supported, not security supported
     
    -Xen command-line flag to force synchronous console output.
     Useful for debugging, but not suitable for production environments
     due to incurred overhead.
     
    @@ -179,56 +180,54 @@ Debugger to debug ELF guests
     
     ### Soft-reset for PV guests
     
    -    Status: Supported
    -
     Soft-reset allows a new kernel to start 'from scratch' with a fresh VM state,
     but with all the memory from the previous state of the VM intact.
     This is primarily designed to allow "crash kernels",
     which can do core dumps of memory to help with debugging in the event of a crash.
     
    -### xentrace
    +    Status: Supported
     
    -    Status, x86: Supported
    +### xentrace
     
     Tool to capture Xen trace buffer data
     
    -### gcov
    +    Status, x86: Supported
     
    -    Status: Supported, Not security supported
    +### gcov
     
     Export hypervisor coverage data suitable for analysis by gcov or lcov.
     
    +    Status: Supported, Not security supported
    +
     ## Memory Management
     
     ### Dynamic memory control
     
    -    Status: Supported
    -
     Allows a guest to add or remove memory after boot-time.
     This is typically done by a guest kernel agent known as a "balloon driver".
     
    -### Populate-on-demand memory
    +    Status: Supported
     
    -    Status, x86 HVM: Supported
    +### Populate-on-demand memory
     
     This is a mechanism that allows normal operating systems with only a balloon driver
     to boot with memory < maxmem.
     
    -### Memory Sharing
    +    Status, x86 HVM: Supported
     
    -    Status, x86 HVM: Expermental
    +### Memory Sharing
     
     Allow sharing of identical pages between guests
     
    -### Memory Paging
    +    Status, x86 HVM: Expermental
     
    -    Status, x86 HVM: Experimenal
    +### Memory Paging
     
     Allow pages belonging to guests to be paged to disk
     
    -### Transcendent Memory
    +    Status, x86 HVM: Experimenal
     
    -    Status: Experimental
    +### Transcendent Memory
     
     Transcendent Memory (tmem) allows the creation of hypervisor memory pools
     which guests can use to store memory
    @@ -236,96 +235,100 @@ rather than caching in its own memory or swapping to disk.
     Having these in the hypervisor
     can allow more efficient aggregate use of memory across VMs.
     
    -### Alternative p2m
    +    Status: Experimental
     
    -    Status, x86 HVM: Tech Preview
    -    Status, ARM: Tech Preview
    +### Alternative p2m
     
     Allows external monitoring of hypervisor memory
     by maintaining multiple physical to machine (p2m) memory mappings.
     
    +    Status, x86 HVM: Tech Preview
    +    Status, ARM: Tech Preview
    +
     ## Resource Management
     
     ### CPU Pools
     
    -    Status: Supported
    -
     Groups physical cpus into distinct groups called "cpupools",
     with each pool having the capability
     of using different schedulers and scheduling properties.
     
    -### Credit Scheduler
    -
         Status: Supported
     
    +### Credit Scheduler
    +
     A weighted proportional fair share virtual CPU scheduler.
     This is the default scheduler.
     
    -### Credit2 Scheduler
    -
         Status: Supported
     
    +### Credit2 Scheduler
    +
     A general purpose scheduler for Xen,
     designed with particular focus on fairness, responsiveness, and scalability
     
    -### RTDS based Scheduler
    +    Status: Supported
     
    -    Status: Experimental
    +### RTDS based Scheduler
     
     A soft real-time CPU scheduler
     built to provide guaranteed CPU capacity to guest VMs on SMP hosts
     
    +    Status: Experimental
    +
     ### ARINC653 Scheduler
     
    +A periodically repeating fixed timeslice scheduler.
    +
         Status: Supported
     
    -A periodically repeating fixed timeslice scheduler.
     Currently only single-vcpu domains are supported.
     
     ### Null Scheduler
     
    -    Status: Experimental
    -
     A very simple, very static scheduling policy
     that always schedules the same vCPU(s) on the same pCPU(s).
     It is designed for maximum determinism and minimum overhead
     on embedded platforms.
     
    -### NUMA scheduler affinity
    +    Status: Experimental
     
    -    Status, x86: Supported
    +### NUMA scheduler affinity
     
     Enables NUMA aware scheduling in Xen
     
    +    Status, x86: Supported
    +
     ## Scalability
     
     ### Super page support
     
    -    Status, x86 HVM/PVH, HAP: Supported
    -    Status, x86 HVM/PVH, Shadow, 2MiB: Supported
    -    Status, ARM: Supported
    -
     NB that this refers to the ability of guests

The beginning of this sentence should probably be changed to
"This feature refers to the ability of guests ..."

     to have higher-level page table entries point directly to memory,
     improving TLB performance.

     On ARM, and on x86 in HAP mode,
     the guest has whatever support is enabled by the hardware.
    +
    +This feature is independent
    +of the ARM "page granularity" feature (see below).
    +
    +    Status, x86 HVM/PVH, HAP: Supported
    +    Status, x86 HVM/PVH, Shadow, 2MiB: Supported
    +    Status, ARM: Supported
    +
     On x86 in shadow mode, only 2MiB (L2) superpages are available;
     furthermore, they do not have the performance characteristics
     of hardware superpages.
     
    -Also note is feature independent
    -of the ARM "page granularity" feature (see below).
    -
     ### x86/PVHVM
     
    -    Status: Supported
    -
     This is a useful label for a set of hypervisor features
     which add paravirtualized functionality to HVM guests
     for improved performance and scalability.
     This includes exposing event channels to HVM guests.
     
    +    Status: Supported
    +
     ## High Availability and Fault Tolerance
     
     ### Remus Fault Tolerance
    @@ -338,38 +341,38 @@ This includes exposing event channels to HVM guests.
     
     ### x86/vMCE
     
    -    Status: Supported
    -
     Forward Machine Check Exceptions to appropriate guests
     
    +    Status: Supported
    +
     ## Virtual driver support, guest side
     
     ### Blkfront
     
    +Guest-side driver capable of speaking the Xen PV block protocol
    +
         Status, Linux: Supported
         Status, FreeBSD: Supported, Security support external
         Status, NetBSD: Supported, Security support external
         Status, OpenBSD: Supported, Security support external
         Status, Windows: Supported
     
    -Guest-side driver capable of speaking the Xen PV block protocol
    -
     ### Netfront
     
    +Guest-side driver capable of speaking the Xen PV networking protocol
    +
         Status, Linux: Supported
         Status, FreeBSD: Supported, Security support external
         Status, NetBSD: Supported, Security support external
         Status, OpenBSD: Supported, Security support external
         Status, Windows: Supported
     
    -Guest-side driver capable of speaking the Xen PV networking protocol
    -
     ### PV Framebuffer (frontend)
     
    -    Status, Linux (xen-fbfront): Supported
    -
     Guest-side driver capable of speaking the Xen PV Framebuffer protocol
     
    +    Status, Linux (xen-fbfront): Supported
    +
     ### PV Console (frontend)
     
         Status, Linux (hvc_xen): Supported
    @@ -381,11 +384,11 @@ Guest-side driver capable of speaking the Xen PV console protocol
     
     ### PV keyboard (frontend)
     
    -    Status, Linux (xen-kbdfront): Supported
    -
     Guest-side driver capable of speaking the Xen PV keyboard protocol.
     Note that the "keyboard protocol" includes mouse / pointer support as well.
     
    +    Status, Linux (xen-kbdfront): Supported
    +
     ### PV USB (frontend)
     
         Status, Linux: Supported
    @@ -399,22 +402,22 @@ there is currently no xl support.
     
     ### PV TPM (frontend)
     
    -    Status, Linux (xen-tpmfront): Tech Preview
    -
     Guest-side driver capable of speaking the Xen PV TPM protocol
     
    -### PV 9pfs frontend
    +    Status, Linux (xen-tpmfront): Tech Preview
     
    -    Status, Linux: Tech Preview
    +### PV 9pfs frontend
     
     Guest-side driver capable of speaking the Xen 9pfs protocol
     
    -### PVCalls (frontend)
    -
         Status, Linux: Tech Preview
     
    +### PVCalls (frontend)
    +
     Guest-side driver capable of making pv system calls
     
    +    Status, Linux: Tech Preview
    +
     ## Virtual device support, host side
     
     For host-side virtual device support,
    @@ -423,6 +426,8 @@ unless otherwise noted.
     
     ### Blkback
     
    +Host-side implementations of the Xen PV block protocol.
    +
         Status, Linux (xen-blkback): Supported
         Status, QEMU (xen_disk), raw format: Supported
         Status, QEMU (xen_disk), qcow format: Supported
    @@ -433,42 +438,41 @@ unless otherwise noted.
         Status, Blktap2, raw format: Deprecated
         Status, Blktap2, vhd format: Deprecated
     
    -Host-side implementations of the Xen PV block protocol.
     Backends only support raw format unless otherwise specified.
     
     ### Netback
     
    +Host-side implementations of Xen PV network protocol
    +
         Status, Linux (xen-netback): Supported
         Status, FreeBSD (netback): Supported, Security support external
         Status, NetBSD (xennetback): Supported, Security support external
     
    -Host-side implementations of Xen PV network protocol
    -
     ### PV Framebuffer (backend)
     
    -    Status, QEMU: Supported
    -
     Host-side implementation of the Xen PV framebuffer protocol
     
    -### PV Console (xenconsoled)
    +    Status, QEMU: Supported
     
    -    Status: Supported
    +### PV Console (xenconsoled)
     
     Host-side implementation of the Xen PV console protocol
     
    -### PV keyboard (backend)
    +    Status: Supported
     
    -    Status, QEMU: Supported
    +### PV keyboard (backend)
     
     Host-side implementation of the Xen PV keyboard protocol.
     Note that the "keyboard protocol" includes mouse / pointer support as well.
     
    -### PV USB (backend)
    -
         Status, QEMU: Supported
     
    +### PV USB (backend)
    +
     Host-side implementation of the Xen PV USB protocol
     
    +    Status, QEMU: Supported
    +
     ### PV SCSI protocol (backend)
     
         Status, Linux: Experimental
    @@ -499,11 +503,11 @@ but has no xl support.
     
     ### Driver Domains
     
    -    Status: Supported, with caveats
    -
     "Driver domains" means allowing non-Domain 0 domains
     with access to physical devices to act as back-ends.
     
    +    Status: Supported, with caveats
    +
     See the appropriate "Device Passthrough" section
     for more information about security support.
     
    @@ -553,13 +557,13 @@ with dom0, driver domains, stub domains, domUs, and so on.
     
     ### x86/Nested PV
     
    -    Status, x86 Xen HVM: Tech Preview
    -
     This means running a Xen hypervisor inside an HVM domain on a Xen system,
     with support for PV L2 guests only
     (i.e., hardware virtualization extensions not provided
     to the guest).
     
    +    Status, x86 Xen HVM: Tech Preview
    +
     This works, but has performance limitations
     because the L1 dom0 can only access emulated L1 devices.
     
    @@ -568,19 +572,19 @@ but nobody has reported on performance.
     
     ### x86/Nested HVM
     
    -    Status, x86 HVM: Experimental
    -
     This means providing hardware virtulization support to guest VMs
     allowing, for instance, a nested Xen to support both PV and HVM guests.
     It also implies support for other hypervisors,
     such as KVM, Hyper-V, Bromium, and so on as guests.
     
    -### vPMU
    +    Status, x86 HVM: Experimental
     
    -    Status, x86: Supported, Not security supported
    +### vPMU
     
     Virtual Performance Management Unit for HVM guests
     
    +    Status, x86: Supported, Not security supported
    +
     Disabled by default (enable with hypervisor command line option).
     This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
     
    @@ -604,14 +608,14 @@ when used to remove drivers and backends from domain 0
     
     ### x86/Multiple IOREQ servers
     
    -   Status: Experimental
    -
     An IOREQ server provides emulated devices to HVM and PVH guests.
     QEMU is normally the only IOREQ server,
     but Xen has support for multiple IOREQ servers.
     This allows for custom or proprietary device emulators
     to be used in addition to QEMU.
     
    +   Status: Experimental
    +
     ### ARM/Non-PCI device passthrough
     
         Status: Supported, not security supported
    @@ -635,7 +639,11 @@ No support for QEMU backends in a 16K or 64K domain.
     
     ## Virtual Hardware, QEMU
     
    -These are devices available in HVM mode using a qemu devicemodel (the default).
    +This section describes supported devices available in HVM mode using a
    +qemu devicemodel (the default).
    +
    +    Status: Support scope restricted 
    +
     Note that other devices are available but not security supported.

This is causing a rendering issue: the footnote is not generated in the right
place; it is added to "stdvga" instead. Presumably this is a corner case in the
table generation tool.
     
     ### x86/Emulated platform devices (QEMU):
    @@ -685,9 +693,10 @@ See the section **Blkback** for image formats supported by QEMU.
     
     ### x86/HVM iPXE
     
    +Booting a guest via PXE.
    +
         Status: Supported, with caveats
     
    -Booting a guest via PXE.
     PXE inherently places full trust of the guest in the network,
     and so should only be used
     when the guest network is under the same administrative control
    @@ -695,17 +704,17 @@ as the guest itself.
     
     ### x86/HVM BIOS
     
    +Booting a guest via guest BIOS firmware
    +
         Status, SeaBIOS (qemu-xen): Supported
         Status, ROMBIOS (qemu-xen-traditional): Supported
     
    -Booting a guest via guest BIOS firmware
    -
     ### x86/HVM OVMF
     
    -    Status, qemu-xen: Supported
    -
     OVMF firmware implements the UEFI boot protocol.
     
    +    Status, qemu-xen: Supported
    +
     # Format and definitions
     
     This file contains prose, and machine-readable fragments.
    -- 
    2.1.4

    
    

Attachment: Impact of headingsupport changes in generated table.png
Description: Impact of headingsupport changes in generated table.png

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

