
Argo HMX Transport for VirtIO meeting minutes



Minutes from the HMX Argo-VirtIO transport topic call held on the 14th of
January, 2021.

Thanks to Rich Persaud for organizing and hosting the call, to the
call attendees for the highly productive discussion, and to Daniel Smith
for early assistance with the minutes; apologies for my delay in
completing and posting these.

The VirtIO-Argo Development Wiki page has been updated for the items discussed:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
and a PDF copy of the page is attached.

thanks,

Christopher

--------------------------------------------------------------------------------
## Argo: Hypervisor-agnostic Guest interface for x86

Discussed: an interface for invoking Argo via an alternative mechanism to
hypercalls. MSRs suggested.
Objective: a single interface to guests supported by multiple hypervisors,
since a cross-hypervisor solution is a stronger proposal to the VirtIO
Community.
The proposal was introduced in a reply on the mailing list thread prior to the call:
"Re: [openxt-dev] VirtIO-Argo initial development proposal"
https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01802.html

Summary notes on the proposal are on the VirtIO-Argo development wiki:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-Hypervisor-agnostic-Hypervisor-Interface

Discussion:
- hypercalls: difficult to make portable across hypervisors
- Intel reserves an MSR range that is always invalid: VMware, Hyper-V and
  others use it for virtualization MSRs
- concern: some hypervisors do not intercept MSRs at all
    - so nested hypervisors encounter unexpected behaviour
- performance is sensitive to whichever mechanism is selected
- alternative options exist:
    - HP/Bromium AX uses CPUIDs
    - Microsoft Hyper-V uses EPT faults
- Arm context: hypercalls may be acceptable on Arm hypervisors
    - it is the standard way; Argo can be implemented in either firmware
      or the hypervisor; the access instruction differs between them
    - on PV-only hypervisors with no hypercalls: may not work at all
  https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01843.html

Proposal: it is unlikely that a single mechanism will ever work for all
hypervisors, so plan instead to allow multiple mechanisms and enable the guest
device driver to probe for them
- a hypervisor can implement as many mechanisms as is feasible for it
- the guest can select among those presented as available
- preference for mechanisms close to platform architecture
- ensure scheme is forward-extensible for new mechanisms later
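
To illustrate the probing approach, here is a minimal sketch in C of the
mechanism-selection order a guest driver might use; the enum values and
detection helpers are hypothetical and not part of any existing Argo or Xen
interface:

```c
#include <stdbool.h>

/*
 * Hypothetical sketch only: possible invocation mechanisms a guest driver
 * could probe for.  None of these names exist in the current Argo ABI.
 */
enum argo_mech {
    ARGO_MECH_NONE,
    ARGO_MECH_MSR,        /* synthetic MSR in an always-invalid range    */
    ARGO_MECH_CPUID,      /* CPUID-based invocation (cf. HP/Bromium AX)  */
    ARGO_MECH_HYPERCALL,  /* classic hypercall instruction/page          */
};

/* Hypothetical per-mechanism detection helpers, provided by platform code. */
bool argo_detect_msr(void);
bool argo_detect_cpuid(void);
bool argo_detect_hypercall(void);

/*
 * Prefer mechanisms closest to the platform architecture; fall through so
 * the same driver works under hypervisors that expose only a subset, and
 * new mechanisms can be added to the front of the list later.
 */
enum argo_mech argo_probe_mech(void)
{
    if (argo_detect_msr())
        return ARGO_MECH_MSR;
    if (argo_detect_cpuid())
        return ARGO_MECH_CPUID;
    if (argo_detect_hypercall())
        return ARGO_MECH_HYPERCALL;
    return ARGO_MECH_NONE;
}
```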

--------------------------------------------------------------------------------
## Hypervisor-to-guest interrupt delivery: alternative to Xen event channels

Proposed: Argo interrupts delivered via a native mechanism, like MSI delivery,
with destination APIC ID, vector, delivery mode and trigger mode.
Ref: https://lists.xenproject.org/archives/html/xen-devel/2020-12/msg01802.html
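
As background for the fields named in the proposal above, this is a small
sketch of the standard x86 MSI address/data encoding (destination APIC ID,
vector, delivery mode, trigger mode); it is architectural background only,
not an Argo interface definition:

```c
#include <stdint.h>

/* Standard x86 MSI address encoding: bits 19:12 carry the destination
 * APIC ID within the 0xFEExxxxx address window. */
static inline uint32_t msi_address(uint8_t dest_apic_id)
{
    return 0xFEE00000u | ((uint32_t)dest_apic_id << 12);
}

/* Standard x86 MSI data encoding: bits 7:0 = vector, bits 10:8 = delivery
 * mode (0 = fixed), bit 15 = trigger mode (0 = edge, 1 = level). */
static inline uint32_t msi_data(uint8_t vector, uint8_t delivery_mode,
                                uint8_t trigger_mode)
{
    return (uint32_t)vector |
           ((uint32_t)(delivery_mode & 0x7) << 8) |
           ((uint32_t)(trigger_mode & 0x1) << 15);
}
```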

- MSIs: OK for guests that support local APIC
- Hypervisors post-Xen learned from Xen: register a vector callback
    - sometimes hardware sets bits
    - MSI not necessary
    - likely arch-specific; could be hypervisor-agnostic on same arch
- The vector approach is right; some OSes may need help though, since vector
  allocation can be hard
    - so an ACPI-type construct or a device can assist in communicating the
      vector to the OS
    - want: the OS to register a vector, and the driver to tell the hypervisor
      which vector to use

Context: Xen event channel implementation is not available in all guests;
don't want to require it as a dependency for VirtIO-Argo transport.
- Want: Avoid extra muxing with Argo rings on top of event channels
- Vector-per-ring or vector-per-CPU? Vector-per-CPU is preferable.
    - aim: avoid building muxing policy into the vector allocation logic
- Scalability, interface design consideration/requirement:
    Allow expansion: one vector per CPU => multiple vectors per CPU
    - eg. different priority for different rings:
      will need different vectors to make notifications work correctly
    - to investigate: specify the vector for every ring when registered
      and allow same vector for multiple rings (fwds-compatible)
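
To make the last point concrete, here is a hypothetical sketch of what a
ring-registration structure carrying a vector could look like; none of these
fields exist in the current Argo ABI and the layout is illustrative only:

```c
#include <stdint.h>

/*
 * Hypothetical, forward-compatible ring registration: the guest names the
 * vector to raise for this ring, and the same vector may be shared by
 * multiple rings.  Field names and layout are illustrative only.
 */
struct argo_ring_register_vec {
    uint64_t ring_gfn;   /* guest frame number of the ring memory       */
    uint32_t ring_len;   /* ring size in bytes                          */
    uint32_t aport;      /* Argo port this ring receives on             */
    uint8_t  vector;     /* interrupt vector to deliver for this ring   */
    uint8_t  pad[7];     /* keep the structure 8-byte aligned           */
};
```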

--------------------------------------------------------------------------------
## Virtual device discovery

Reference: the uXen v4v storage driver uses a bitmap, retrieved via ioport
access, to enumerate the devices available
    - advantages:
        - simple logic in the driver
        - assists with allocation on Windows
    - negatives:
        - very x86-arch-specific; not a cross-architecture design
        - not a great interface across multiple hypervisors

Alternative proposal: allocate a range of well-known Argo port addresses

Context: planned extensions to Argo, documented in minutes from the Cambridge
F2F meeting
    - meeting minutes:
https://lists.archive.carbon60.com/xen/devel/577800#577800
    - section with notes on the VirtIO-Argo development wiki page:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-VirtIO-Argo-with-uXen-and-PX-hypervisors
=> concept 1: an "implicit destinations" table, used when the destination is
              unspecified
=> concept 2: the toolstack is allowed to program the table to connect VMs
              to services

Plan: a VM talks to its storage service via a well-known Argo port ID reserved
for that purpose; likewise for networking and other services.

- Access to services via a well-known address: consensus OK
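
As a sketch of what well-known service addresses could look like, the
constants below are hypothetical placeholders; the actual numbering would be
settled as part of the implicit-destinations work:

```c
/*
 * Hypothetical well-known Argo port assignments for platform services.
 * The values are placeholders only.
 */
#define ARGO_PORT_STORAGE   0x00010000u  /* block/storage service */
#define ARGO_PORT_NET       0x00010001u  /* network service       */
#define ARGO_PORT_CONSOLE   0x00010002u  /* console service       */
```

A frontend would then send to, eg. ARGO_PORT_STORAGE without knowing the
backend domid; the implicit-destinations table, programmed by the toolstack,
routes the traffic to the VM providing that service.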

Discussion covered:
- communicating endpoint identifiers from source to destination,
  and the effects of nesting on this
- interest expressed in a design allowing for capability-based systems
- labels conveyed along the transport exist to support enforcement by the
  hypervisor; they are also provided to the receiver for its own reasoning,
  where meaningful there
- access to services via well-known identifiers supports out-of-guest
  reasoning and request routing

Notes added to the VirtIO-Argo development wiki page:
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-VirtIO-Argo-transport:-Virtual-Device-Discovery

--------------------------------------------------------------------------------
## VirtIO-MMIO driver and Xen; IOREQ

A VirtIO-MMIO transport driver is under development by EPAM for the Arm
architecture, for an automotive production customer.

Xen needs work to forward guest memory accesses to an emulator, so the
existing Xen-on-x86 'IOREQ' feature is being ported to Arm; the work is under
review for Xen.
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html

- working demonstration: VirtIO block device instead of Xen PV block
- no modifications to the guest OS
- approach viable for x86

Relating IOREQ to the VirtIO-Argo transport: not a natural fit, due to the
IOREQ architecture's use of the device emulator, shared memory mappings and
event channels.

Discussion: could Argo perform the DMA transfers between a guest and the
privileged guest doing emulation for it? The aim is for the system to work
more like hardware.
Response: consider a new DMA Device Model Operation (DMOP): it would have a
permission model as per foreign mapping, but would enable a guest VM to
request that bytes be fetched on its behalf, as an alternative to foreign
mapping. Note: the design needs to align with the new vIOMMU development,
which affects the paths involved in I/O emulation.

The DMOP ABI is designed to be safe for use from userspace. Xen also has a
longstanding capability for guests to transfer data via the grant copy
operation. A new op could enable some performance improvements for
introspection: eg. one hypercall vs. 4-5 hypercalls plus complicated
invalidation, which is helpful for eg. 5-byte accesses.
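
A rough sketch of what the request for such a DMA-style operation could carry
is shown below; this is not an existing Xen DMOP, and all names and fields are
hypothetical:

```c
#include <stdint.h>

/*
 * Hypothetical DMA device-model operation: the emulating domain asks the
 * hypervisor to copy bytes to or from a target guest on its behalf, as an
 * alternative to foreign-mapping the guest's memory.  Illustrative only.
 */
#define DMOP_DMA_TO_GUEST    (1u << 0)  /* copy emulator buffer -> guest */
#define DMOP_DMA_FROM_GUEST  (1u << 1)  /* copy guest -> emulator buffer */

struct dmop_dma_copy {
    uint64_t target_gpa;  /* guest-physical address in the target domain */
    uint64_t local_va;    /* buffer address in the emulator's space      */
    uint32_t len;         /* number of bytes to copy                     */
    uint32_t flags;       /* direction, per the flags above              */
};
```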

[ related context, added here post-call:
Vates XCP-NG post on IOREQ:
"Device Emulation in the Xen Hypervisor" by Bobby Eschleman:
https://xcp-ng.org/blog/2020/06/03/device-emulation-in-the-xen-hypervisor/

Intel: External monitoring of VMs via Intel Processor Trace in Xen 4.15:
https://twitter.com/tklengyel/status/1357769674696630274?s=21 ]

--------------------------------------------------------------------------------
## Argo Linux guest-to-userspace driver interface

- Guests that use standard VirtIO drivers, with VirtIO-Argo transport,
  don't need another Argo Linux driver; but:
- Host Platform VMs (eg. Dom0, driver domains, stub domains) run
  userspace software, eg. a device-model emulator such as QEMU,
  to implement the backend of split device drivers, and do need an interface
  to Argo via the kernel that is separate from the VirtIO-Argo transport
  driver.

The Argo Linux driver also has a separate function: providing non-VirtIO
guest-to-guest communication via Argo to Argo-enlightened VMs.

VSock: explored as a way for Argo to sit underneath an existing Linux
interface; it assists application compatibility: a standard socket header and
syscalls, with the transport abstracted away. Hyper-V implemented a transport
protocol under the VSock address family, so Argo could follow.

Question: how to determine the destination Argo endpoint (domid) from the
address provided by a guest initiating communication:
eg. an abstract scheme: "I want to talk to my storage"
    - not simple to insert into VSock; predetermined identifiers could be used
    - a guest is not expected to know its own domid (self identifier)
    - other hypervisor implementations on VSock use pre-known IDs

ie. this raises: should addressing be based on knowing the remote domid?
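
To make the addressing question concrete, here is a minimal AF_VSOCK client
sketch; the CID plays the role of the remote domid, and the CID and port
values below are placeholders rather than anything defined for Argo:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
    /* AF_VSOCK addressing is (CID, port); the CID is the piece that maps
     * onto "which remote domain", which is exactly what a guest using an
     * abstract scheme like "my storage" may not know. */
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_vm addr;
    memset(&addr, 0, sizeof(addr));
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = 3;        /* placeholder: stands in for the remote domid */
    addr.svm_port = 5000;    /* placeholder service port                    */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    /* ... exchange data over fd ... */
    close(fd);
    return 0;
}
```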

VSock likely will not be the interface used for communicating from userspace
to the domain kernel in support of the VirtIO-Argo transport backend.

Forward direction: the Argo Linux driver is to be built modularly, similar to
the uXen v4v driver, with a library core (a misc driver with ring and
interrupt handling logic, etc.) plus separate drivers that export different
interfaces to userspace for access.
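
For illustration of the library-core shape, here is a minimal Linux
misc-device skeleton of the kind such a core module might build on; the names
are placeholders and this is not the OpenXT or uXen driver:

```c
// SPDX-License-Identifier: GPL-2.0
/*
 * Minimal sketch of a misc-device "library core" skeleton of the kind
 * described above.  Names are placeholders; this is not the OpenXT or
 * uXen driver.
 */
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>

static int argo_core_open(struct inode *inode, struct file *filp)
{
    /* Per-open state (ring bindings, etc.) would be set up here. */
    return 0;
}

static const struct file_operations argo_core_fops = {
    .owner = THIS_MODULE,
    .open  = argo_core_open,
};

static struct miscdevice argo_core_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "argo-core",          /* placeholder device name */
    .fops  = &argo_core_fops,
};

static int __init argo_core_init(void)
{
    /* Ring and interrupt handling setup would live in this core module. */
    return misc_register(&argo_core_dev);
}

static void __exit argo_core_exit(void)
{
    misc_deregister(&argo_core_dev);
}

module_init(argo_core_init);
module_exit(argo_core_exit);
MODULE_LICENSE("GPL");
```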

### Available Linux Argo/v4v device drivers:
uXen source code is available
- includes an implementation of the v4v Linux driver
https://github.com/uxen-virt/uxen/tree/ascara/vm-support/linux
- current OpenXT Argo Linux driver and exploration of a VSock Argo driver:
https://github.com/OpenXT/linux-xen-argo

#### Projects on the VirtIO-Argo development wiki page:
* Project: unification of the v4v and Argo interfaces
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-unification-of-the-v4v-and-Argo-interfaces
* Project: Port the v4v Windows device driver to Argo
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Project:-Port-the-v4v-Windows-device-driver-to-Argo
* Comparison of VM/guest Argo interface options
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1#Comparison-of-VM/guest-Argo-interface-options

--------------------------------------------------------------------------------
## Reference: "VirtIO-Argo initial development" thread:
  https://groups.google.com/g/openxt/c/yKR5JFOSmTc?pli=1

Attachment: Argo-HMX-Transport-for-VirtIO.pdf
Description: Adobe PDF document


 

