
Re: Enabling hypervisor agnosticism for VirtIO backends





On Thu, Sep 2, 2021 at 12:19 AM AKASHI Takahiro <takahiro.akashi@xxxxxxxxxx> wrote:
Hi Christopher,

Thank you for your feedback.

On Mon, Aug 30, 2021 at 12:53:00PM -0700, Christopher Clark wrote:
> [ resending message to ensure delivery to the CCd mailing lists
> post-subscription ]
>
> Apologies for being late to this thread, but I hope to be able to contribute
> to this discussion in a meaningful way. I am grateful for the level of
> interest in this topic. I would like to draw your attention to Argo as a
> suitable technology for development of VirtIO's hypervisor-agnostic
> interfaces.
>
> * Argo is an interdomain communication mechanism in Xen (on x86 and Arm) that
>   can send and receive hypervisor-mediated notifications and messages between
>   domains (VMs). [1] The hypervisor can enforce Mandatory Access Control over
>   all communication between domains. It is derived from the earlier v4v, which
>   has been deployed on millions of machines with the HP/Bromium uXen
>   hypervisor and with OpenXT.
>
> * Argo has a simple interface with a small number of operations that was
>   designed for ease of integration into OS primitives on both Linux (sockets)
>   and Windows (ReadFile/WriteFile) [2].
>     - A unikernel example of using it has also been developed for XTF. [3]
>
> * There has been recent discussion and support in the Xen community for
>   making revisions to the Argo interface to make it hypervisor-agnostic, and
>   support implementations of Argo on other hypervisors. This will enable a
>   single interface for an OS kernel binary to use for inter-VM communication
>   that will work on multiple hypervisors -- this applies equally to both
>   backends and frontend implementations. [4]

Regarding virtio-over-Argo, let me ask a few questions:
(In figure "Virtual device buffer access:Virtio+Argo" in [4])

(for ref, this diagram is from this document:
 https://openxt.atlassian.net/wiki/spaces/DC/pages/1348763698 )

Takahiro, thanks for reading the Virtio-Argo materials.

Some relevant context before answering your questions below: the Argo request
interface that the hypervisor exposes to a guest, which is currently available
only via a dedicated hypercall op, has been discussed within the Xen community
and is open to change in order to better enable guest VM access to Argo
functions in a hypervisor-agnostic way.

The proposal is to allow hypervisors the option to implement and expose any of
multiple access mechanisms for Argo, and then enable a guest device driver to
probe the hypervisor for methods that it is aware of and able to use. The
hypercall op is likely to be retained (in some form), and complemented at least
on x86 with another interface via MSRs presented to the guests.
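
To illustrate the probing idea in code: this is only a sketch of the intent,
since the revised interface has not been settled, and the helper names and
enum below are assumptions rather than anything that exists today.

/*
 * Illustrative sketch only; the revised Argo access mechanisms are still
 * under discussion, so the probe helpers and names here are assumptions.
 * The idea: the guest driver tries each mechanism it knows about and uses
 * the first one the hypervisor responds to.
 */
#include <stdbool.h>

/* Hypothetical detection helpers, e.g. issuing a harmless query op. */
bool argo_try_hypercall(void);   /* existing dedicated hypercall op  */
bool argo_try_msr(void);         /* proposed x86 MSR-based interface */

enum argo_xport {
    ARGO_XPORT_NONE,
    ARGO_XPORT_HYPERCALL,
    ARGO_XPORT_MSR,
};

static enum argo_xport argo_probe_transport(void)
{
    if (argo_try_hypercall())
        return ARGO_XPORT_HYPERCALL;
    if (argo_try_msr())
        return ARGO_XPORT_MSR;
    return ARGO_XPORT_NONE;
}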

 
1) How is the configuration managed?
   With either virtio-mmio or virtio-pci, some negotiation always takes place
   between the FE and BE through the "configuration" space. How can this be
   done in virtio-over-Argo?

Just to be clear about my understanding: your question, in the context of a
Linux kernel virtio device driver implementation, is about how a virtio-argo
transport driver would implement the get_features function of the
virtio_config_ops, as a parallel to the work that vp_get_features does for
virtio-pci, and vm_get_features does for virtio-mmio.
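
For concreteness, here is a minimal sketch of what such a transport hook could
look like; no virtio-argo driver exists yet, so struct virtio_argo_device,
argo_ctrl_request() and VIRTIO_ARGO_OP_GET_FEATURES are assumptions made for
illustration. How the request actually reaches the backend is exactly what the
options listed below would decide.

/*
 * Hypothetical sketch of a virtio-argo transport implementing
 * virtio_config_ops.get_features, analogous to vm_get_features() for
 * virtio-mmio. The Argo-specific names are assumptions, not existing code.
 */
#include <linux/virtio.h>
#include <linux/virtio_config.h>

#define VIRTIO_ARGO_OP_GET_FEATURES 1    /* hypothetical control op */

struct virtio_argo_device {
    struct virtio_device vdev;
    void *ctrl_ring;                     /* Argo channel to the backend */
};

/* Assumed helper: synchronous request/response over an Argo ring. */
int argo_ctrl_request(void *ring, unsigned int op, void *buf, size_t len);

static u64 va_get_features(struct virtio_device *vdev)
{
    struct virtio_argo_device *va =
        container_of(vdev, struct virtio_argo_device, vdev);
    u64 features = 0;

    /* Ask the backend for its 64-bit feature mask over the Argo channel. */
    argo_ctrl_request(va->ctrl_ring, VIRTIO_ARGO_OP_GET_FEATURES,
                      &features, sizeof(features));
    return features;
}

static const struct virtio_config_ops virtio_argo_config_ops = {
    .get_features = va_get_features,
    /* .finalize_features, .get, .set, ... would follow the same pattern */
};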

The design is still open on this and options have been discussed, including:

* an extension to Argo to allow the system toolstack (which is responsible for
  managing guest VMs and enabling connections from front-to-backends)
  to manage a table of "implicit destinations", so a guest can transmit Argo
  messages to eg. "my storage service" port and the hypervisor will deliver it
  based on a destination table pre-programmed by the toolstack for the VM.
  [1]
     - ref: Notes from the December 2019 Xen F2F meeting in Cambridge, UK:
       [1] https://lists.archive.carbon60.com/xen/devel/577800#577800

  Within that feature negotiation function, communication with the backend
  would then occur over that Argo channel.

* IOREQ
  The Xen IOREQ implementation is not currently appropriate for virtio-argo,
  since it requires the backend guest to take foreign memory mappings of
  frontend memory. However, a new HMX interface from the hypervisor could
  support a new DMA Device Model Op that lets the backend ask the hypervisor
  to retrieve specified bytes from the frontend guest, which would provide the
  plumbing for device configuration between an IOREQ server (the device model
  backend implementation) and the guest driver. [2]

  Feature negotiation in the frontend in this case would look very similar to
  the virtio-mmio implementation.

  ref: Argo HMX Transport for VirtIO meeting minutes, from January 2021:
  [2] https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html

* guest ACPI tables that surface the address of a remote Argo endpoint on
  behalf of the toolstack; feature negotiation can then proceed over that Argo
  channel

* emulation of a basic PCI device by the hypervisor (though details not determined)

 
2) Do virtio's available/used vrings and descriptors physically exist, or are
   they emulated virtually over Argo (rings)?

In short: the latter.

In the analysis that I did when looking at this, my observation was that each
side (frontend and backend) should be able to accurately maintain its own local
copy of the available/used vrings and descriptors, with both kept synchronized
by transmitting updates to the other side whenever they are written. For
example, in the Linux frontend implementation, the virtqueue_notify function
calls a function pointer in the virtqueue that is populated by the transport
driver, ie. the virtio-argo driver in this case, which can implement the
necessary logic to coordinate with the backend.
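
As a sketch of that coordination (an assumption for illustration, not an
actual implementation; argo_sendv() and struct virtio_argo_vq are made up
here): the notify callback that the transport registers for each virtqueue
could push the locally written vring updates to the backend over Argo instead
of kicking a doorbell register.

/*
 * Hypothetical sketch only: a notify callback a virtio-argo transport could
 * pass to vring_create_virtqueue(). virtqueue_notify() in the virtio core
 * ends up calling this; it sends the frontend's locally written vring
 * updates to the backend over an Argo ring rather than writing a doorbell.
 */
#include <linux/virtio.h>
#include <linux/virtio_ring.h>

struct virtio_argo_vq {
    void   *sync_ring;           /* Argo ring used to mirror vring updates */
    void   *pending_update;      /* serialized descriptor/avail-ring delta */
    size_t  pending_update_len;
};

/* Assumed helper: copy a buffer into the destination Argo ring. */
int argo_sendv(void *ring, const void *buf, size_t len);

static bool virtio_argo_notify(struct virtqueue *vq)
{
    struct virtio_argo_vq *avq = vq->priv;   /* transport-private state */

    /* Transmit the updates written since the last notify, so the backend
     * can apply them to its own copy of the vrings. */
    return argo_sendv(avq->sync_ring, avq->pending_update,
                      avq->pending_update_len) >= 0;
}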
 
3) The payload in a request will be copied into the receiver's Argo ring.
   What does the address in a descriptor mean?
   Address/offset in a ring buffer?

Effectively yes. I would treat it as a handle that is used to identify and
retrieve data from the messages exchanged between the frontend transport driver
and the backend via the Argo rings established for the data path. In the
diagram, those are the "Argo ring for reads" and the "Argo ring for writes".
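
Purely as an illustration of "handle" (this encoding is an assumption, not
part of any existing design document): the descriptor's addr field could be
interpreted as locating the payload within the data-path Argo traffic rather
than as a guest-physical address, e.g.:

#include <linux/types.h>

struct virtio_argo_desc_handle {
    u64 ring_offset;    /* byte offset of the payload within the data ring */
    u32 len;            /* payload length, mirroring the descriptor's len  */
};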
 
4) Estimate of performance or latency?

Different access methods to Argo (ie. relating to my answer to your question 1)
above) will have different performance characteristics.

Data copying is necessarily involved for any Hypervisor-Mediated data eXchange
(HMX) mechanism [1], such as Argo, where there is no shared memory between guest
VMs, but the performance profile on modern CPUs with sizable caches has been
demonstrated to be acceptable for the guest virtual device driver use case in
the HP/Bromium vSentry uXen product. The VirtIO structure is somewhat different,
though.

Further performance profiling and measurement will be valuable for tuning the
implementation and for developing additional interfaces (eg. an asynchronous
send primitive) - some of this has been discussed and described on the
VirtIO-Argo-Development-Phase-1 wiki page [2].

[1]
https://wiki.xenproject.org/wiki/Argo:_Hypervisor-Mediated_Exchange_(HMX)_for_Xen

[2]
https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development%3A+Phase+1
 
   It appears that, on the FE side, at least three hypervisor calls (and data
   copies) need to be made for every request, right?

For a write, counting FE sendv ops:
1: the write data payload is sent via the "Argo ring for writes"
2: the descriptor is sent via a sync of the available/descriptor ring
  -- is there a third one that I am missing?
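
Expressed as a sketch, the two ops counted above for a write would look
something like this (argo_sendv(), the ring handles and the serialized vring
update are all hypothetical placeholders):

#include <linux/types.h>

/* Assumed helper: copy a buffer into the destination Argo ring. */
int argo_sendv(void *ring, const void *buf, size_t len);

static int virtio_argo_write_request(void *write_ring, void *sync_ring,
                                     const void *payload, size_t payload_len,
                                     const void *vring_update, size_t update_len)
{
    int ret;

    /* sendv 1: copy the write payload into the "Argo ring for writes" */
    ret = argo_sendv(write_ring, payload, payload_len);
    if (ret < 0)
        return ret;

    /* sendv 2: sync the descriptor / available-ring update to the backend */
    return argo_sendv(sync_ring, vring_update, update_len);
}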

Christopher
 

Thanks,
-Takahiro Akashi


> * Here are the design documents for building VirtIO-over-Argo, to support a
>   hypervisor-agnostic frontend VirtIO transport driver using Argo.
>
> The Development Plan to build VirtIO virtual device support over Argo transport:
> https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
>
> A design for using VirtIO over Argo, describing how VirtIO data structures
> and communication is handled over the Argo transport:
> https://openxt.atlassian.net/wiki/spaces/DC/pages/1348763698/VirtIO+Argo
>
> Diagram (from the above document) showing how VirtIO rings are synchronized
> between domains without using shared memory:
> https://openxt.atlassian.net/46e1c93b-2b87-4cb2-951e-abd4377a1194#media-blob-url="">
>
> Please note that the above design documents show that the existing VirtIO
> device drivers, and both vring and virtqueue data structures can be preserved
> while interdomain communication can be performed with no shared memory
> required for most drivers; (the exceptions where further design is required
> are those such as virtual framebuffer devices where shared memory regions are
> intentionally added to the communication structure beyond the vrings and
> virtqueues).
>
> An analysis of VirtIO and Argo, informing the design:
> https://openxt.atlassian.net/wiki/spaces/DC/pages/1333428225/Analysis+of+Argo+as+a+transport+medium+for+VirtIO
>
> * Argo can be used for a communication path for configuration between the
>   backend and the toolstack, avoiding the need for a dependency on XenStore,
>   which is an advantage for any hypervisor-agnostic design. It is also
>   amenable to a notification mechanism that is not based on Xen event
>   channels.
>
> * Argo does not use or require shared memory between VMs and provides an
>   alternative to the use of foreign shared memory mappings. It avoids some
>   of the complexities involved with using grants (eg. XSA-300).
>
> * Argo supports Mandatory Access Control by the hypervisor, satisfying a
>   common certification requirement.
>
> * The Argo headers are BSD-licensed and the Xen hypervisor implementation is
>   GPLv2 but accessible via the hypercall interface. The licensing should not
>   present an obstacle to adoption of Argo in guest software or implementation
>   by other hypervisors.
>
> * Since the interface that Argo presents to a guest VM is similar to DMA, a
>   VirtIO-Argo frontend transport driver should be able to operate with a
>   physical VirtIO-enabled smart-NIC if the toolstack and an Argo-aware
>   backend provide support.
>
> The next Xen Community Call is next week and I would be happy to answer
> questions about Argo and on this topic. I will also be following this thread.
>
> Christopher
> (Argo maintainer, Xen Community)
>
> --------------------------------------------------------------------------------
> [1]
> An introduction to Argo:
> https://static.sched.com/hosted_files/xensummit19/92/Argo%20and%20HMX%20-%20OpenXT%20-%20Christopher%20Clark%20-%20Xen%20Summit%202019.pdf
> https://www.youtube.com/watch?v=cnC0Tg3jqJQ
> Xen Wiki page for Argo:
> https://wiki.xenproject.org/wiki/Argo:_Hypervisor-Mediated_Exchange_(HMX)_for_Xen
>
> [2]
> OpenXT Linux Argo driver and userspace library:
> https://github.com/openxt/linux-xen-argo
>
> Windows V4V at OpenXT wiki:
> https://openxt.atlassian.net/wiki/spaces/DC/pages/14844007/V4V
> Windows v4v driver source:
> https://github.com/OpenXT/xc-windows/tree/master/xenv4v
>
> HP/Bromium uXen V4V driver:
> https://github.com/uxen-virt/uxen/tree/ascara/windows/uxenv4vlib
>
> [3]
> v2 of the Argo test unikernel for XTF:
> https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02234.html
>
> [4]
> Argo HMX Transport for VirtIO meeting minutes:
> https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html
>
> VirtIO-Argo Development wiki page:
> https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
>


 

