
Re: Enabling hypervisor agnosticism for VirtIO backends



Hi,

I have not yet gone through all of your comments below,
so just one comment for now:

On Mon, Sep 06, 2021 at 05:57:43PM -0700, Christopher Clark wrote:
> On Thu, Sep 2, 2021 at 12:19 AM AKASHI Takahiro <takahiro.akashi@xxxxxxxxxx>
> wrote:

(snip)

> >    It appears that, on FE side, at least three hypervisor calls (and data
> >    copying) need to be invoked at every request, right?
> >
> 
> For a write, counting FE sendv ops:
> 1: the write data payload is sent via the "Argo ring for writes"
> 2: the descriptor is sent via a sync of the available/descriptor ring
>   -- is there a third one that I am missing?

In the diagram, I can see:
a) data transmitted by Argo sendv,
b) the descriptor written after the data sendv, and
c) the VirtIO ring sync'd to the back-end via a separate sendv.

Oops, (b) is not a hypervisor call, is it?
(But I guess you will still need yet another call for the notification,
since this transport has no QueueNotify configuration register?)
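
Just to check my own understanding of the FE write path, here is a rough
sketch in C of how I read the diagram. The argo_sendv() wrapper, the port
numbers and the message types below are all made up for illustration; they
are not the real Argo interface, just placeholders for whatever the
transport driver would actually call:

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

/* Placeholder standing in for the real Argo sendv hypercall wrapper;
 * stubbed out so that this sketch compiles. */
static int argo_sendv(uint16_t dst_domid, uint32_t dst_port,
                      const struct iovec *iov, unsigned int niov,
                      uint32_t message_type)
{
    (void)dst_domid; (void)dst_port; (void)iov; (void)niov;
    (void)message_type;
    return 0;   /* pretend the hypervisor copied the data */
}

/* Made-up ports/types, only used to label the steps from the diagram. */
#define WRITE_DATA_PORT   0x100   /* the "Argo ring for writes"        */
#define VRING_SYNC_PORT   0x101   /* ring used to mirror vring updates */
#define MSG_WRITE_PAYLOAD 1
#define MSG_VRING_UPDATE  2
#define MSG_QUEUE_NOTIFY  3

static int fe_write_request(uint16_t be_domid,
                            const void *buf, size_t len,
                            const void *ring_update, size_t ring_update_len)
{
    struct iovec iov;
    int ret;

    /* (a) first hypervisor call: send the write payload itself. */
    iov.iov_base = (void *)buf;
    iov.iov_len  = len;
    ret = argo_sendv(be_domid, WRITE_DATA_PORT, &iov, 1, MSG_WRITE_PAYLOAD);
    if (ret < 0)
        return ret;

    /* (b) writing the descriptor/avail entry is only a local memory
     * update of the FE's private vring -- no hypervisor call here. */

    /* (c) second hypervisor call: mirror the updated ring entries. */
    iov.iov_base = (void *)ring_update;
    iov.iov_len  = ring_update_len;
    ret = argo_sendv(be_domid, VRING_SYNC_PORT, &iov, 1, MSG_VRING_UPDATE);
    if (ret < 0)
        return ret;

    /* Possible third call: a QueueNotify replacement to kick the BE,
     * unless the sendv in (c) already acts as the notification. */
    return argo_sendv(be_domid, VRING_SYNC_PORT, NULL, 0, MSG_QUEUE_NOTIFY);
}

If the sendv in (c) already raises an event on the BE side, then the
explicit notify at the end would be unnecessary and we are back to two
calls per request; otherwise I count three.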

Thanks,
-Takahiro Akashi


> Christopher
> 
> 
> >
> > Thanks,
> > -Takahiro Akashi
> >
> >
> > > * Here are the design documents for building VirtIO-over-Argo, to support a
> > >   hypervisor-agnostic frontend VirtIO transport driver using Argo.
> > >
> > > The Development Plan to build VirtIO virtual device support over Argo
> > > transport:
> > >
> > > https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
> > >
> > > A design for using VirtIO over Argo, describing how VirtIO data structures
> > > and communication is handled over the Argo transport:
> > > https://openxt.atlassian.net/wiki/spaces/DC/pages/1348763698/VirtIO+Argo
> > >
> > > Diagram (from the above document) showing how VirtIO rings are synchronized
> > > between domains without using shared memory:
> > >
> > > https://openxt.atlassian.net/46e1c93b-2b87-4cb2-951e-abd4377a1194#media-blob-url=true&id=01f7d0e1-7686-4f0b-88e1-457c1d30df40&collection=contentId-1348763698&contextId=1348763698&mimeType=image%2Fpng&name=device-buffer-access-virtio-argo.png&size=243175&width=1106&height=1241
> > >
> > > Please note that the above design documents show that the existing VirtIO
> > > device drivers, and both vring and virtqueue data structures can be
> > > preserved while interdomain communication can be performed with no shared
> > > memory required for most drivers; (the exceptions where further design is
> > > required are those such as virtual framebuffer devices where shared memory
> > > regions are intentionally added to the communication structure beyond the
> > > vrings and virtqueues).
> > >
> > > An analysis of VirtIO and Argo, informing the design:
> > >
> > > https://openxt.atlassian.net/wiki/spaces/DC/pages/1333428225/Analysis+of+Argo+as+a+transport+medium+for+VirtIO
> > >
> > > * Argo can be used for a communication path for configuration between the
> > >   backend and the toolstack, avoiding the need for a dependency on XenStore,
> > >   which is an advantage for any hypervisor-agnostic design. It is also
> > >   amenable to a notification mechanism that is not based on Xen event
> > >   channels.
> > >
> > > * Argo does not use or require shared memory between VMs and provides an
> > >   alternative to the use of foreign shared memory mappings. It avoids some
> > >   of the complexities involved with using grants (eg. XSA-300).
> > >
> > > * Argo supports Mandatory Access Control by the hypervisor, satisfying a
> > >   common certification requirement.
> > >
> > > * The Argo headers are BSD-licensed and the Xen hypervisor implementation
> > >   is GPLv2 but accessible via the hypercall interface. The licensing should
> > >   not present an obstacle to adoption of Argo in guest software or
> > >   implementation by other hypervisors.
> > >
> > > * Since the interface that Argo presents to a guest VM is similar to DMA, a
> > >   VirtIO-Argo frontend transport driver should be able to operate with a
> > >   physical VirtIO-enabled smart-NIC if the toolstack and an Argo-aware
> > >   backend provide support.
> > >
> > > The next Xen Community Call is next week and I would be happy to answer
> > > questions about Argo and on this topic. I will also be following this
> > > thread.
> > >
> > > Christopher
> > > (Argo maintainer, Xen Community)
> > >
> > >
> > > --------------------------------------------------------------------------------
> > > [1]
> > > An introduction to Argo:
> > >
> > > https://static.sched.com/hosted_files/xensummit19/92/Argo%20and%20HMX%20-%20OpenXT%20-%20Christopher%20Clark%20-%20Xen%20Summit%202019.pdf
> > > https://www.youtube.com/watch?v=cnC0Tg3jqJQ
> > > Xen Wiki page for Argo:
> > >
> > > https://wiki.xenproject.org/wiki/Argo:_Hypervisor-Mediated_Exchange_(HMX)_for_Xen
> > >
> > > [2]
> > > OpenXT Linux Argo driver and userspace library:
> > > https://github.com/openxt/linux-xen-argo
> > >
> > > Windows V4V at OpenXT wiki:
> > > https://openxt.atlassian.net/wiki/spaces/DC/pages/14844007/V4V
> > > Windows v4v driver source:
> > > https://github.com/OpenXT/xc-windows/tree/master/xenv4v
> > >
> > > HP/Bromium uXen V4V driver:
> > > https://github.com/uxen-virt/uxen/tree/ascara/windows/uxenv4vlib
> > >
> > > [3]
> > > v2 of the Argo test unikernel for XTF:
> > >
> > > https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02234.html
> > >
> > > [4]
> > > Argo HMX Transport for VirtIO meeting minutes:
> > >
> > > https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html
> > >
> > > VirtIO-Argo Development wiki page:
> > >
> > > https://openxt.atlassian.net/wiki/spaces/DC/pages/1696169985/VirtIO-Argo+Development+Phase+1
> > >
> >
> >



 

