
Re: Enabling hypervisor agnosticism for VirtIO backends



On Mon, 6 Sep 2021, AKASHI Takahiro wrote:
> > the second is how many context switches are involved in a transaction.
> > Of course with all things there is a trade off. Things involving the
> > very tightest latency would probably opt for a bare metal backend which
> > I think would imply hypervisor knowledge in the backend binary.
> 
> In the configuration phase of a virtio device, latency won't matter much.
> In device operations (i.e. reads/writes to block devices), if we can
> resolve the 'mmap' issue, as Oleksandr is proposing right now, the only issue is
> how efficiently we can deliver notifications to the opposite side. Right?
> And this is a very common problem whatever approach we take.
> 
> Anyhow, if we do care about latency in my approach, most of the virtio-proxy-
> related code can be re-implemented as just a stub (or shim?) library,
> since the protocols are defined as RPCs.
> In this case, however, we would lose the benefit of providing a "single binary"
> BE.
> (I know this is an arguable requirement, though.)

In my experience, latency, performance, and security are far more
important than providing a single binary.

In my opinion, we should optimize for the best performance and security,
then be practical on the topic of hypervisor agnosticism. For instance,
a shared source with a small hypervisor-specific component, with one
implementation of the small component for each hypervisor, would provide
a good enough hypervisor abstraction. It is good to be hypervisor
agnostic, but I wouldn't go extra lengths to have a single binary. I
cannot picture a case where a BE binary needs to be moved between
different hypervisors and a recompilation is impossible (BE, not FE).
Instead, I can definitely imagine detailed requirements on IRQ latency
having to be lower than 10us or bandwidth higher than 500 MB/sec.

Instead of virtio-proxy, my suggestion is to work together on a common
project and common source with others interested in the same problem.

I would pick something like kvmtool as a basis. It doesn't have to be
kvmtool, and kvmtool specifically is GPL-licensed, which is
unfortunate: it would help if the license were BSD-style, for ease of
integration with Zephyr and other RTOSes.

As long as the project is open to working together on multiple
hypervisors and deployment models then it is fine. For instance, the
shared source could be based on OpenAMP kvmtool [1] (the original
kvmtool likely prefers to stay small and narrow-focused on KVM). OpenAMP
kvmtool was created to add support for hypervisor-less virtio but they
are very open to hypervisors too. It could be a good place to add a Xen
implementation, a KVM fatqueue implementation, a Jailhouse
implementation, etc. -- work together toward the common goal of a single
BE source (not binary) supporting multiple different deployment models.


[1] https://github.com/OpenAMP/kvmtool



 

