
Re: [PATCH V6 1/2] libxl: Add support for Virtio disk configuration



On 15.12.21 16:02, Oleksandr wrote:

On 15.12.21 08:08, Juergen Gross wrote:

Hi Juergen

On 14.12.21 18:44, Oleksandr wrote:

On 14.12.21 18:03, Anthony PERARD wrote:

Hi Anthony


On Wed, Dec 08, 2021 at 06:59:43PM +0200, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

This patch adds basic support for configuring and assisting the virtio-disk
backend (emulator), which is intended to run outside of QEMU and can be
run in any domain.
Although the Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack point of view:
  - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
    written to Xenstore is fetched by the frontend (the vdev is not
    passed to the frontend)
  - the ring-ref/event-channel are not used for the backend<->frontend
    communication; the proposed IPC for Virtio is IOREQ/DM
it is still a "block device" and ought to be integrated into the existing
"disk" handling. So, re-use (and adapt) the "disk" parsing/configuration
logic to deal with Virtio devices as well.
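
For illustration, a Virtio disk would then be requested from the guest
config file in the same way as any other disk. The sketch below reuses
the existing xl disk syntax; the key used to select Virtio
("specification=virtio") is only an assumption here and may not match
the option name this patch actually introduces:

    # Illustrative only: a block device served over Virtio instead of
    # the Xen PV protocol, using the existing "disk" configuration item.
    disk = [ 'format=raw, vdev=xvda, access=rw, target=/dev/vg/guest-root, specification=virtio' ]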
How are backends intended to be created? Is there something listening on
xenstore?

You mention QEMU as being the backend; do you intend to have QEMU
listening on xenstore to create a virtio backend? Or maybe it is on the
command line? There is QMP as well, but that's probably a lot more
complicated, as I think libxl would need refactoring for that.


No, QEMU is not involved there. The backend is a standalone application;
it is launched from the command line. The backend reads Xenstore to get the configuration and to detect when a guest with the frontend is created/destroyed.

I think this should be reflected somehow in the configuration, as I
expect qemu might gain this functionality in the future.

I understand this and agree in general (however I am wondering whether this can be postponed until it is actually needed), but ...

This might lead to the need to support some "legacy" options in the future.
I think we should at least consider whether this scheme will cover (or
prohibit) extensions which are already on the horizon.

I'm wondering whether we shouldn't split the backend from the protocol
(or specification?). Something like "protocol=virtio" (default would be
e.g. "xen") and then you could add "backend=external" for your use case?

... I am afraid I didn't get the idea. Are we speaking about the (new?) disk configuration options here, or are these not disk-specific things at all, meant to be applicable to all possible backends?

I was talking about a general approach, using the disk as an example. For
disks it is just rather obvious.

If the former, then could the new backendtype simply do the job? For example, "backendtype=virtio_external" for our current use case and "backendtype=virtio_qemu"
for possible future use cases? Could you please clarify the idea.

I want to avoid overloading the backendtype with information which is
in general not really related to the backend. You can have a qemu-based
qdisk backend serving a Xen PV disk (like today) or a virtio disk.

A similar approach has been chosen for the disk format: it is not part
of the backend, but a parameter of its own. This way e.g. the qdisk
backend can use the original qdisk format, or the qcow format.

In practice we already have something like the "protocol" today:
the disk device name encodes it ("xvd*" is a Xen PV disk, while
"sd*" is an emulated SCSI disk, which happens to be presented to the
guest as "xvd*", too). And this is additional information not
related to the backendtype.
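
To make this concrete, the following (illustrative) entries are both
plain xl disk configurations, and only the vdev name tells the
toolstack how the disk is presented to the guest:

    disk = [
        # vdev "xvda": a Xen PV disk
        'format=raw, vdev=xvda, access=rw, target=/dev/vg/guest-root',
        # vdev "sda": an emulated SCSI disk, which the guest can also
        # see as a PV disk once its PV drivers attach
        'format=raw, vdev=sda, access=rw, target=/var/lib/xen/images/guest.img'
    ]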

So we have basically the following configuration items, which are
orthogonal to each other (some combinations might not make sense,
but in theory most would be possible):

1. protocol: emulated (not PV), Xen (like today), virtio

2. backendtype: phy (blkback), qdisk (qemu), other (e.g. a daemon)

3. format: raw, qcow, qcow2, vhd, qed

The combination virtio+phy would be equivalent to vhost, BTW. And
virtio+other might even use vhost-user, depending on the daemon.
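
Expressed in (hypothetical) xl syntax, with "protocol" as proposed
above next to the existing "backendtype" and "format" options, the
three items might combine like this; none of the "protocol" values
exist today, and "backendtype=other" is only a placeholder for a
standalone daemon:

    # 1. Today's setup: Xen PV protocol, qemu (qdisk) backend, qcow2 image.
    disk = [ 'protocol=xen, backendtype=qdisk, format=qcow2, vdev=xvda, target=/var/lib/xen/images/guest.qcow2' ]

    # 2. The use case of this series: virtio protocol, standalone daemon
    #    backend, raw block device (virtio+other in the terms above).
    disk = [ 'protocol=virtio, backendtype=other, format=raw, vdev=xvda, target=/dev/vg/guest-root' ]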


Juergen
