
Re: [PATCH V7 1/2] libxl: Add support for Virtio disk configuration



On 25.04.22 14:02, Oleksandr wrote:

On 25.04.22 10:43, Juergen Gross wrote:


Hello Juergen


Thank you for the feedback.

On 08.04.22 20:21, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

This patch adds basic support for configuring and assisting a virtio-mmio
based virtio-disk backend (emulator), which is intended to run outside of
Qemu and can run in any domain.
The Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack's point of view:
  - as the frontend is virtio-blk, which is not a Xenbus driver, nothing
    written to Xenstore is fetched by the frontend (the vdev is not
    passed to the frontend)

I thought about the future support on x86.

There we don't have a device tree (and I don't want to introduce it),
so the only ways to specify the backend domain id would be to:

- add some information to ACPI tables
- use boot parameters
- use Xenstore

I understand that, and agree


Thinking further about hotplugging virtio devices, Xenstore seems to be the
only really suitable alternative. Using virtio mechanisms doesn't seem
appropriate, as such information should be retrieved in "platform
specific" ways (see e.g. specifying an "endpoint" in the virtio IOMMU
device [1], [2]). I think the Xenstore information for that purpose
could be rather minimal and it should be device-type agnostic. Having
just a directory with endpoints and associated backend domids would
probably be enough (not needed in this series, of course).
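
Just to illustrate what I mean (the paths and node names below are purely
made up and not part of this series), such a directory could look like:

  /local/domain/<domid>/device/virtio/<endpoint> = "<backend domid>"

e.g. for a guest with two endpoints whose backends live in dom1 and dom0:

  /local/domain/5/device/virtio/0 = "1"
  /local/domain/5/device/virtio/1 = "0"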

Just to make it clear, we are speaking about the possible ways to communicate the backend domid for another series [1], i.e. about an x86 alternative to the device-tree binding "xen,dev-domid" [2]. I was thinking we could avoid using Xenstore on the guest side for that purpose, but I didn't think about hotplug... I assume all the Xenstore bits wouldn't go outside the Xen grant DMA-mapping layer (grant-dma-ops.c)?

I think it would be another driver under drivers/xen/ without the need to
touch any other frontend related file or Xen-related architecture specific
code.
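
Very roughly (the names and the Xenstore layout below are hypothetical and
only meant to illustrate the idea of such a self-contained driver, not a
concrete proposal):

    /* Hypothetical helper in a new drivers/xen/ module: look up the
     * backend domid of a virtio endpoint in the per-domain Xenstore
     * directory sketched above.
     */
    #include <linux/errno.h>
    #include <xen/xenbus.h>
    #include <xen/interface/xen.h>

    static int virtio_xs_backend_domid(const char *endpoint, domid_t *domid)
    {
            unsigned int val;
            int ret;

            /* Reads e.g. "device/virtio/0", relative to the local domain. */
            ret = xenbus_scanf(XBT_NIL, "device/virtio", endpoint, "%u", &val);
            if (ret != 1)
                    return ret < 0 ? ret : -ENOENT;

            *domid = val;
            return 0;
    }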

And with the hotplug option in mind I'm starting to feel uneasy about naming
the new Xenstore node "protocol", as the frontend disk nodes for "normal"
disks already have a "protocol" entry specifying the 64- or 32-bit protocol.
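
(For reference, the values used in that existing "protocol" node are the ABI
names from xen/include/public/io/protocols.h, quoting from memory:

    #define XEN_IO_PROTO_ABI_X86_32   "x86_32-abi"
    #define XEN_IO_PROTO_ABI_X86_64   "x86_64-abi"
    #define XEN_IO_PROTO_ABI_ARM      "arm-abi"

so a "protocol" node carrying a virtio value next to that would be confusing.)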


I noticed the "protocol" node on the frontend side for the traditional Xen PV block device, which serves a different purpose, but I didn't think much about it since the new "protocol" node is only for the backend's use. If we start thinking about the frontend's Xenstore nodes, then yes, it will clash...



Maybe we should really name it "transport" instead?

... For me the "transport" name is associated with the virtio transports: mmio, pci, ccw. But I would be ok with that name. Another possible name could be "specification".

Yeah, looking at the virtio spec this makes sense.

So I would be fine with "specification".

- Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK) as the current
   one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk model

An example of domain configuration for Virtio disk:
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, protocol=virtio-mmio']

With Roger's feedback this would then be "transport=virtio", the "mmio"
part should then be something like "adapter=mmio" (in contrast to
"adapter=pci"), and "adapter" would only be needed in case a device tree
and PCI are both available.
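
I.e. something along the lines of (just illustrating the naming, not a
final syntax):

  disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, transport=virtio, adapter=mmio' ]

with "adapter" left out whenever only one variant is possible.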

ok, will rename. Can we add the "adapter" option (or whatever the name ends up being) later, when there is a real need? For now, I mean within the current series which adds only the virtio-mmio bits on Arm, we can assume that "transport=virtio" implies using virtio-mmio.

Yes, we should add it only when needed.


BTW, if we named the main option "specification", the secondary option "transport" would be a good fit from my PoV.
For example:
disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=other, specification=virtio, transport=mmio']

Fine with me.


Juergen
