
Re: [Xen-devel] [PATCH v1 14/21] Ovmf/Xen: allow non-PCI usage of XenBusDxe



On 01/23/15 16:03, Ard Biesheuvel wrote:
> While Xen on Intel uses a virtual PCI device to communicate the
> base address of the grant table, the ARM implementation uses a DT
> node, which is fundamentally incompatible with the way XenBusDxe is
> implemented, i.e., as a UEFI Driver Model implementation for a PCI
> device.
> 
> To allow the non-PCI implementations to use this driver anyway, this
> patch introduces an abstract XENIO_PROTOCOL protocol, which contains
> just the grant table base address. The Intel implementation is adapted
> to allocate such a protocol on the fly based on the PCI config space
> metadata, so it operates as before. Other users can invoke the driver
> by installing a XENIO_PROTOCOL instance on a handle, and invoking
> ConnectController().
> 
> Contributed-under: TianoCore Contribution Agreement 1.0
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> ---
>  OvmfPkg/Include/Protocol/XenIo.h  |  48 ++++++++++
>  OvmfPkg/OvmfPkg.dec               |   1 +
>  OvmfPkg/XenBusDxe/ComponentName.c |   2 +-
>  OvmfPkg/XenBusDxe/GrantTable.c    |   5 +-
>  OvmfPkg/XenBusDxe/GrantTable.h    |   3 +-
>  OvmfPkg/XenBusDxe/XenBus.c        |   6 +-
>  OvmfPkg/XenBusDxe/XenBusDxe.c     | 190 ++++++++++++++++++++++++++++++++------
>  OvmfPkg/XenBusDxe/XenBusDxe.h     |   2 +
>  OvmfPkg/XenBusDxe/XenBusDxe.inf   |   1 +
>  9 files changed, 220 insertions(+), 38 deletions(-)
>  create mode 100644 OvmfPkg/Include/Protocol/XenIo.h

This is what we have with virtio:

  EFI_BLOCK_IO_PROTOCOL                             EFI_SIMPLE_NETWORK_PROTOCOL
  [OvmfPkg/VirtioBlkDxe]                              [OvmfPkg/VirtioNetDxe]
             |                                                   |
             |         EFI_EXT_SCSI_PASS_THRU_PROTOCOL           |
             |             [OvmfPkg/VirtioScsiDxe]               |
             |                        |                          |
             +------------------------+--------------------------+
                                      |
                           VIRTIO_DEVICE_PROTOCOL
                                      |
                +---------------------+---------------------+
                |                                           |
  [OvmfPkg/VirtioPciDeviceDxe]                  [custom platform drivers]
                |                                           |
                |                                           |
       EFI_PCI_IO_PROTOCOL                [OvmfPkg/Library/VirtioMmioDeviceLib]
 [MdeModulePkg/Bus/Pci/PciBusDxe]              direct MMIO register access


And this is what we should have for Xen, I think:


                             EFI_BLOCK_IO_PROTOCOL
                             [OvmfPkg/XenPvBlkDxe]
                                      |
                                XENBUS_PROTOCOL
                              [OvmfPkg/XenBusDxe]
                                      |
                                XENIO_PROTOCOL
                                      |
                +---------------------+-----------------------+
                |                                             |
     [OvmfPkg/XenIoPciDxe]              [ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe]
                |                                             |
       EFI_PCI_IO_PROTOCOL              [ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib]
 [MdeModulePkg/Bus/Pci/PciBusDxe]


XENIO_PROTOCOL should abstract both of the following:
- hypercall implementation
- grant table base address
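
For illustration, here's a minimal sketch of such a protocol. The member
names and the hypercall prototype are placeholders I'm making up here;
the real parameter lists would have to be distilled from XenBusDxe's
current XenHypercall* helpers:

  #include <Uefi.h>

  typedef struct _XENIO_PROTOCOL XENIO_PROTOCOL;

  //
  // Hypothetical hypercall member -- hides how the guest traps to the
  // hypervisor (PCI-discovered hypercall page vs. the ARM HVC
  // instruction).
  //
  typedef
  INTN
  (EFIAPI *XENIO_HYPERCALL) (
    IN     XENIO_PROTOCOL *This,
    IN     UINTN          HypercallID,
    IN OUT INTN           Arg1,
    IN OUT INTN           Arg2
    );

  struct _XENIO_PROTOCOL {
    XENIO_HYPERCALL      Hypercall;
    //
    // Guest-physical base address of the grant table.
    //
    EFI_PHYSICAL_ADDRESS GrantTableAddress;
  };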

A new driver, OvmfPkg/XenIoPciDxe, would take over the "bottom half" of
the current OvmfPkg/XenBusDxe driver (discovery, hypercalls, etc.). It
would install the XENIO_PROTOCOL on the PCI device handle. (Meaning, it
would be a device driver.)
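
Here's a condensed sketch of what that driver's Start() function might
do. Error path cleanup is trimmed, and XenIoPciGetGrantTableAddress()
plus XenIoPciHypercall() are hypothetical helpers standing in for the
code to be moved over from XenBusDxe:

  #include <Uefi.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/PciIo.h>
  #include <Protocol/XenIo.h>

  EFI_STATUS
  EFIAPI
  XenIoPciDeviceBindingStart (
    IN EFI_DRIVER_BINDING_PROTOCOL *This,
    IN EFI_HANDLE                  ControllerHandle,
    IN EFI_DEVICE_PATH_PROTOCOL    *RemainingDevicePath OPTIONAL
    )
  {
    EFI_STATUS          Status;
    EFI_PCI_IO_PROTOCOL *PciIo;
    XENIO_PROTOCOL      *XenIo;

    //
    // Supported() has already verified that this handle carries the
    // Xen platform PCI device; bind to it.
    //
    Status = gBS->OpenProtocol (ControllerHandle, &gEfiPciIoProtocolGuid,
                    (VOID **)&PciIo, This->DriverBindingHandle,
                    ControllerHandle, EFI_OPEN_PROTOCOL_BY_DRIVER);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    XenIo = AllocateZeroPool (sizeof *XenIo);
    if (XenIo == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }

    //
    // Fill in the members from the PCI config space metadata.
    //
    XenIo->GrantTableAddress = XenIoPciGetGrantTableAddress (PciIo);
    XenIo->Hypercall         = XenIoPciHypercall;

    //
    // Expose the abstract protocol on the same (PCI device) handle.
    //
    return gBS->InstallProtocolInterface (&ControllerHandle,
                  &gXenIoProtocolGuid, EFI_NATIVE_INTERFACE, XenIo);
  }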

A new library, ArmPlatformPkg/ArmVirtualizationPkg/XenIoMmioLib, would
allow its caller, ArmPlatformPkg/ArmVirtualizationPkg/VirtFdtDxe, to
discover the grant table address in the DTB, and call it with the grant
table base address. The library would then create a new handle, with
instances of the XENIO_PROTOCOL and the EFI_DEVICE_PATH_PROTOCOL on it.
This XENIO_PROTOCOL instance would abstract the hypercall
implementation as well.
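
Something along these lines, say -- the function name and the VenHw()
device path idea are just my assumptions:

  #include <Uefi.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/DevicePath.h>
  #include <Protocol/XenIo.h>

  EFI_STATUS
  EFIAPI
  XenIoMmioInstall (
    IN OUT EFI_HANDLE           *Handle,
    IN     EFI_PHYSICAL_ADDRESS GrantTableAddress
    )
  {
    XENIO_PROTOCOL           *XenIo;
    EFI_DEVICE_PATH_PROTOCOL *DevicePath;

    XenIo = AllocateZeroPool (sizeof *XenIo);
    if (XenIo == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }
    XenIo->GrantTableAddress = GrantTableAddress;

    //
    // BuildXenIoVenHwPath() is a hypothetical helper that wraps the
    // grant table address in a vendor-defined hardware device path
    // node, so that the new handle gets a unique device path.
    //
    DevicePath = BuildXenIoVenHwPath (GrantTableAddress);
    if (DevicePath == NULL) {
      FreePool (XenIo);
      return EFI_OUT_OF_RESOURCES;
    }

    //
    // Create a brand new handle (*Handle is NULL on input) carrying
    // both protocol instances; XenBusDxe can then be connected to it.
    //
    return gBS->InstallMultipleProtocolInterfaces (Handle,
                  &gXenIoProtocolGuid,         XenIo,
                  &gEfiDevicePathProtocolGuid, DevicePath,
                  NULL);
  }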

OvmfPkg/XenBusDxe would preserve only its current "top half". All
hypercalls and all PciIo accesses would be rebased to XENIO_PROTOCOL
member calls. I.e., you first have to collect all those accesses and
distill from them the member functions of XENIO_PROTOCOL.
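
As an example of what such a rebase could look like at one call site
(inside XenBusDxe, so the Xen interface headers are in scope; the XenIo
member in XENBUS_DEVICE and the Hypercall member are assumptions,
matching the protocol sketch above):

  VOID
  NotifyRemote (
    IN XENBUS_DEVICE *Dev,
    IN UINT32        Port
    )
  {
    evtchn_send_t Send;

    Send.port = Port;

    //
    // Before -- helper hard-wired to the PCI-discovered hypercall
    // pages:
    //
    //   XenHypercall2 (Dev, __HYPERVISOR_event_channel_op,
    //     EVTCHNOP_send, (INTN)&Send);
    //
    // After -- the same operation, routed through the abstract
    // protocol:
    //
    Dev->XenIo->Hypercall (Dev->XenIo, __HYPERVISOR_event_channel_op,
      EVTCHNOP_send, (INTN)&Send);
  }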

This is probably the hardest part of the design. When Olivier did the
same for VIRTIO_DEVICE_PROTOCOL, he was certainly helped by the virtio
specification (and my original, PCI-only source code that carefully
referenced each step in the spec), which had formalized the virtio
operations in human language. This could be a good opportunity to force
the Xen guys to produce a similar document (a text file would suffice).

No changes for XenPvBlkDxe.

If you want, you can still factor out the hypercall implementation to a
separate library. Then use the Intel library instance with the
XenIoPciDxe driver, and the ARM library instance with VirtFdtDxe +
XenIoMmioLib. Then, once we have PCI in ArmVirtualizationPkg, you can
use XenIoPciDxe even there, just with the ARM-specific hypercall
library.
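
The library class API could be as small as this (names are
hypothetical; one instance would do the Intel hypercall page dance, the
other would issue HVC):

  #include <Base.h>

  //
  // One-time setup, e.g. locating and mapping the hypercall page(s)
  // on Intel; likely a no-op for the ARM instance.
  //
  RETURN_STATUS
  EFIAPI
  XenHypercallLibInit (
    VOID
    );

  //
  // Two-argument hypercall; enough for the operations XenBusDxe
  // issues today (event channel, grant table, xenstore).
  //
  INTN
  EFIAPI
  XenHypercall2 (
    IN     UINTN HypercallID,
    IN OUT INTN  Arg1,
    IN OUT INTN  Arg2
    );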

... I'm uncertain how much sense the above makes for Xen specifics, and
how much it overlaps with your current patchset, but if I understood
correctly, these layers were your main question, and I think we should
go with the above structure. It follows how virtio is split into layers,
and it puts the UEFI driver model to good use. (One difference is that
Xen has one more layer, AIUI.)

If I understand the current patch right, looking at the
XenBusDxeDriverBindingSupported() hunk, it would make OvmfPkg/XenBusDxe
capable of using both PciIo and XenIo. I don't like that (nor do you, I
believe). :)
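
For contrast, with the proposed split, XenBusDxe's Supported() shrinks
to probing a single protocol -- a sketch:

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/XenIo.h>

  EFI_STATUS
  EFIAPI
  XenBusDxeDriverBindingSupported (
    IN EFI_DRIVER_BINDING_PROTOCOL *This,
    IN EFI_HANDLE                  ControllerHandle,
    IN EFI_DEVICE_PATH_PROTOCOL    *RemainingDevicePath OPTIONAL
    )
  {
    EFI_STATUS     Status;
    XENIO_PROTOCOL *XenIo;

    //
    // Test for XENIO_PROTOCOL only; whether it was produced by
    // XenIoPciDxe or by XenIoMmioLib is no longer our business.
    //
    Status = gBS->OpenProtocol (ControllerHandle, &gXenIoProtocolGuid,
                    (VOID **)&XenIo, This->DriverBindingHandle,
                    ControllerHandle, EFI_OPEN_PROTOCOL_BY_DRIVER);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    gBS->CloseProtocol (ControllerHandle, &gXenIoProtocolGuid,
           This->DriverBindingHandle, ControllerHandle);
    return EFI_SUCCESS;
  }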

Thanks
Laszlo
