
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



On Mon, 2015-06-08 at 00:52 -0700, Manish Jaggi wrote:

Thanks, the general shape of this is looking good.

It'd be a lot easier to read if you could arrange not to mangle the
whitespace/wrapping when sending, though.

> PCI Pass-through in Xen ARM
> --------------------------
> 
> Index
> 1. Background
> 2. Basic PCI Support in Xen ARM
> 2.1 pci_hostbridge and pci_hostbridge_ops
> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
> 3. Dom0 Access PCI devices
> 4. DomU assignment of PCI device
> 5. NUMA and PCI passthrough
> 6. DomU pci device attach flow
> 
> 1. Background of PCI passthrough
> --------------------------------
[...]
> 2. Basic PCI Support for ARM
> ----------------------------
[...]
> 3. Dom0 access PCI device
> -------------------------
> As per the design of the xen hypervisor, dom0 enumerates the PCI devices.
> For each device the MMIO space has to be mapped in the stage-2 translation
> for dom0. For dom0, xen maps the ranges from the pci nodes into the
> stage-2 translation.

Currently this is done by mapping the entire PCI window to dom0, not
just the regions referenced by a specific device BAR. This could be done
by the host controller driver I think.

I don't think we need to go to the effort of going into each device's
PCI cfg space and reading its BARs etc, do we?
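If the host controller driver does it, I'd imagine something like the
following untested sketch (the windows[]/nr_windows fields on
pci_hostbridge and the exact map_mmio_regions() signature are
assumptions on my part, check the tree):

    /*
     * Untested sketch: map the host bridge's entire MMIO window(s)
     * 1:1 into dom0's stage-2 rather than walking each device's BARs.
     * The windows[]/nr_windows fields are assumed here, and
     * map_mmio_regions()'s signature is from memory.
     */
    static int pci_host_map_windows(struct domain *d,
                                    const struct pci_hostbridge *hb)
    {
        unsigned int i;

        for ( i = 0; i < hb->nr_windows; i++ )
        {
            paddr_t base = hb->windows[i].base;
            paddr_t size = hb->windows[i].size;
            int rc;

            /* dom0 gets a 1:1 mapping: gfn == mfn. */
            rc = map_mmio_regions(d, paddr_to_pfn(base),
                                  DIV_ROUND_UP(size, PAGE_SIZE),
                                  paddr_to_pfn(base));
            if ( rc )
                return rc;
        }

        return 0;
    }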

This section should also deal with the routing of PCI INTx interrupts
(mapped to SPIs), as well as with MSIs.
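For the INTx side I'd expect the existing route_irq_to_guest() to be
enough; roughly (untested, signature from memory, and for dom0
virq == irq):

    /*
     * Untested sketch: INTx lines show up as SPIs described in the
     * host bridge's interrupt-map; for dom0 they can be routed 1:1.
     */
    static int pci_route_intx_to_dom0(struct domain *d, unsigned int spi)
    {
        return route_irq_to_guest(d, spi, spi, "pci-intx");
    }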

> 4. DomU access / assignment PCI device
> --------------------------------------
> When a device is attached to a domU, provision has to be made such that
> it can access the MMIO space of the device and xen is able to identify
> the mapping between guest bdf and system bdf. Two hypercalls are
> introduced.

I don't think we want/need new hypercalls here, the same existing
hypercalls which are used on x86 should be suitable. That's
XEN_DOMCTL_memory_mapping from the toolstack I think.
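i.e. on the toolstack side something like this untested sketch, using
the existing libxc wrapper for that DOMCTL:

    #include <xenctrl.h>

    /*
     * Untested sketch: map one BAR region into the guest via the
     * existing DOMCTL, exactly as x86 does for PCI passthrough.
     * Note gfn need not equal mfn (see below re the 1:1 restriction).
     */
    static int map_bar_to_guest(xc_interface *xch, uint32_t domid,
                                uint64_t guest_addr, uint64_t machine_addr,
                                uint64_t size)
    {
        return xc_domain_memory_mapping(xch, domid,
                                        guest_addr >> XC_PAGE_SHIFT,
                                        machine_addr >> XC_PAGE_SHIFT,
                                        size >> XC_PAGE_SHIFT,
                                        DPCI_ADD_MAPPING);
    }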

> Xen adds the mmio space to the stage-2 translation for domU. The
> restriction is that xen creates a 1:1 mapping of the MMIO address.

I don't think we need/want this restriction. We can define some
region(s) of guest memory to be an MMIO hole (by adding them to the
memory map in public/arch-arm.h).

If there is a reason for this restriction/trade off then it should be
spelled out as part of the design document, as should other such design
decisions (which would include explaining where this differs from how
things work for x86 why they must differ).
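For concreteness, the sort of thing I mean (the names and addresses
below are entirely made up, not proposals):

    /*
     * Illustrative only -- hypothetical names and addresses, chosen
     * just to show the shape; a real layout would need to avoid the
     * existing GUEST_* regions in xen/include/public/arch-arm.h.
     */
    #define GUEST_PCI_MMIO_BASE   0x30000000ULL
    #define GUEST_PCI_MMIO_SIZE   0x10000000ULL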

> #define PHYSDEVOP_map_sbdf              43

Isn't this just XEN_DOMCTL_assign_device?
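The toolstack already has a wrapper for that path; an untested sketch
(xc_assign_device()'s signature from memory):

    #include <xenctrl.h>

    /*
     * Untested sketch: reuse the existing device assignment path
     * instead of a new PHYSDEVOP.  machine_sbdf packs seg:bus:dev.fn
     * the same way x86 does.
     */
    static int assign_pci_device(xc_interface *xch, uint32_t domid,
                                 uint16_t seg, uint8_t bus, uint8_t devfn)
    {
        uint32_t machine_sbdf = ((uint32_t)seg << 16) | (bus << 8) | devfn;

        return xc_assign_device(xch, domid, machine_sbdf);
    }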

> Change in PCI Frontend-Backend driver for MSI/X programming
> -----------------------------------------------------------
> On the pci frontend bus, an msi-parent pointing to a gicv3-its node is
> added. There is a single virtual its for a domU, as there is only a
> single virtual pci bus in domU. This ensures that the config_msi calls
> are handled by the gicv3-its driver in the domU kernel, not utilizing
> frontend-backend communication between dom0 and domU.

OK.

> 5. NUMA domU and vITS
> ---------------------
> a) On NUMA systems domU still has a single its node.
> b) How can xen identify the ITS to which a device is connected?
> - Using the segment number to query an api which gives the pci host
> controller's device node:
> 
> struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno)
> 
> c) Query the interrupt parent of the pci device node to find out the its.

Yes, I think that can work.
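Something like this untested sketch, I suppose (dt_parse_phandle() is
the existing device tree helper; whether the relevant property is
msi-parent or interrupt-parent depends on the binding, I've assumed
msi-parent for MSIs):

    /*
     * Untested sketch: from the segment number, find the host
     * bridge's DT node and then the ITS it points at.
     * pci_hostbridge_dt_node() is the query proposed above.
     */
    static struct dt_device_node *its_node_for_segment(uint32_t segno)
    {
        struct dt_device_node *hb = pci_hostbridge_dt_node(segno);

        if ( !hb )
            return NULL;

        return dt_parse_phandle(hb, "msi-parent", 0);
    }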

> 6. DomU Bootup flow
> -------------------
> a. DomU boots up without any pci devices assigned. A daemon listens to
> events from xenstore.

Which daemon? Where does it live? 

> When a device is attached to domU, the frontend pci bus driver starts
> enumerating the devices. The frontend driver communicates with the
> backend driver in dom0 to read the pci config space.

"backend driver" here == xen-pciback.ko or something else? Does it
differ from the daemon referred to above? Does it use the existing
pciif.h protocol (I hope so).

We do not have to use xen-pciback for everything (e.g. ITS and
interrupts generally seem like a reasonable place to differ) but for
things which pciback does we should in general prefer to use it.

I'd prefer to avoid the need for a separate daemon if possible.
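For reference, the existing protocol already covers config space
accesses; roughly (trimmed and from memory -- see
xen/include/public/io/pciif.h for the real thing):

    /*
     * Trimmed excerpt, from memory -- check io/pciif.h.  One request
     * per config space access; cmd is one of the XEN_PCI_OP_* values
     * (conf_read, conf_write, enable_msi, ...).
     */
    struct xen_pci_op {
        uint32_t cmd;     /* IN: XEN_PCI_OP_* */
        int32_t  err;     /* OUT: errno on failure */
        uint32_t domain;  /* IN: PCI segment */
        uint32_t bus;     /* IN */
        uint32_t devfn;   /* IN */
        int32_t  offset;  /* IN: config space register */
        int32_t  size;    /* IN: access width */
        uint32_t value;   /* IN/OUT: value written/read */
        /* ... msi-x entries etc. elided ... */
    };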

> b. The device driver of the specific pci device invokes methods to
> configure the msi/x interrupts, which are handled by the its driver in
> the domU kernel. The reads/writes by the its driver are trapped in xen.
> The ITS emulation in xen finds out the actual sbdf based on the
> map_sbdf hypercall information.

Don't forget to also consider PCI INTx interrupts.
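FWIW, for the msi/x flow in (b) I'd imagine the trap side in xen
looking roughly like this (the lookup table and all names here are
hypothetical, just to illustrate the vbdf -> sbdf translation):

    /*
     * Hypothetical sketch: xen's vITS emulation translating the
     * guest's DeviceID (derived from the virtual sbdf) to the
     * physical sbdf recorded when the device was assigned.  None of
     * these structures exist today.
     */
    struct vsbdf_map {
        uint32_t vsbdf;   /* guest seg:bus:dev.fn */
        uint32_t psbdf;   /* physical seg:bus:dev.fn */
    };

    static uint32_t vits_translate_devid(const struct vsbdf_map *maps,
                                         unsigned int nr_maps,
                                         uint32_t vsbdf)
    {
        unsigned int i;

        for ( i = 0; i < nr_maps; i++ )
            if ( maps[i].vsbdf == vsbdf )
                return maps[i].psbdf;

        return ~0u; /* not assigned to this domain */
    }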

Ian.

