
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document




Hi Edgar,

On 22/02/17 04:03, Edgar E. Iglesias wrote:
On Mon, Feb 13, 2017 at 03:35:19PM +0000, Julien Grall wrote:
On 02/02/17 15:33, Edgar E. Iglesias wrote:
On Wed, Feb 01, 2017 at 07:04:43PM +0000, Julien Grall wrote:
On 31/01/2017 19:06, Edgar E. Iglesias wrote:
On Tue, Jan 31, 2017 at 05:09:53PM +0000, Julien Grall wrote:
I'll see if I can find working examples for PCIe on the ZCU102. Then I'll share
DTS, Kernel etc.

I've found a device tree on GitHub for the ZCU102 (zynqmp-zcu102.dts);
it looks like the PCIe node does not use a PHY so far.

Let's imagine that, in the future, PCIe will use the PHY. If we decide to
initialize the hostbridge in Xen, we would also have to pull the PHY code into
the hypervisor. Leaving aside the problem of pulling more code into Xen, this is
not nice because the PHY is used by different components (e.g. SATA, USB). So
Xen and DOM0 would have to share the PHY.

From Xen's point of view, the best solution would be for the bootloader to
initialize the PHY before starting Xen. Then we can keep the whole hostbridge
(initialization + access) in Xen.

If it is not possible, then I would prefer to see the hostbridge
initialization in DOM0.


I suspect that this setup has previously been done by the initial bootloader
auto-generated from design configuration tools.

Now, this is moving into Linux.

Do you know why they decided to move the code into Linux? What would be the
problem with letting the bootloader configure the GT?


No, I'm not sure why this approach was not used. The only thing I can think of
is a runtime configuration approach.



There's a specific driver that does that but, AFAICS, it has not been upstreamed yet.
You can see it here:
https://github.com/Xilinx/linux-xlnx/blob/master/drivers/phy/phy-zynqmp.c

DTS nodes that need a PHY can then just refer to it; here's an example from SATA:
&sata {
       phy-names = "sata-phy";
       phys = <&lane3 PHY_TYPE_SATA 1 3 150000000>;
};

Yes, I agree that the GT setup in the bootloader is very attractive.
I don't think the setup sequence is complicated; we can perhaps even do it
on the command line in u-boot or xsdb. I'll have to check.

That might simplify things for Xen. I would be happy to consider any other solutions. It would probably be worth kicking off a separate thread regarding how to support the Xilinx host controller in Xen.

For now, I will explain in the design document the different situations we can encounter with a hostbridge and will leave the design for the initialization bits open.


[...]


From a design point of view, it would make more sense to have the MSI
controller driver in Xen, as the hostbridge emulation for guests will also
live there.

So if we receive MSIs in Xen, we need to figure out a way for DOM0 and guests
to receive MSIs. Using the same mechanism for both would be best, and I guess
non-PV if possible. I know you are looking to boot an unmodified OS in a VM.
This would mean we need to emulate the MSI controller and potentially the
Xilinx PCI controller. How much are you willing to modify the OS?

Today, we have not yet implemented PCIe drivers for our baremetal SDK. So
things are very open and we could design with pretty much anything in mind.

Yes, we could perhaps include a very small model with most registers dummied.
Implementing the MSI read FIFO would allow us to:

1. Inject the MSI doorbell SPI into guests. The guest will then see the same
  IRQ as on real HW.

2. Let the guest read the host-controller registers (the MSI FIFO) to get the
  signalled MSI (a rough guest-side sketch of this follows below).
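
To make the idea concrete, here is a very rough guest-side sketch of that flow.
All register names, offsets and bit layouts below are hypothetical placeholders
for whatever the dummied model would expose, not the real Xilinx bridge layout:

#include <stdint.h>

/* Hypothetical MMIO layout of the dummied host-controller model. */
#define MSI_FIFO_STATUS   0x00u   /* bit 0 set = FIFO not empty (made up) */
#define MSI_FIFO_DATA     0x04u   /* read pops one MSI vector (made up)   */

static volatile uint32_t *msi_regs;         /* mapped emulated bridge MMIO */

extern void dispatch_msi(uint32_t vector);  /* guest-specific MSI handler */

/* Handler for the MSI doorbell SPI injected by Xen (step 1 above). */
void msi_doorbell_spi_handler(void)
{
    /* Step 2 above: drain the emulated MSI FIFO to find which MSIs fired. */
    while ( msi_regs[MSI_FIFO_STATUS / 4] & 1u )
        dispatch_msi(msi_regs[MSI_FIFO_DATA / 4]);
}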

The Xilinx PCIe hostbridge is not the only hostbridge with an embedded MSI
controller. So I would like to see a generic solution if possible. This would
avoid increasing the code required for emulation in Xen.

My concern with a FIFO is that it will require an upper bound to avoid using too
much memory in Xen. What if the FIFO is full? Will you drop MSIs?

The FIFO I'm referring to is a FIFO in the MSI controller itself.

Sorry if it was unclear. I was trying to explain what the issue would be with emulating this kind of MSI controller in Xen, not with using it in Xen.

I agree that this wouldn't be generic though....

An idea would be to emulate a GICv2m frame (see appendix E in ARM-DEN-0029 v3.0) for the guest. The frame is able to handle a certain number of SPIs. Each MSI will be presented as a unique SPI. The association SPI <-> MSI is left at the discretion of the driver.

A guest will discover the number of SPIs by reading the register MSI_TYPER. To initialize an MSI, the guest will compose the message using the GICv2m doorbell (see the register MSI_SETSPI_NS in the frame) and the allocated SPI. As the PCI hostbridge will be emulated for the guest, any write to the MSI space would be trapped. Then, I would expect Xen to allocate a host MSI, compose a new message using the doorbell of the Xilinx MSI controller, and write it into the host PCI configuration space.
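
To illustrate the trap-and-translate step, a minimal sketch is below. The types
and helpers (vgicv2m_is_doorbell(), xilinx_msi_alloc(), vmsi_record(), ...) are
hypothetical names for the steps described above and do not exist in Xen today:

#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

struct domain;                      /* stand-in for Xen's struct domain */
typedef uint32_t sbdf_t;            /* segment/bus/device/function      */

/* Hypothetical helpers naming the steps of the proposed flow. */
extern bool vgicv2m_is_doorbell(struct domain *d, uint64_t gpa);
extern int  xilinx_msi_alloc(uint32_t *host_data);
extern uint64_t xilinx_msi_doorbell_address(void);
extern void vmsi_record(struct domain *d, uint32_t guest_spi, uint32_t host_data);
extern void pci_cfg_write_msi(sbdf_t sbdf, uint64_t addr, uint32_t data);

/* Called when a guest write to the MSI capability of an assigned device traps. */
int handle_guest_msi_cfg_write(struct domain *d, sbdf_t sbdf,
                               uint64_t guest_doorbell, uint32_t guest_spi)
{
    uint32_t host_data;

    /* The guest composed its message with the virtual GICv2m doorbell
     * (MSI_SETSPI_NS) and the SPI it picked from the frame. */
    if ( !vgicv2m_is_doorbell(d, guest_doorbell) )
        return -EINVAL;

    /* Allocate a host MSI from the Xilinx MSI controller... */
    if ( xilinx_msi_alloc(&host_data) )
        return -ENOMEM;

    /* ...remember the guest SPI <-> host MSI association for injection... */
    vmsi_record(d, guest_spi, host_data);

    /* ...and write the re-composed message (Xilinx doorbell + host data)
     * into the real PCI configuration space. */
    pci_cfg_write_msi(sbdf, xilinx_msi_doorbell_address(), host_data);

    return 0;
}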

The MSI will be received by the hypervisor, which will look up the domain where it needs to be injected and will inject the SPI configured by Xen.
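
A matching sketch of that receive path, again only naming the steps rather than
showing real Xen code:

#include <stdint.h>

struct domain;

struct vmsi_map {
    struct domain *d;        /* domain the MSI is routed to       */
    uint32_t guest_spi;      /* SPI the guest picked in the frame */
};

/* Hypothetical helpers: reverse lookup of the association recorded at
 * configuration time and injection of an SPI into the virtual GIC. */
extern struct vmsi_map *vmsi_lookup(uint32_t host_data);
extern void vgic_inject_spi(struct domain *d, uint32_t spi);

/* Called when the physical MSI fires in Xen. */
void xilinx_msi_handler(uint32_t host_data)
{
    struct vmsi_map *m = vmsi_lookup(host_data);

    if ( m )
        vgic_inject_spi(m->d, m->guest_spi);
    /* else: unmapped/spurious MSI, ignore or log it. */
}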

The frame is always 4KB and the MSI doorbell register is embedded in it. This means we cannot simply map the virtual GICv2m MSI doorbell onto the Xilinx MSI doorbell. The problem will also happen when using a virtual ITS, because a guest may have devices assigned through different physical ITSes. However, each ITS has its own doorbell, therefore we would have to map all the ITS doorbells in the guest, as we may not know which ITS will be used for hotplug devices.

To solve this problem, I would suggest having a reserved range in the guest address space to map the MSI doorbells.
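
For illustration only, the reservation could look something like this; the
addresses are made up and map_mmio_to_guest() is a hypothetical stand-in for a
stage-2 mapping helper, not an existing Xen function:

#include <stdint.h>

struct domain;

/* Made-up reserved window in the guest physical address map, carved into
 * 4KB slots that Xen can back with whichever physical doorbells (Xilinx
 * MSI controller, one or more ITSes, ...) the assigned devices need. */
#define GUEST_MSI_DOORBELL_BASE  0x02000000UL
#define GUEST_MSI_DOORBELL_SIZE  0x00100000UL   /* room for 256 doorbells */
#define DOORBELL_PAGE_SIZE       0x1000UL

/* Hypothetical stage-2 mapping helper. */
extern int map_mmio_to_guest(struct domain *d, uint64_t gpa, uint64_t pa,
                             uint64_t size);

/* Map one physical doorbell into the next free slot of the reserved range
 * and return the guest address, or 0 if the range is exhausted. */
uint64_t guest_map_msi_doorbell(struct domain *d, uint64_t phys_doorbell,
                                unsigned int *next_slot)
{
    uint64_t gpa = GUEST_MSI_DOORBELL_BASE +
                   (uint64_t)*next_slot * DOORBELL_PAGE_SIZE;

    if ( gpa + DOORBELL_PAGE_SIZE >
         GUEST_MSI_DOORBELL_BASE + GUEST_MSI_DOORBELL_SIZE )
        return 0;

    if ( map_mmio_to_guest(d, gpa, phys_doorbell, DOORBELL_PAGE_SIZE) )
        return 0;

    (*next_slot)++;
    return gpa;
}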

This is the most generic solution I have in mind. The driver for the guest is very simple and the amount of emulation required is quite limited. Any opinions?

I am also open to any other suggestions.

Cheers,

--
Julien Grall
