
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



Hi Edgar,

Thank you for the feedback.

On 31/01/17 16:53, Edgar E. Iglesias wrote:
On Wed, Jan 25, 2017 at 06:53:20PM +0000, Julien Grall wrote:
On 24/01/17 20:07, Stefano Stabellini wrote:
On Tue, 24 Jan 2017, Julien Grall wrote:
For a generic host bridge, there is no initialization to do. However,
some host bridges (e.g. xgene, xilinx) may require some specific setup,
such as configuring clocks. Given that Xen only requires access to the
configuration space, I was thinking of letting DOM0 initialize the host
bridge. This would avoid importing a lot of code into Xen; however, it
means that we need to know when the host bridge has been initialized
before accessing the configuration space.
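
To make that generic case concrete, the access itself is only ECAM
arithmetic once the window is known. A minimal sketch (illustration
only, assuming the ECAM window is already mapped at ecam_base and using
hypothetical helper names):

#include <stdint.h>

/* Each function gets a 4KB slot: bus << 20 | dev << 15 | fn << 12 | reg. */
static inline volatile uint32_t *ecam_reg(volatile uint8_t *ecam_base,
                                          uint8_t bus, uint8_t dev,
                                          uint8_t fn, uint16_t reg)
{
    uint32_t off = ((uint32_t)bus << 20) |
                   ((uint32_t)(dev & 0x1f) << 15) |
                   ((uint32_t)(fn & 0x7) << 12) |
                   (reg & 0xffc);

    return (volatile uint32_t *)(ecam_base + off);
}

static inline uint32_t ecam_read32(volatile uint8_t *ecam_base,
                                   uint8_t bus, uint8_t dev,
                                   uint8_t fn, uint16_t reg)
{
    return *ecam_reg(ecam_base, bus, dev, fn, reg);
}

static inline void ecam_write32(volatile uint8_t *ecam_base,
                                uint8_t bus, uint8_t dev,
                                uint8_t fn, uint16_t reg, uint32_t val)
{
    *ecam_reg(ecam_base, bus, dev, fn, reg) = val;
}

In other words, for an ECAM-compliant bridge Xen needs nothing
bridge-specific beyond the location of the window and the knowledge
that the bridge has been initialized.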


Yes, that's correct.
There's a sequence on the ZynqMP that involves assigning Gigabit
Transceivers to PCIe (GTs are shared among PCIe, USB, SATA and the
Display Port), enabling clocks and configuring a few registers to
enable ECAM and MSI.

I'm not sure if this could be done prior to starting Xen. Perhaps.
If so, bootloaders would have to know ahead of time what devices
the GTs are supposed to be configured for.

I've got further questions regarding the Gigabit Transceivers. You
mention they are shared: do you mean that multiple devices can use a GT
at the same time, or that the software decides at startup which device
will use a given GT? If the latter, how does the software make this
decision?

        - For all other host bridges => I don't know if there are host bridges
falling under this category. I also don't have any idea how to handle this.


Otherwise, if Dom0 is the only one to drive the physical host bridge,
and Xen is the one to provide the emulated host bridge, how are DomU PCI
config reads and writes supposed to work in details?

I think I have answered this question with my explanation above. Let me
know if it is not the case.

 How is MSI configuration supposed to work?

For GICv3 ITS, the MSI will be configured with the eventID (which is
unique per-device) and the address of the doorbell. The linkage between
the LPI and the "MSI" will be done through the ITS.

For GICv2m, the MSI will be configured with an SPI (or an offset on
some GICv2m frames) and the address of the doorbell. Note that for DOM0
the SPIs are mapped 1:1.
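
In both cases the device only ends up programmed with a doorbell
address and a payload. A rough sketch of the two messages, using the
same address/data split as Linux's struct msi_msg (the doorbell values
and IDs below are placeholders):

#include <stdint.h>

/* Same shape as Linux's struct msi_msg: doorbell address + payload. */
struct msi_msg_sketch {
    uint32_t address_lo;
    uint32_t address_hi;
    uint32_t data;
};

/* GICv3 ITS: the payload is the per-device eventID; the ITS then
 * translates (DeviceID, eventID) into an LPI. */
static void compose_its_msi(struct msi_msg_sketch *msg,
                            uint64_t doorbell /* GITS_TRANSLATER */,
                            uint32_t event_id)
{
    msg->address_lo = (uint32_t)doorbell;
    msg->address_hi = (uint32_t)(doorbell >> 32);
    msg->data       = event_id;
}

/* GICv2m: the payload is the SPI number (or an offset from the frame's
 * base SPI on some implementations). */
static void compose_v2m_msi(struct msi_msg_sketch *msg,
                            uint64_t doorbell /* MSI_SETSPI_NS */,
                            uint32_t spi)
{
    msg->address_lo = (uint32_t)doorbell;
    msg->address_hi = (uint32_t)(doorbell >> 32);
    msg->data       = spi;
}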

So in both cases, I don't think it is necessary to trap MSI
configuration for DOM0. This may not be true if we want to handle other
MSI controllers.

I have in mind the xilinx MSI controller (embedded in the host bridge?
[4]) and the xgene MSI controller ([5]). But I have no idea how they
work and whether we need to support them. Maybe Edgar could share
details on the Xilinx one?


The Xilinx controller has 2 dedicated SPIs and pages for MSIs. AFAIK,
there's no way to protect the MSI doorbells from misconfigured
end-points raising malicious EventIDs.
So perhaps trapped config accesses from domUs can help by adding this
protection as drivers configure the device.
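
A sketch of the kind of check such a trap could apply (purely
hypothetical structures and names, not an existing Xen interface): the
emulated bridge would only forward an MSI data write whose EventID was
actually allocated to that domain.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-domain MSI state for an assigned device. */
struct vdev_msi_state {
    uint32_t event_base;   /* first eventID allocated to this domain */
    uint32_t event_count;  /* number of eventIDs allocated           */
};

/*
 * Hypothetical hook called from the config-space trap handler when the
 * guest writes the MSI capability's data register.  Returns false if
 * the write must be discarded instead of forwarded to the device.
 */
static bool vdev_msi_data_write_ok(const struct vdev_msi_state *s,
                                   uint32_t event_id)
{
    return event_id >= s->event_base &&
           event_id < s->event_base + s->event_count;
}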

On Linux, once MSIs hit, the kernel takes the SPI interrupt, reads out
the EventID from a FIFO in the controller and injects a new IRQ into
the kernel.
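
That dispatch is the usual chained-handler pattern in Linux; roughly
(the FIFO register layout below is made up, only the shape of the flow
is meant):

#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>

/* Made-up register offsets for the sketch. */
#define MSI_FIFO_STATUS  0x0   /* non-zero while the FIFO is not empty */
#define MSI_FIFO_DATA    0x4   /* pops one pending eventID             */

struct xmsi_port {
    void __iomem *regs;
    struct irq_domain *msi_domain;
};

static void xmsi_handle_spi(struct irq_desc *desc)
{
    struct xmsi_port *port = irq_desc_get_handler_data(desc);
    struct irq_chip *chip = irq_desc_get_chip(desc);

    chained_irq_enter(chip, desc);

    /* Drain every eventID queued in the controller's FIFO and inject
     * the corresponding Linux IRQ. */
    while (readl(port->regs + MSI_FIFO_STATUS)) {
        u32 event = readl(port->regs + MSI_FIFO_DATA);

        generic_handle_irq(irq_find_mapping(port->msi_domain, event));
    }

    chained_irq_exit(chip, desc);
}

with the handler installed on each of the two dedicated SPIs via
irq_set_chained_handler_and_data().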

It might be early to ask, but how do you expect MSI to work with a DomU
on your hardware? Does your MSI controller support virtualization? Or
are you looking at a different way to inject MSIs?


I hope that helps!

It helped, thank you!

Cheers,

--
Julien Grall


 

