Re: [Xen-devel] [RFC] ARM PCI Passthrough design document
Hi Julien,

On 5/29/2017 11:44 PM, Julien Grall wrote:
I believe in the x86 case dom0 and Xen do access the config space, in
the context of the PCI device add hypercall.

> On 05/29/2017 03:30 AM, Manish Jaggi wrote:
>> Hi Julien,
> Hello Manish,
>> On 5/26/2017 10:44 PM, Julien Grall wrote:
>>> PCI pass-through allows the guest to receive full control of
>>> physical PCI devices. This means the guest will have full and
>>> direct access to the PCI device.
>>>
>>> ARM is supporting a kind of guest that exploits as much as possible
>>> virtualization support in hardware. The guest will rely on PV
>>> drivers only for IO (e.g. block, network) and interrupts will come
>>> through the virtualized interrupt controller, therefore there are
>>> no big changes required within the kernel. As a consequence, it
>>> would be possible to replace PV drivers by assigning real devices
>>> to the guest for I/O access. Xen on ARM would therefore be able to
>>> run unmodified operating systems.
>>>
>>> To achieve this goal, it looks more sensible to go towards
>>> emulating the host bridge (there will be more details later).
>> IIUC this means that domU would have an emulated host bridge and
>> dom0 will see the actual host bridge?
> You don't want the hardware domain and Xen to access the
> configuration space at the same time. So if Xen is in charge of the
> host bridge, then an emulated host bridge should be exposed to the
> hardware domain.
That's when the pci_config_XXX functions in Xen are called. So in the
case of a generic host bridge, Xen will manage the config space and
provide an emulated interface to dom0, and accesses would be trapped
by Xen. Essentially the goal is to scan all PCI devices and register
them with Xen (which in turn will configure the SMMU). For a generic
host bridge, this can be done either in dom0 or in Xen. The only doubt
here is what extra benefit the emulated host bridge gives in the case
of dom0.
> Although, this depends on who is in charge of the host bridge. As you
> may have noticed, this design document is proposing two ways to
> handle configuration space access. At the moment any generic host
> bridge (see the definition in the design document) will be handled in
> Xen and the hardware domain will have an emulated host bridge.
>
> If your host bridge is not a generic one, then the hardware domain
> will be in charge of the host bridge, and any configuration access
> from Xen will be forwarded to the hardware domain.
>
> At the moment, as part of the first implementation, we are only
> looking to implement a generic host bridge in Xen. We will decide on
> a case by case basis for all the other host bridges whether we want
> to have the driver in Xen.
agreed.
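For reference, what makes the generic host bridge case tractable is
that ECAM gives a fixed, linear layout: a trapped config access at
(bus, device, function, register) decodes to a fixed offset from the
window base. A minimal sketch of the decode (illustrative only, not
actual Xen code):

    #include <stdint.h>

    /*
     * ECAM layout: bus[27:20] | device[19:15] | function[14:12] |
     * register[11:0], as an offset from the host bridge window base.
     */
    static inline uint64_t ecam_cfg_addr(uint64_t base, uint8_t bus,
                                         uint8_t dev, uint8_t fn,
                                         uint16_t reg)
    {
        return base + (((uint64_t)bus << 20) |
                       ((uint64_t)(dev & 0x1f) << 15) |
                       ((uint64_t)(fn & 0x7) << 12) |
                       (reg & 0xfffu));
    }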
[...]

>>> ## IOMMU
>>>
>>> The IOMMU will be used to isolate the PCI device when accessing
>>> the memory (e.g. DMA and MSI doorbells). Often the IOMMU will be
>>> configured using a MasterID (aka StreamID for ARM SMMU) that can
>>> be deduced from the SBDF with the help of the firmware tables (see
>>> below).
>>>
>>> Whilst in theory, all the memory transactions issued by a PCI
>>> device should go through the IOMMU, on certain platforms some of
>>> the memory transactions may not reach the IOMMU because they are
>>> interpreted by the host bridge. For instance, this could happen if
>>> the MSI doorbell is built into the PCI host bridge or for P2P
>>> traffic. See [6] for more details.
>>>
>>> XXX: I think this could be solved by using direct mapping (e.g.
>>> GFN == MFN), this would mean the guest memory layout would be
>>> similar to the host one when PCI devices will be passed through
>>> => Detail it.
>> In the example given in the IORT spec, for PCI devices not behind
>> an SMMU, how would the writes from the device be protected?
> I realize the XXX paragraph is quite confusing. I am not trying to
> solve the problem where PCI devices are not protected behind an
> SMMU, but platforms where some transactions (e.g. P2P or MSI
> doorbell access) are by-passing the SMMU.
>
> You may still want to allow PCI passthrough in that case, because
> you know that P2P cannot be done (or can potentially be disabled)
> and MSI doorbell access is protected (for instance a write in the
> ITS doorbell will be tagged with the device by the hardware). In
> order to support such platforms you need to direct map the doorbell
> (e.g. GFN == MFN) and carve out the P2P region from the guest memory
> map. Hence the suggestion to re-use the host memory layout for the
> guest.
>
> Note that it does not mean the RAM region will be direct mapped. It
> is only there to ease carving out memory regions by-passed by the
> SMMU.
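To make the GFN == MFN doorbell idea above concrete, the stage 2 side
could look roughly like this (map_mmio_region() is an assumed
primitive with a hypothetical signature, not the actual Xen API):

    struct domain;

    /* Assumed stage 2 mapping primitive (hypothetical signature). */
    int map_mmio_region(struct domain *d, unsigned long gfn,
                        unsigned long mfn, unsigned long nr_frames);

    /*
     * Direct map the MSI doorbell (GFN == MFN): the guest sees the
     * doorbell at its host physical address, so a write by-passing
     * the SMMU still lands on the right page. The region must be
     * carved out of the guest RAM layout beforehand.
     */
    static int directmap_msi_doorbell(struct domain *d,
                                      unsigned long doorbell_mfn,
                                      unsigned long nr_frames)
    {
        return map_mmio_region(d, doorbell_mfn, doorbell_mfn,
                               nr_frames);
    }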
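And on the MasterID/StreamID derivation mentioned in the quoted IOMMU
section: the firmware-table lookup is essentially a range remap of the
requester ID. A rough sketch, loosely modelled on IORT-style ID
mappings (hypothetical structures, not the actual IORT format):

    #include <stdbool.h>
    #include <stdint.h>

    struct id_mapping {
        uint32_t input_base;   /* first RID covered by this entry */
        uint32_t num_ids;      /* number of RIDs covered */
        uint32_t output_base;  /* StreamID for input_base */
    };

    /* The RID is the BDF part of the SBDF. */
    static inline uint32_t bdf_to_rid(uint8_t bus, uint8_t dev,
                                      uint8_t fn)
    {
        return ((uint32_t)bus << 8) |
               ((uint32_t)(dev & 0x1f) << 3) | (fn & 0x7);
    }

    static bool rid_to_streamid(const struct id_mapping *map,
                                unsigned int nr, uint32_t rid,
                                uint32_t *streamid)
    {
        for (unsigned int i = 0; i < nr; i++) {
            if (rid >= map[i].input_base &&
                rid - map[i].input_base < map[i].num_ids) {
                *streamid = rid - map[i].input_base
                            + map[i].output_base;
                return true;
            }
        }
        return false;  /* e.g. the device is not behind an SMMU */
    }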
[...]

>>> ## ACPI
>>>
>>> ### Host bridges
>>>
>>> The static table MCFG (see 4.2 in [1]) will describe the host
>>> bridges available at boot and supporting ECAM. Unfortunately,
>>> there are platforms out there [...]
This matters in the case of stage 2 MMIO mappings; see below.

[...]

This approach is ok. But we could have a more granular approach than
trapping, IMHO. For ACPI:
- Xen parses MCFG and can map the PCI host bridge (emulated /
  original) in stage 2 for dom0.
- Device MMIO can be mapped in stage 2 alongside the pci_device_add
  call (rough sketch below).
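A sketch of what that could look like (the allocation layout follows
the ACPI MCFG structure; map_mmio_region() is the same assumed
primitive as in the earlier sketch, not the actual Xen API):

    #include <stdint.h>

    /* One MCFG Configuration Space Base Address Allocation entry. */
    struct mcfg_allocation {
        uint64_t base;       /* ECAM window base address */
        uint16_t segment;    /* PCI segment group number */
        uint8_t  start_bus;
        uint8_t  end_bus;
        uint32_t reserved;
    } __attribute__((packed));

    #define ECAM_BUS_SIZE (1UL << 20) /* 256 dev/fn x 4K config */
    #define PAGE_SHIFT    12

    struct domain;
    int map_mmio_region(struct domain *d, unsigned long gfn,
                        unsigned long mfn, unsigned long nr_frames);

    /* Map each ECAM window from MCFG 1:1 into dom0 stage 2. */
    static int map_ecam_to_dom0(struct domain *d,
                                const struct mcfg_allocation *alloc,
                                unsigned int nr_alloc)
    {
        for (unsigned int i = 0; i < nr_alloc; i++) {
            unsigned long frames =
                (unsigned long)(alloc[i].end_bus -
                                alloc[i].start_bus + 1) *
                (ECAM_BUS_SIZE >> PAGE_SHIFT);
            unsigned long gfn = alloc[i].base >> PAGE_SHIFT;
            int rc = map_mmio_region(d, gfn, gfn, frames);

            if (rc)
                return rc;
        }
        return 0;
    }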
What do you think?

Regards,

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel