Re: [PATCH v6 4/5] [FUTURE] xen/arm: enable vPCI for domUs
On Fri, 1 Dec 2023, Roger Pau Monné wrote:
> On Mon, Nov 13, 2023 at 05:21:13PM -0500, Stewart Hildebrand wrote:
> > @@ -1618,6 +1630,14 @@ int iommu_do_pci_domctl(
> >          bus = PCI_BUS(machine_sbdf);
> >          devfn = PCI_DEVFN(machine_sbdf);
> >
> > +        if ( needs_vpci(d) && !has_vpci(d) )
> > +        {
> > +            printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI
> > support not enabled\n",
> > +                   &PCI_SBDF(seg, bus, devfn), d);
> > +            ret = -EPERM;
> > +            break;
>
> I think this is likely too restrictive going forward. The current
> approach is indeed to enable vPCI on a per-domain basis because that's
> how PVH dom0 uses it, due to being unable to use ioreq servers.
>
> If we start to expose vPCI support to guests the interface should be on
> a per-device basis, so that vPCI could be enabled for some devices,
> while others could still be handled by ioreq servers.
>
> We might want to add a new flag to xen_domctl_assign_device (used by
> XEN_DOMCTL_assign_device) in order to signal whether the device will
> use vPCI.

Actually I don't think this is a good idea. I am all for flexibility, but
supporting multiple different configurations comes at an extra cost for
both maintainers and contributors. I think we should try to reduce the
number of configurations we support rather than increase it (especially
on x86, where we already have PV, PVH, and HVM).

I don't think we should enable ioreq servers to handle PCI passthrough
for PVH guests and/or guests with vPCI. If the domain has vPCI, PCI
passthrough can be handled by vPCI just fine. I think this should be a
good anti-feature to have (a goal to explicitly not add this feature) to
reduce complexity. Unless you see a specific use case for adding it?
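
[Editorial note: for context, a minimal sketch of what the per-device
interface suggested above might look like, keyed off the flags field that
struct xen_domctl_assign_device already carries. The XEN_DOMCTL_DEV_USE_VPCI
name and value are illustrative assumptions only; they are not part of the
posted series, and the existing flags validation in iommu_do_pci_domctl()
would also need to accept the new bit.]

    /* Existing per-device flag on XEN_DOMCTL_assign_device. */
    #define XEN_DOMCTL_DEV_RDM_RELAXED      1 /* assign only */

    /*
     * Hypothetical new flag: request that this device be handled by vPCI
     * rather than by an ioreq server. Name and value are illustrative.
     */
    #define XEN_DOMCTL_DEV_USE_VPCI         2

    /*
     * Sketch: in iommu_do_pci_domctl(), the quoted per-domain check could
     * then key off the per-device flag instead of needs_vpci(d).
     */
    if ( (domctl->u.assign_device.flags & XEN_DOMCTL_DEV_USE_VPCI) &&
         !has_vpci(d) )
    {
        printk(XENLOG_G_WARNING "Cannot assign %pp to %pd: vPCI support not enabled\n",
               &PCI_SBDF(seg, bus, devfn), d);
        ret = -EPERM;
        break;
    }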