
Re: [Xen-devel] [PATCH v6 5/6] xen/x86: add PHYSDEVOP_interrupt_control



On Fri, Sep 20, 2019 at 06:02:50PM +0200, Marek Marczykowski-Górecki wrote:
> On Fri, Sep 20, 2019 at 12:10:09PM +0200, Jan Beulich wrote:
> > On 14.09.2019 17:37, Marek Marczykowski-Górecki  wrote:
> > > Allow device model running in stubdomain to enable/disable INTx/MSI(-X),
> > > bypassing pciback. While pciback is still used to access config space
> > > from within stubdomain, it refuses to write to
> > > PCI_MSI_FLAGS_ENABLE/PCI_MSIX_FLAGS_ENABLE/PCI_COMMAND_INTX_DISABLE
> > > in non-permissive mode. This is the right thing to do for a PV
> > > domain (the main use case for pciback), as a PV domain should use
> > > XEN_PCI_OP_* commands for that. Unfortunately those commands are
> > > not suitable for stubdomain use, as they configure MSI in dom0's
> > > kernel too, which should not happen for HVM domain.
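
For reference, these are the register bits at stake, with the names
Linux's pci_regs.h gives them, plus a minimal sketch of how a device
model flips the MSI enable bit. msi_set_enable() and the
pci_conf_read16()/pci_conf_write16() accessors are hypothetical
stand-ins for whatever config space accessor the device model uses:

    /* Register bits at stake, named as in Linux's pci_regs.h. */
    #define PCI_COMMAND              0x04   /* 16-bit command register */
    #define PCI_COMMAND_INTX_DISABLE 0x400  /* INTx emulation disable */
    #define PCI_MSI_FLAGS            2      /* offset in MSI capability */
    #define PCI_MSI_FLAGS_ENABLE     0x0001 /* MSI enable bit */
    #define PCI_MSIX_FLAGS           2      /* offset in MSI-X capability */
    #define PCI_MSIX_FLAGS_ENABLE    0x8000 /* MSI-X enable bit */

    /* Hypothetical helper: toggle the MSI enable bit of a device whose
     * MSI capability lives at msi_pos.  In a stubdomain the config
     * write below goes through pciback, which rejects it in
     * non-permissive mode. */
    static void msi_set_enable(struct pci_dev *dev, unsigned int msi_pos,
                               bool enable)
    {
        uint16_t ctrl = pci_conf_read16(dev, msi_pos + PCI_MSI_FLAGS);

        if ( enable )
            ctrl |= PCI_MSI_FLAGS_ENABLE;
        else
            ctrl &= ~PCI_MSI_FLAGS_ENABLE;
        pci_conf_write16(dev, msi_pos + PCI_MSI_FLAGS, ctrl);
    }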
> > 
> > Why the "for HVM domain" here? I.e. why would this be correct for
> > a PV domain? Besides my dislike for such a bypass (imo all of the
> > handling should go through pciback, or none of it) I continue to
> > wonder whether the problem can't be addressed by a pciback change.
> > And even if not, I'd still wonder whether the request shouldn't go
> > through pciback, to retain proper layering. Ultimately it may be
> > better to have even the map/unmap go through pciback (it's at
> > least an apparent violation of the original physdev-op model that
> > these two are XSM_DM_PRIV).
> 
> Technically it should be possible to move this part to pciback, and in
> fact this is what I've considered in the first version of this series.
> But Roger has pointed out on each version[1] of this series that
> pciback is meant to serve *PV* domains, where PCI passthrough is a
> completely different beast. In fact, I even suspect that using
> pcifront in a Linux stubdomain as a proxy for qemu may be a bad idea
> (one needs to be careful to avoid the stubdomain kernel fighting
> with qemu over the device state).

Right, it's (as shown by this series) tricky to proxy HVM passthrough
over the PV pciif protocol used by pcifront and pciback, because that
protocol was designed for PV guest PCI passthrough.

While it's indeed possible to expand the pciif protocol so it's also
suitable for proxying HVM passthrough from a QEMU stubdomain, that
would require changes to Linux pciback at least (and maybe to
pcifront), and its usage would need to be limited to stubdomains
only, so as not to expand the attack surface of pciback.
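
For reference, the protocol's command set and request structure as
they stand, abridged from xen/include/public/io/pciif.h; note that the
enable_msi/enable_msix commands are serviced by the backend domain's
kernel, which is exactly the behaviour an HVM stubdomain has to avoid:

    #define XEN_PCI_OP_conf_read    (0)
    #define XEN_PCI_OP_conf_write   (1)
    #define XEN_PCI_OP_enable_msi   (2)
    #define XEN_PCI_OP_disable_msi  (3)
    #define XEN_PCI_OP_enable_msix  (4)
    #define XEN_PCI_OP_disable_msix (5)

    struct xen_pci_op {
        /* IN: what action to perform: XEN_PCI_OP_* */
        uint32_t cmd;
        /* OUT: will contain an error number, if any */
        int32_t err;
        /* IN: which device to touch */
        uint32_t domain; /* PCI segment */
        uint32_t bus;
        uint32_t devfn;
        /* IN: which configuration registers to touch */
        int32_t offset;
        int32_t size;
        /* IN/OUT: contains the result of a read, or the value to write */
        uint32_t value;
        /* IN: extra info for this operation */
        uint32_t info;
    };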

> Roger, what is the state of Xen internal vPCI? If handling PCI
> passthrough in Xen (or maybe standalone emulator), without qemu help is
> going to happen sooner than later (I guess not 4.13, but maybe 4.14?),
> then maybe this whole patch doesn't make sense as a temporary measure?

I've got an initial series posted to convert vPCI to an internal ioreq
server, so it can co-exist with other ioreq servers that also trap
accesses to the PCI configuration space. Once that's done, the main
work will be to make vPCI safe for unprivileged domains. Right now
vPCI is too permissive, since it's designed for dom0 only.
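
For those who haven't looked at it: vPCI works by registering
per-register trap handlers inside the hypervisor. A sketch following
the handler prototypes of xen/include/xen/vpci.h; the handler bodies
and the shadow value are illustrative only, not the in-tree code:

    static uint16_t shadow_cmd; /* illustrative backing store */

    static uint32_t cmd_read(const struct pci_dev *pdev, unsigned int reg,
                             void *data)
    {
        const uint16_t *cmd = data;

        return *cmd;
    }

    static void cmd_write(const struct pci_dev *pdev, unsigned int reg,
                          uint32_t val, void *data)
    {
        uint16_t *cmd = data;

        /* A domU-safe vPCI would need to audit writes like this one. */
        *cmd = val;
    }

    /* Trap accesses to the 16-bit command register (offset 0x4): */
    rc = vpci_add_register(pdev->vpci, cmd_read, cmd_write, PCI_COMMAND,
                           2, &shadow_cmd);

Making vPCI domU-safe largely means tightening what handlers like
these accept.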

I hope 4.14 will have at least experimental vPCI support for domUs,
but I cannot guarantee anything at this point.

Thanks, Roger.

