
Re: [PATCH] xen/x86: allow Dom0 PVH to call XENMEM_exchange



On Wed, May 07, 2025 at 04:02:11PM -0700, Stefano Stabellini wrote:
> On Tue, 6 May 2025, Roger Pau Monné wrote:
> > On Mon, May 05, 2025 at 11:11:10AM -0700, Stefano Stabellini wrote:
> > > On Mon, 5 May 2025, Roger Pau Monné wrote:
> > > > On Mon, May 05, 2025 at 12:40:18PM +0200, Marek Marczykowski-Górecki wrote:
> > > > > On Mon, Apr 28, 2025 at 01:00:01PM -0700, Stefano Stabellini wrote:
> > > > > > On Mon, 28 Apr 2025, Jan Beulich wrote:
> > > > > > > On 25.04.2025 22:19, Stefano Stabellini wrote:
> > > > > > > > From: Xenia Ragiadakou <Xenia.Ragiadakou@xxxxxxx>
> > > > > > > > 
> > > > > > > > Dom0 PVH might need XENMEM_exchange when passing contiguous memory
> > > > > > > > addresses to firmware or co-processors not behind an IOMMU.
> > > > > > > 
> > > > > > > I definitely don't understand the firmware part: It's subject to the
> > > > > > > same transparent P2M translations as the rest of the VM; it's just
> > > > > > > another piece of software running there.
> > > > > > > 
> > > > > > > "Co-processors not behind an IOMMU" is also interesting; a more
> > > > > > > concrete scenario might be nice, yet I realize you may be limited 
> > > > > > > in
> > > > > > > what you're allowed to say.
> > > > > > 
> > > > > > Sure. On AMD x86 platforms there is a co-processor called the PSP
> > > > > > running TEE firmware. The PSP is not behind an IOMMU. Dom0 occasionally
> > > > > > needs to pass addresses to it.  See drivers/tee/amdtee/ and
> > > > > > include/linux/psp-tee.h in Linux.
> > > > > 
> > > > > We had (have?) a similar issue with amdgpu (for integrated graphics) -
> > > > > it uses the PSP for loading its firmware. With PV dom0 there is a
> > > > > workaround, as dom0 kinda knows the MFNs. I haven't tried PVH dom0 on
> > > > > such a system yet, but I expect trouble (BTW, the hw1 aka zen2 gitlab
> > > > > runner has amdgpu, and it's the one I used for debugging this issue).
> > > > 
> > > > That's ugly, and problematic when used in conjunction with AMD-SEV.
> > > > 
> > > > I wonder if Xen could emulate/mediate some parts of the PSP for dom0
> > > > to use, while allowing Xen to be the sole owner of the device.  Having
> > > > both Xen and dom0 use it (for different purposes) seems like asking
> > > > for trouble.  But I also have no idea how complex the PSP interface
> > > > is, nor whether it would be feasible to emulate the
> > > > interfaces/registers needed for firmware loading.
> > > 
> > > Let me take a step back from the PSP for a moment. I am not opposed to a
> > > PSP mediator in Xen, but I want to emphasize that the issue is more
> > > general and extends well beyond the PSP.
> > > 
> > > In my years working in embedded systems, I have consistently seen cases
> > > where Dom0 needs to communicate with something that does not go through
> > > the IOMMU. This could be due to special firmware on a co-processor, a
> > > hardware erratum that prevents proper IOMMU usage, or a high-bandwidth
> > > device that technically supports the IOMMU but performs poorly unless
> > > the IOMMU is disabled. All of these are real-world examples that I have
> > > seen personally.
> > 
> > I wouldn't be surprised; classic PV dom0 avoided those issues because
> > it was dealing directly with host addresses (mfns), but that's not the
> > case with PVH dom0.
> 
> Yeah
> 
> 
> > > In my opinion, we definitely need a solution like this patch for Dom0
> > > PVH to function correctly in all scenarios.
> > 
> > I'm not opposed to having such an interface available for PVH hardware
> > domains.  I find it ugly, but I don't see much other way to deal with
> > those kind of "devices".  Xen mediating accesses for each one of them
> > is unlikely to be doable.
> > 
> > How do you hook this exchange interface into Linux to differentiate
> > which drivers need to use mfns when interacting with the hardware?
> 
> In the specific case we have at hand, the driver is in Linux userspace
> and is specially written for our use case. It is not generic, so we
> don't have this problem. But your question is valid.

Oh, so you then have some kind of ioctl interface that does the memory
exchange and bouncing inside the kernel on behalf of the user-space
side, I would think?
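
For reference, the in-kernel side of such an exchange would presumably
boil down to something like the sketch below, i.e. roughly what
xen_create_contiguous_region() already does for PV via XENMEM_exchange.
This is only an illustration: the helper name is made up, partial-failure
rollback is omitted, and the exact meaning of the returned frame for a
translated dom0 is precisely what the patch under discussion defines.

    #include <linux/errno.h>
    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /*
     * Illustrative only: trade 1 << order individual guest frames for one
     * machine-contiguous, address-limited extent via XENMEM_exchange.
     * "gfns" holds the input frame numbers; "mfn_out" receives the base
     * frame of the new contiguous extent, which is what would be handed
     * to a device that is not behind the IOMMU.
     */
    static int exchange_for_contiguous(xen_pfn_t *gfns, unsigned int order,
                                       unsigned int address_bits,
                                       xen_pfn_t *mfn_out)
    {
            struct xen_memory_exchange exchange = {
                    .in = {
                            .nr_extents   = 1UL << order,
                            .extent_order = 0,
                            .domid        = DOMID_SELF,
                    },
                    .out = {
                            .nr_extents   = 1,
                            .extent_order = order,
                            .address_bits = address_bits,
                            .domid        = DOMID_SELF,
                    },
            };
            long rc;

            set_xen_guest_handle(exchange.in.extent_start, gfns);
            set_xen_guest_handle(exchange.out.extent_start, mfn_out);

            rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
            if (rc || exchange.nr_exchanged != (1UL << order))
                    return -ENOMEM;

            return 0;
    }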

> In Linux, the issue is hidden behind drivers/xen/swiotlb-xen.c and
> xen_arch_need_swiotlb. There are a few options:
> - force swiotlb bounce for everything on the problematic SoC
> - edit xen_arch_need_swiotlb to return true for the problematic device
> - introduce a kernel command line option to specify which device
>   xen_arch_need_swiotlb should return true for

Isn't it a bit misleading to use the swiotlb for this purpose?  Won't
this usage of the swiotlb (to bounce from gfns to mfns) create issues
if there are any devices that have a DMA physical address limitation and
also need to use the swiotlb while being behind the IOMMU?
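
FWIW, the second option above would presumably amount to something along
the lines of the sketch below, assuming the three-argument
xen_arch_need_swiotlb() prototype that drivers/xen/swiotlb-xen.c calls
today.  The "xen_force_bounce_dev" parameter and its command-line
plumbing are made up for illustration:

    #include <linux/device.h>
    #include <linux/string.h>

    /* Hypothetical: would be filled in from a new command-line option. */
    static char xen_force_bounce_dev[32];

    bool xen_arch_need_swiotlb(struct device *dev,
                               phys_addr_t phys,
                               dma_addr_t dev_addr)
    {
            /*
             * Force bouncing through the machine-contiguous swiotlb-xen
             * buffer for the one device known not to sit behind the IOMMU.
             */
            if (dev && xen_force_bounce_dev[0] &&
                !strcmp(dev_name(dev), xen_force_bounce_dev))
                    return true;

            return false;
    }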

> - introduce an ACPI table with the relevant info

Hm, the best option might be an ACPI table so that Xen can signal to the
hardware domain whether communication with the device must be done
using mfns, or if accesses are mediated and hence can be done using
gfns?

It's a bit cumbersome however to have to resort to an ACPI table just
for this.  Not sure whether we could expand one of the existing tables
already under Xen control (STAO?) to contain this information.  It all
looks a bit ad-hoc.

I think we need some kind of list of devices that are not behind the
IOMMU, but I have no idea what URI to use to identify those.  I assume
the PSP doesn't have a PCI SBDF (as it's not on the PCI bus?).  Maybe
by ACPI path?
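
Just to make that concrete, whatever the transport (a new table or an
STAO extension), each entry would essentially need to carry something
like the purely hypothetical layout below; none of these names exist
today:

    #include <stdint.h>

    /*
     * Hypothetical, for discussion only: one entry per device that the
     * hardware domain must address using MFNs because its DMA is not
     * translated by the IOMMU.
     */
    struct xen_untranslated_dev {
            uint8_t  id_type;               /* 0 = ACPI path, 1 = PCI SBDF */
            union {
                    char     acpi_path[64]; /* ACPI namespace path */
                    uint32_t sbdf;          /* segment:bus:device.function */
            } id;
    };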

Or maybe it's fine to always communicate with those non-translated
devices using MFNs, and even if we later add some kind of PSP
mediation (so that both Xen and dom0 can use it), accesses by dom0
will still be assumed to be using MFNs, and thus need no translation.

Thanks, Roger.