Re: [PATCH] xen/x86: allow Dom0 PVH to call XENMEM_exchange
On Fri, May 09, 2025 at 02:10:03PM -0700, Stefano Stabellini wrote:
> On Fri, 9 May 2025, Roger Pau Monné wrote:
> > On Thu, May 08, 2025 at 04:25:28PM -0700, Stefano Stabellini wrote:
> > > On Thu, 8 May 2025, Roger Pau Monné wrote:
> > > > On Wed, May 07, 2025 at 04:02:11PM -0700, Stefano Stabellini wrote:
> > > > > On Tue, 6 May 2025, Roger Pau Monné wrote:
> > > > > > On Mon, May 05, 2025 at 11:11:10AM -0700, Stefano Stabellini wrote:
> > > > > > > In my opinion, we definitely need a solution like this patch
> > > > > > > for Dom0 PVH to function correctly in all scenarios.
> > > > > >
> > > > > > I'm not opposed to having such an interface available for PVH
> > > > > > hardware domains.  I find it ugly, but I don't see much other way
> > > > > > to deal with those kinds of "devices".  Xen mediating accesses for
> > > > > > each one of them is unlikely to be doable.
> > > > > >
> > > > > > How do you hook this exchange interface into Linux to differentiate
> > > > > > which drivers need to use mfns when interacting with the hardware?
> > > > >
> > > > > In the specific case we have at hand the driver is in Linux userspace
> > > > > and is specially written for our use case.  It is not generic, so we
> > > > > don't have this problem.  But your question is valid.
> > > >
> > > > Oh, so you then have some kind of ioctl interface that does the memory
> > > > exchange and bouncing inside of the kernel on behalf of the user-space
> > > > side, I would think?
> > >
> > > I am not sure... Xenia might know more than me here.
> >
> > One further question I have regarding this approach: have you
> > considered just populating an empty p2m space with contiguous physical
> > memory instead of exchanging an existing area?  That would increase
> > dom0 memory usage, but would prevent super page shattering in the p2m.
> > You could use a dom0_mem=X,max:X+Y command line option, where Y
> > would be your extra room for swiotlb-xen bouncing usage.
> >
> > XENMEM_increase_reservation documentation notes that such a hypercall
> > already returns the base MFN of the allocated page (see the comment in
> > the xen_memory_reservation struct declaration).
>
> XENMEM_exchange is the way it has been implemented traditionally in
> Linux swiotlb-xen (it has been like this for years).  But your idea is
> good.
>
> Another, more drastic, idea would be to attempt to map Dom0 PVH memory
> 1:1 at domain creation time like we do on ARM.  If not all of it, as
> much as possible.  That would resolve the problem very efficiently.  We
> could communicate to Dom0 PVH the range that is 1:1 in one of the
> initial data structures, and that would be the end of it.

Yes, I wonder however whether attempting this would result in a fair
amount of page shattering if we need to cater for pages in use by Xen
that cannot be identity mapped.

Maybe a middle ground: on the Xen command line the admin specifies the
amount of contiguous identity-mapped memory required, and Xen attempts
to allocate and identity map it in the dom0 p2m?

It would be nice to signal such regions on the memory map itself.
Sadly I don't see a way to do it using the UEFI memory map format.
There's an EFI_MEMORY_ISA_MASK region in the attribute field, but I
don't think we can hijack that for Xen purposes.  There's also a 32-bit
padding field in EFI_MEMORY_DESCRIPTOR after the type field, but using
that is possibly risky going forward?  I don't think UEFI can repurpose
that padding easily, but we might not want to bet on that.
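For reference, Linux spells that descriptor out in include/linux/efi.h
roughly as below; the padding being discussed is the 32-bit `pad` member
right after `type` (the UEFI spec leaves it implicit via alignment):

    typedef struct {
            u32 type;       /* EFI memory type code */
            u32 pad;        /* the 32-bit padding after the type field */
            u64 phys_addr;
            u64 virt_addr;
            u64 num_pages;
            u64 attribute;  /* EFI_MEMORY_* attribute bits */
    } efi_memory_desc_t;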
We also have the need to signal which regions are safe to use as
foreign/grant mapping scratch space, so it would be good to use an
interface that can be expanded.  IOW: have a way to add extra
Xen-specific attributes to memory regions.

Anyway, for the patch at hand: I see no reason to prevent
XENMEM_exchange usage.  I think it's maybe not the best option because
of the super-page shattering consequences, but I assume you already
have a working solution based on this.

> > > > > In Linux, the issue is hidden behind drivers/xen/swiotlb-xen.c and
> > > > > xen_arch_need_swiotlb.  There are a few options:
> > > > > - force swiotlb bouncing for everything on the problematic SoC
> > > > > - edit xen_arch_need_swiotlb to return true for the problematic
> > > > >   device
> > > > > - introduce a kernel command line option to specify which device
> > > > >   xen_arch_need_swiotlb should return true for
> > > >
> > > > Isn't it a bit misleading to use the swiotlb for this purpose?  Won't
> > > > this usage of the swiotlb (to bounce from gfns to mfns) create issues
> > > > if there are any devices that have a DMA physical address limitation
> > > > and also need to use the swiotlb while being behind the IOMMU?
> > >
> > > When I wrote swiotlb, I meant swiotlb-xen (drivers/xen/swiotlb-xen.c).
> > > We have been using it for exactly this kind of address translation so
> > > far.  It can also deal with cases where genuine bouncing needs to
> > > happen.
> >
> > Oh, I see.  I had assumed you meant the generic Linux swiotlb.
> >
> > So you have repurposed swiotlb-xen to be used on PVH for this purpose.
> > I think (currently?) swiotlb-xen is unconditionally disabled for
> > HVM/PVH guests?
>
> Yes, I repurposed swiotlb-xen for something similar years ago on ARM.
> I was planning to do the same for PVH x86.  Today, swiotlb-xen is used
> for ARM Dom0, which as you know is HVM/PVH from Linux's point of view.

Sounds good, no objection from my side.

Regards, Roger.
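(For readers following along: below is a rough, untested sketch of what
the XENMEM_exchange call discussed in this thread looks like from the
guest side, using the structures and hypercall wrapper from Linux's
include/xen/interface/memory.h and asm/xen/hypercall.h.  The helper name
and the minimal error handling are invented for illustration; this is
not code from the patch under review.)

    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    /*
     * Trade 2^order existing single frames for one machine-contiguous
     * extent of the same total size.  pfns_in must hold 2^order frame
     * numbers handed back to Xen; on success Xen writes the base frame
     * of the new contiguous extent into *out_frame.  address_bits may
     * restrict where the new extent is allocated (0 = no restriction).
     */
    static int example_exchange_for_contiguous(xen_pfn_t *pfns_in,
                                               xen_pfn_t *out_frame,
                                               unsigned int order,
                                               unsigned int address_bits)
    {
            struct xen_memory_exchange exchange = {
                    .in = {
                            .nr_extents   = 1UL << order,
                            .extent_order = 0,
                            .domid        = DOMID_SELF,
                    },
                    .out = {
                            .nr_extents   = 1,
                            .extent_order = order,
                            .address_bits = address_bits,
                            .domid        = DOMID_SELF,
                    },
            };

            set_xen_guest_handle(exchange.in.extent_start, pfns_in);
            set_xen_guest_handle(exchange.out.extent_start, out_frame);

            /*
             * Returns 0 on complete success; exchange.nr_exchanged
             * reports partial progress otherwise.
             */
            return HYPERVISOR_memory_op(XENMEM_exchange, &exchange);
    }

On PV, Linux's xen_create_contiguous_region() drives roughly this same
hypercall for swiotlb-xen, with additional p2m bookkeeping around it.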