Re: Mapping memory into a domain
I suppose this is still about multiplexing the GPU driver the way we last
discussed at Xen Summit?

On Mon May 5, 2025 at 12:51 AM CEST, Demi Marie Obenour wrote:
> What are the appropriate Xen internal functions for:
>
> 1. Turning a PFN into an MFN?
> 2. Mapping an MFN into a guest?
> 3. Unmapping that MFN from a guest?

The p2m is the single source of truth about such mappings.

This is all racy business. You want to hold the p2m lock for the full
duration of whatever operation you wish to do, or you risk another CPU
taking it and pulling the rug out from under your feet at the most
inconvenient time.

In general all this faff is hidden under way too many layers beneath
copy_{to,from}_guest(). Other high-level p2m manipulation constructs that
do interesting things and are worth looking at are {map,unmap}_mmio_region().

Note that not every pfn has an associated mfn. Not even every valid pfn
necessarily has an associated mfn (there's PoD). And all of this is
volatile business in the presence of a balloon driver or vPCI placing
MMIO windows over guest memory.

In general, anything up this alley needs a cohesive map/unmap pair and a
credible plan for concurrency and for how it all interacts with the other
bits that touch the p2m.

> The first patch I am going to send with this information is a documentation
> patch so that others do not need to figure this out for themselves.
> I remember being unsure even after looking through the source code, which
> is why I am asking here.

That's not surprising. There's per-arch stuff, per-p2m-type stuff,
per-guest-type stuff. Plus madness like on-demand memory. It's no wonder
such helpers don't exist and the general manipulations are hard to explain.

Cheers,
Alejandro
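
P.S. As a rough, untested sketch of the locking pattern only (x86/HVM
assumed; get_gfn()/put_gfn() wrap the per-gfn p2m lock, and everything
here is one of several possible entry points, not a recommendation):

#include <xen/errno.h>
#include <xen/sched.h>      /* struct domain */
#include <asm/p2m.h>        /* get_gfn()/put_gfn(), p2m_type_t */

/*
 * Illustrative only: get_gfn() takes the p2m lock for this gfn and
 * returns the backing MFN (if any); put_gfn() drops it, after which the
 * translation may change at any time (ballooning, PoD, vPCI BARs, ...).
 * Any work that depends on the translation must happen between the two.
 */
static int frob_guest_frame(struct domain *d, unsigned long gfn)
{
    p2m_type_t t;
    mfn_t mfn = get_gfn(d, gfn, &t);    /* p2m lock held from here... */
    int rc = 0;

    if ( !p2m_is_ram(t) || !mfn_valid(mfn) )
        rc = -EINVAL;                   /* hole, PoD, MMIO, etc. */
    else
    {
        /* ... operate on mfn while the translation cannot change ... */
    }

    put_gfn(d, gfn);                    /* ...until here; mfn may now go stale */
    return rc;
}

For the actual map/unmap pair, guest_physmap_add_page() and
guest_physmap_remove_page() (or the mmio helpers mentioned above) are the
usual high-level entry points, but they come with all the per-arch and
per-p2m-type caveats described above.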