Re: [PATCH v1 00/16] dma-mapping: migrate to physical address-based API
On 07.08.2025 16:19, Jason Gunthorpe wrote:
> On Mon, Aug 04, 2025 at 03:42:34PM +0300, Leon Romanovsky wrote:
>> Changelog:
>> v1:
>>  * Added new DMA_ATTR_MMIO attribute to indicate
>>    PCI_P2PDMA_MAP_THRU_HOST_BRIDGE path.
>>  * Rewrote dma_map_* functions to use this new attribute
>> v0: https://lore.kernel.org/all/cover.1750854543.git.leon@xxxxxxxxxx/
>> ------------------------------------------------------------------------
>>
>> This series refactors the DMA mapping to use physical addresses
>> as the primary interface instead of page+offset parameters. This
>> change aligns the DMA API with the underlying hardware reality where
>> DMA operations work with physical addresses, not page structures.
>
> Let's elaborate this as Robin asked:
>
> This series refactors the DMA mapping API to provide a phys_addr_t
> based, struct-page free, external API that can handle all the
> mapping cases we want in modern systems:
>
>  - struct page based cachable DRAM
>  - struct page MEMORY_DEVICE_PCI_P2PDMA PCI peer to peer
>    non-cachable MMIO
>  - struct page-less PCI peer to peer non-cachable MMIO
>  - struct page-less "resource" MMIO
>
> Overall this gets much closer to Matthew's long term wish for
> struct-pageless IO to cachable DRAM. The remaining primary work would
> be on the mm side to allow kmap_local_pfn()/phys_to_virt() to work on
> a phys_addr_t without a struct page.
>
> The general design is to remove struct page usage entirely from the
> DMA API inner layers. Flows that need a KVA for the physical address
> can use kmap_local_pfn() or phys_to_virt(). This isolates the struct
> page requirements to MM code only. Long term, all removals of struct
> page usage support Matthew's memdesc project, which seeks to
> substantially transform how struct page works.
>
> Instead, the DMA API internals are made to work on phys_addr_t.
> Internally there are still dedicated 'page' and 'resource' flows,
> except they are now distinguished by the new DMA_ATTR_MMIO instead of
> by callchain. Both flows use the same phys_addr_t.
>
> When DMA_ATTR_MMIO is specified, things work similarly to the
> existing 'resource' flow: kmap_local_pfn(), phys_to_virt(),
> phys_to_page(), pfn_valid(), etc. are never called on the
> phys_addr_t. This requires rejecting any configuration that would
> need swiotlb. CPU cache flushing is not required, and is avoided, as
> DMA_ATTR_MMIO also indicates that the address has no cachable
> mappings. This effectively removes any DMA API side requirement to
> have a struct page when DMA_ATTR_MMIO is used.
>
> In the !DMA_ATTR_MMIO mode, things work similarly to the 'page' flow,
> except that the common path (no cache flush, no swiotlb) never
> touches a struct page. When cache flushing or swiotlb copying is
> needed, kmap_local_pfn()/phys_to_virt() are used to get a KVA for CPU
> usage. This was already the case on the unmap side; now the map side
> is symmetric.
>
> Callers are adjusted to set DMA_ATTR_MMIO. Existing 'resource' users
> must set it. The existing struct page based MEMORY_DEVICE_PCI_P2PDMA
> path must also set it. This corrects some existing bugs where iommu
> mappings for P2P MMIO were improperly marked IOMMU_CACHE.
>
> Since DMA_ATTR_MMIO is made to work with all the existing DMA map
> entry points, particularly dma_iova_link(), this finally allows a way
> to use the new DMA API to map PCI P2P MMIO without creating a struct
> page. The VFIO DMABUF series demonstrates how this works. This is
> intended to replace the incorrect driver use of dma_map_resource() on
> PCI BAR addresses.
>
> This series does the core code and modern flows. A followup series
> will give the same treatment to the legacy dma_ops implementation.

Thanks for the elaborate description, that's something that was missing
in the previous attempt. I read all the previous discussion again,
together with this explanation, and there are still two things that
imho need more clarification.
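For concreteness, if I understand the proposal correctly, a typical
driver would end up using the new entry point roughly as in the sketch
below; dma_map_phys() and DMA_ATTR_MMIO are as proposed in this series,
while the helper itself and its error handling are only illustrative:

#include <linux/dma-mapping.h>

/*
 * Illustrative helper: map one range that is either cachable DRAM
 * or P2P MMIO (e.g. a PCI BAR), using the single proposed entry
 * point.
 */
static dma_addr_t map_one(struct device *dev, phys_addr_t phys,
			  size_t size, bool mmio)
{
	/*
	 * DMA_ATTR_MMIO makes the core skip swiotlb bouncing, CPU
	 * cache maintenance and any struct page lookup for this
	 * range.
	 */
	unsigned long attrs = mmio ? DMA_ATTR_MMIO : 0;
	dma_addr_t dma;

	dma = dma_map_phys(dev, phys, size, DMA_BIDIRECTIONAL, attrs);
	if (dma_mapping_error(dev, dma))
		return DMA_MAPPING_ERROR;

	return dma;
}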
First - basing the API on phys_addr_t. The page based API had the
advantage that it was really hard to abuse it and call it for something
that is not 'normal RAM'. I initially thought that a phys_addr_t based
API would somehow simplify the arch specific implementations, as some
of them indeed rely on phys_addr_t internally, but I missed the other
issues pointed out by Robin. Do we have any alternative here?

Second - making dma_map_phys() a single API to handle all cases. Do we
really need such a single function to handle all cases? To handle the
P2P case, the caller already must pass DMA_ATTR_MMIO, so it must
somehow keep that information internally anyway. Can't it just call the
existing dma_map_resource(), so that there is a clear distinction
between these two cases (DMA to RAM and P2P DMA)? Do we need an
additional check for DMA_ATTR_MMIO for every typical DMA user? I know
that branching is cheap, but this will probably increase code size for
most of the typical users for no reason.
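Something like the sketch below is what I have in mind;
dma_map_resource() is the existing API, dma_map_phys() is as proposed
in this series, and is_p2p_mmio stands for whatever state the caller
already keeps:

#include <linux/dma-mapping.h>

/*
 * Keep the two cases at separate entry points: a caller that
 * already knows it is doing P2P uses the dedicated resource path,
 * and typical RAM users take a path with no attrs check at all.
 */
static dma_addr_t map_for_device(struct device *dev, phys_addr_t phys,
				 size_t size, bool is_p2p_mmio)
{
	if (is_p2p_mmio)
		return dma_map_resource(dev, phys, size,
					DMA_TO_DEVICE, 0);

	return dma_map_phys(dev, phys, size, DMA_TO_DEVICE, 0);
}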
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland