Re: [PATCH v4 14/16] block-dma: migrate to dma_map_phys instead of map_page
- To: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
- From: Keith Busch <kbusch@xxxxxxxxxx>
- Date: Tue, 2 Sep 2025 15:59:37 -0600
- Cc: Leon Romanovsky <leon@xxxxxxxxxx>, Leon Romanovsky <leonro@xxxxxxxxxx>, Jason Gunthorpe <jgg@xxxxxxxxxx>, Abdiel Janulgue <abdiel.janulgue@xxxxxxxxx>, Alexander Potapenko <glider@xxxxxxxxxx>, Alex Gaynor <alex.gaynor@xxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Danilo Krummrich <dakr@xxxxxxxxxx>, iommu@xxxxxxxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, Jonathan Corbet <corbet@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, kasan-dev@xxxxxxxxxxxxxxxx, linux-block@xxxxxxxxxxxxxxx, linux-doc@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-nvme@xxxxxxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-trace-kernel@xxxxxxxxxxxxxxx, Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>, Masami Hiramatsu <mhiramat@xxxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Miguel Ojeda <ojeda@xxxxxxxxxx>, Robin Murphy <robin.murphy@xxxxxxx>, rust-for-linux@xxxxxxxxxxxxxxx, Sagi Grimberg <sagi@xxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Steven Rostedt <rostedt@xxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxx, Will Deacon <will@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Tue, 02 Sep 2025 21:59:59 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On Tue, Sep 02, 2025 at 10:49:48PM +0200, Marek Szyprowski wrote:
> On 19.08.2025 19:36, Leon Romanovsky wrote:
> > @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
> >  static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> >  		struct blk_dma_iter *iter, struct phys_vec *vec)
> >  {
> > -	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> > -			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
> > +	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
> > +			rq_dma_dir(req), 0);
> >  	if (dma_mapping_error(dma_dev, iter->addr)) {
> >  		iter->status = BLK_STS_RESOURCE;
> >  		return false;
>
> I wonder where the corresponding dma_unmap_page() call is and its change
> to dma_unmap_phys()...
You can't do that in the generic layer, so it's up to the caller. The
DMA addresses that blk_dma_iter yields are stored in a caller-specific
structure; for NVMe, for example, they go into an NVMe PRP list. The
generic layer doesn't know what that is, so the driver has to provide
the unmapping.
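
To make the split of responsibility concrete, here is a minimal,
hypothetical sketch (not the actual NVMe code; struct my_dma_seg and
my_driver_unmap_data are invented names) of how a driver that consumed
blk_dma_iter mappings could undo them with dma_unmap_phys(), mirroring
the dma_map_phys() call in blk_dma_map_direct() quoted above:

#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical driver-side bookkeeping: the driver records each
 * dma_map_phys() result (address + length) in its own per-request
 * structure -- for NVMe that information ends up in the PRP/SGL
 * entries -- and walks it at completion time to unmap.
 */
struct my_dma_seg {
	dma_addr_t	addr;	/* value returned by dma_map_phys() */
	u32		len;	/* length passed to dma_map_phys() */
};

static void my_driver_unmap_data(struct device *dma_dev, struct request *req,
				 struct my_dma_seg *segs, unsigned int nr_segs)
{
	unsigned int i;

	for (i = 0; i < nr_segs; i++)
		dma_unmap_phys(dma_dev, segs[i].addr, segs[i].len,
			       rq_dma_dir(req), 0);
}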