Re: [PATCH v4 14/16] block-dma: migrate to dma_map_phys instead of map_page
- To: Keith Busch <kbusch@xxxxxxxxxx>
- From: Jason Gunthorpe <jgg@xxxxxxxxxx>
- Date: Tue, 2 Sep 2025 20:24:57 -0300
- Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>, Leon Romanovsky <leon@xxxxxxxxxx>, Leon Romanovsky <leonro@xxxxxxxxxx>, Abdiel Janulgue <abdiel.janulgue@xxxxxxxxx>, Alexander Potapenko <glider@xxxxxxxxxx>, Alex Gaynor <alex.gaynor@xxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Danilo Krummrich <dakr@xxxxxxxxxx>, iommu@xxxxxxxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, Jonathan Corbet <corbet@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, kasan-dev@xxxxxxxxxxxxxxxx, linux-block@xxxxxxxxxxxxxxx, linux-doc@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-nvme@xxxxxxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-trace-kernel@xxxxxxxxxxxxxxx, Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>, Masami Hiramatsu <mhiramat@xxxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Miguel Ojeda <ojeda@xxxxxxxxxx>, Robin Murphy <robin.murphy@xxxxxxx>, rust-for-linux@xxxxxxxxxxxxxxx, Sagi Grimberg <sagi@xxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Steven Rostedt <rostedt@xxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxx, Will Deacon <will@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Tue, 02 Sep 2025 23:25:20 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On Tue, Sep 02, 2025 at 03:59:37PM -0600, Keith Busch wrote:
> On Tue, Sep 02, 2025 at 10:49:48PM +0200, Marek Szyprowski wrote:
> > On 19.08.2025 19:36, Leon Romanovsky wrote:
> > > @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
> > >  		struct blk_dma_iter *iter, struct phys_vec *vec)
> > >  {
> > > -	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
> > > -			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
> > > +	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
> > > +			rq_dma_dir(req), 0);
> > >  	if (dma_mapping_error(dma_dev, iter->addr)) {
> > >  		iter->status = BLK_STS_RESOURCE;
> > >  		return false;
> >
> > I wonder where the corresponding dma_unmap_page() call is, and where
> > its change to dma_unmap_phys() happens...
>
> You can't do that in the generic layer, so it's up to the caller. The
> dma addrs that blk_dma_iter yields are used in a caller-specific
> structure. For example, for NVMe, they go into an NVMe PRP. The generic
> layer doesn't know what that is, so the driver has to provide the
> unmapping.
To be specific, I think it is this hunk in another patch that matches
the above:
@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-			       iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+			       iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
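
Put differently, any consumer of the blk_dma_iter API ends up with the
same shape: record every address the iterator hands out in a
driver-owned structure during mapping, then walk that structure again
with dma_unmap_phys() at completion. A rough sketch of that pattern,
assuming the blk_rq_dma_map_iter_start()/_next() helpers and the
blk_dma_iter fields from this series; the "foo" pdu and its members are
invented for illustration, and the IOVA-coalescing path is ignored:

/* Rough sketch only, not from the series. */
#define FOO_MAX_SEGS	128

struct foo_dma_vec {
	dma_addr_t	addr;
	u32		len;
};

struct foo_iod {
	struct foo_dma_vec	vecs[FOO_MAX_SEGS];
	unsigned int		nr_vecs;
};

static blk_status_t foo_map_rq(struct device *dma_dev, struct request *req,
			       struct dma_iova_state *state)
{
	struct foo_iod *iod = blk_mq_rq_to_pdu(req);
	struct blk_dma_iter iter;

	iod->nr_vecs = 0;
	if (!blk_rq_dma_map_iter_start(req, dma_dev, state, &iter))
		return iter.status;

	do {
		/*
		 * iter.addr was produced by dma_map_phys(); only the
		 * driver knows where it ends up (e.g. an NVMe PRP), so
		 * it has to remember enough to unmap it later.
		 */
		iod->vecs[iod->nr_vecs].addr = iter.addr;
		iod->vecs[iod->nr_vecs].len = iter.len;
		iod->nr_vecs++;
	} while (blk_rq_dma_map_iter_next(req, dma_dev, state, &iter));

	return iter.status;
}

static void foo_unmap_rq(struct device *dma_dev, struct request *req)
{
	struct foo_iod *iod = blk_mq_rq_to_pdu(req);
	unsigned int i;

	/* the matching unmap that the generic layer cannot do for us */
	for (i = 0; i < iod->nr_vecs; i++)
		dma_unmap_phys(dma_dev, iod->vecs[i].addr, iod->vecs[i].len,
			       rq_dma_dir(req), 0);
}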
And it is functionally fine to split the series like this, because
dma_unmap_page() is effectively just a thin wrapper around
dma_unmap_phys():
void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	if (unlikely(attrs & DMA_ATTR_MMIO))
		return;
	dma_unmap_phys(dev, addr, size, dir, attrs);
}
EXPORT_SYMBOL(dma_unmap_page_attrs);
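
And presumably the map side is the same kind of thin wrapper; something
along these lines, not copied from the series, so the exact DMA_ATTR_MMIO
handling here is a guess:

dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
		size_t offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	/*
	 * Page-backed memory is by definition not MMIO; rejecting the
	 * attribute here is an assumption, not taken from the series.
	 */
	if (WARN_ON_ONCE(attrs & DMA_ATTR_MMIO))
		return DMA_MAPPING_ERROR;

	return dma_map_phys(dev, page_to_phys(page) + offset, size, dir, attrs);
}
EXPORT_SYMBOL(dma_map_page_attrs);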
Jason