Re: [PATCH v1 06/16] iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
- To: Leon Romanovsky <leon@xxxxxxxxxx>
- From: Jason Gunthorpe <jgg@xxxxxxxxxx>
- Date: Thu, 7 Aug 2025 09:07:15 -0300
- Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>, Leon Romanovsky <leonro@xxxxxxxxxx>, Abdiel Janulgue <abdiel.janulgue@xxxxxxxxx>, Alexander Potapenko <glider@xxxxxxxxxx>, Alex Gaynor <alex.gaynor@xxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Danilo Krummrich <dakr@xxxxxxxxxx>, iommu@xxxxxxxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, Jonathan Corbet <corbet@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, kasan-dev@xxxxxxxxxxxxxxxx, Keith Busch <kbusch@xxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, linux-doc@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-nvme@xxxxxxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-trace-kernel@xxxxxxxxxxxxxxx, Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>, Masami Hiramatsu <mhiramat@xxxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Miguel Ojeda <ojeda@xxxxxxxxxx>, Robin Murphy <robin.murphy@xxxxxxx>, rust-for-linux@xxxxxxxxxxxxxxx, Sagi Grimberg <sagi@xxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Steven Rostedt <rostedt@xxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxx, Will Deacon <will@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Thu, 07 Aug 2025 12:07:34 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On Mon, Aug 04, 2025 at 03:42:40PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@xxxxxxxxxx>
>
> Combine iommu_dma_*map_phys with iommu_dma_*map_resource interfaces in
> order to allow single phys_addr_t flow.
Some later patch deletes iommu_dma_map_resource()? Mention that plan here?
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1193,12 +1193,17 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
>  dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	bool coherent = dev_is_dma_coherent(dev);
> -	int prot = dma_info_to_prot(dir, coherent, attrs);
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>  	struct iova_domain *iovad = &cookie->iovad;
>  	dma_addr_t iova, dma_mask = dma_get_mask(dev);
> +	bool coherent;
> +	int prot;
> +
> +	if (attrs & DMA_ATTR_MMIO)
> +		return __iommu_dma_map(dev, phys, size,
> +				dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
> +				dma_get_mask(dev));
I realize that iommu_dma_map_resource() doesn't today, but shouldn't
this be checking for swiotlb:

	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size)) {

Except we have to fail for ATTR_MMIO?
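Something like this is what I have in mind, just as a sketch of the
shape (the bounce path itself stays whatever this series ends up
calling it):

	if (dev_use_swiotlb(dev, size, dir) &&
	    iova_unaligned(iovad, phys, size)) {
		/* MMIO can't be bounced through swiotlb, so refuse it */
		if (attrs & DMA_ATTR_MMIO)
			return DMA_MAPPING_ERROR;
		/* ... existing swiotlb bounce handling ... */
	}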
Now that we have ATTR_MMIO, should dma_info_to_prot() just handle it
directly instead of open coding the | IOMMU_MMIO and messing with the
coherent attribute?
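Roughly like this, as a sketch based on the existing dma_info_to_prot()
body:

static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
		     unsigned long attrs)
{
	int prot;

	/* MMIO mappings are never cacheable, whatever the device says */
	if (attrs & DMA_ATTR_MMIO)
		prot = IOMMU_MMIO;
	else
		prot = coherent ? IOMMU_CACHE : 0;

	if (attrs & DMA_ATTR_PRIVILEGED)
		prot |= IOMMU_PRIV;

	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return prot | IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return prot | IOMMU_READ;
	case DMA_FROM_DEVICE:
		return prot | IOMMU_WRITE;
	default:
		return 0;
	}
}

Then the DMA_ATTR_MMIO branch above could just call
dma_info_to_prot(dir, coherent, attrs) and drop the explicit
| IOMMU_MMIO and the forced false.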
Jason