Re: [Xen-devel] [PATCH 03/11] xen/arm: pass one less argument to dma_cache_maint
On 16/08/2019 14:00, Christoph Hellwig wrote:
> Instead of taking apart the dma address in both callers do it inside
> dma_cache_maint itself.
> 
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  arch/arm/xen/mm.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index 90574d89d0d4..d9da24fda2f7 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -43,13 +43,15 @@ static bool hypercall_cflush = false;
>  
>  /* functions called by SWIOTLB */
>  
> -static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
> -	size_t size, enum dma_data_direction dir, enum dma_cache_op op)
> +static void dma_cache_maint(dma_addr_t handle, size_t size,
> +	enum dma_data_direction dir, enum dma_cache_op op)
>  {
>  	struct gnttab_cache_flush cflush;
>  	unsigned long xen_pfn;
> +	unsigned long offset = handle & ~PAGE_MASK;
>  	size_t left = size;
>  
> +	offset &= PAGE_MASK;

Ahem... presumably that should be handle, not offset.

Robin.

>  	xen_pfn = (handle >> XEN_PAGE_SHIFT) + offset / XEN_PAGE_SIZE;
>  	offset %= XEN_PAGE_SIZE;
> 
> @@ -86,13 +88,13 @@ static void dma_cache_maint(dma_addr_t handle, unsigned long offset,
>  static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
>  		size_t size, enum dma_data_direction dir)
>  {
> -	dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_UNMAP);
> +	dma_cache_maint(handle, size, dir, DMA_UNMAP);
>  }
>  
>  static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
>  		size_t size, enum dma_data_direction dir)
>  {
> -	dma_cache_maint(handle & PAGE_MASK, handle & ~PAGE_MASK, size, dir, DMA_MAP);
> +	dma_cache_maint(handle, size, dir, DMA_MAP);
>  }
>  
>  void __xen_dma_map_page(struct device *hwdev, struct page *page,
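For context on Robin's catch: offset has just been computed as handle & ~PAGE_MASK, so it is always smaller than PAGE_SIZE, and "offset &= PAGE_MASK" therefore always yields zero. Presumably the intent was to strip the sub-page bits from handle instead. A minimal standalone sketch of the two maskings, assuming 4 KiB pages and the usual PAGE_MASK definition (a hypothetical user-space demo, not the kernel code itself):

    #include <stdio.h>

    #define PAGE_SHIFT 12                   /* assumption: 4 KiB pages */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    int main(void)
    {
            unsigned long handle = 0x12345678;          /* example bus address */
            unsigned long offset = handle & ~PAGE_MASK; /* 0x678, always < PAGE_SIZE */

            offset &= PAGE_MASK;    /* as posted: unconditionally clears offset to 0 */
            printf("as posted: offset = 0x%lx\n", offset);

            offset = handle & ~PAGE_MASK;   /* recompute the sub-page offset */
            handle &= PAGE_MASK;            /* presumed intent: page-align handle */
            printf("presumed:  handle = 0x%lx, offset = 0x%lx\n", handle, offset);
            return 0;
    }

With offset forced to zero, the later "offset %= XEN_PAGE_SIZE" stays zero as well, so the cache maintenance would appear to start at the base of the containing page rather than at the buffer's actual start address.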