
Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses



On Fri, 16 Jul 2021, Roman Skakun wrote:
> > Technically this looks good.  But given that exposing a helper
> > that does either vmalloc_to_page or virt_to_page is one of the
> > never ending MM discussions I don't want to get into that discussion
> > and just keep it local in the DMA code.
> >
> > Are you fine with me applying this version?
> 
> Looks good to me, thanks!
> But Stefano asked me about using the new helper in
> xen_swiotlb_free_coherent(), and I created a patch based on that
> suggestion.
> 
> We can merge this patch and create a new one for
> xen_swiotlb_free_coherent() later.

Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page
was problematic.

This patch is fine by me.
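
For reference, a minimal sketch of what such a follow-up for
xen_swiotlb_free_coherent() could look like, assuming the vmalloc-aware
lookup stays local to swiotlb-xen rather than exposing
dma_common_vaddr_to_page outside the DMA core (the helper name below is
illustrative only, not from the applied patch):

	/* Hypothetical local helper; mirrors dma_common_vaddr_to_page(). */
	static struct page *xen_swiotlb_vaddr_to_page(void *cpu_addr)
	{
		/* Coherent buffers remapped on Xen/ARM can live in vmalloc space. */
		if (is_vmalloc_addr(cpu_addr))
			return vmalloc_to_page(cpu_addr);
		return virt_to_page(cpu_addr);
	}

xen_swiotlb_free_coherent() could then use this wherever it needs the
struct page behind the CPU address it is freeing.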


> On Fri, 16 Jul 2021 at 12:35, Christoph Hellwig <hch@xxxxxx> wrote:
> >
> > Technically this looks good.  But given that exposing a helper
> > that does either vmalloc_to_page or virt_to_page is one of the
> > never ending MM discussions I don't want to get into that discussion
> > and just keep it local in the DMA code.
> >
> > Are you fine with me applying this version?
> >
> > ---
> > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001
> > From: Roman Skakun <Roman_Skakun@xxxxxxxx>
> > Date: Fri, 16 Jul 2021 11:39:34 +0300
> > Subject: dma-mapping: handle vmalloc addresses in
> >  dma_common_{mmap,get_sgtable}
> >
> > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations
> > and uses the common helpers.  Properly handle them to unbreak Xen on
> > ARM platforms.
> >
> > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages")
> > Signed-off-by: Roman Skakun <roman_skakun@xxxxxxxx>
> > Reviewed-by: Andrii Anisov <andrii_anisov@xxxxxxxx>
> > [hch: split the patch, renamed the helpers]
> > Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> > ---
> >  kernel/dma/ops_helpers.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> > index 910ae69cae77..af4a6ef48ce0 100644
> > --- a/kernel/dma/ops_helpers.c
> > +++ b/kernel/dma/ops_helpers.c
> > @@ -5,6 +5,13 @@
> >   */
> >  #include <linux/dma-map-ops.h>
> >
> > +static struct page *dma_common_vaddr_to_page(void *cpu_addr)
> > +{
> > +       if (is_vmalloc_addr(cpu_addr))
> > +               return vmalloc_to_page(cpu_addr);
> > +       return virt_to_page(cpu_addr);
> > +}
> > +
> >  /*
> >   * Create scatter-list for the already allocated DMA buffer.
> >   */
> > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> >                  void *cpu_addr, dma_addr_t dma_addr, size_t size,
> >                  unsigned long attrs)
> >  {
> > -       struct page *page = virt_to_page(cpu_addr);
> > +       struct page *page = dma_common_vaddr_to_page(cpu_addr);
> >         int ret;
> >
> >         ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >         unsigned long user_count = vma_pages(vma);
> >         unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> >         unsigned long off = vma->vm_pgoff;
> > +       struct page *page = dma_common_vaddr_to_page(cpu_addr);
> >         int ret = -ENXIO;
> >
> >         vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
> > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >                 return -ENXIO;
> >
> >         return remap_pfn_range(vma, vma->vm_start,
> > -                       page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> > +                       page_to_pfn(page) + vma->vm_pgoff,
> >                         user_count << PAGE_SHIFT, vma->vm_page_prot);
> >  #else
> >         return -ENXIO;
> > --
> > 2.30.2
> >
> 
> 
> -- 
> Best Regards, Roman.
> 
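
For completeness, a hedged example of the driver-visible path this fix
affects: exporting a dma_alloc_coherent() buffer to userspace with
dma_mmap_coherent(), which for xen-swiotlb ends up in the dma_common_mmap()
helper patched above. The device lookup and the my_drv_mmap name are
assumptions for illustration only, not part of the patch:

	#include <linux/dma-mapping.h>
	#include <linux/fs.h>
	#include <linux/mm.h>

	/* Buffer assumed to have been set up earlier with dma_alloc_coherent(). */
	static void *buf;		/* CPU address, may be vmalloc-backed on Xen/ARM */
	static dma_addr_t buf_dma;	/* device address */
	static size_t buf_size;

	static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct device *dev = file->private_data;	/* assumption: stashed at open() */

		/*
		 * Dispatches to dma_common_mmap() for xen-swiotlb.  Before this
		 * patch, virt_to_page() on a vmalloc address produced a bogus pfn
		 * for remap_pfn_range(), breaking the mapping on ARM.
		 */
		return dma_mmap_coherent(dev, vma, buf, buf_dma, buf_size);
	}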

 

