
Re: [RFC PATCH v3 4/6] swiotlb: Add restricted DMA alloc/free support.



> +#ifdef CONFIG_SWIOTLB
> +     if (unlikely(dev->dma_io_tlb_mem))
> +             return swiotlb_alloc(dev, size, dma_handle, attrs);
> +#endif

Another place where the dma_io_tlb_mem field would be useful to avoid the ifdef.
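
A rough sketch of what that could look like, assuming swiotlb_alloc() grows a
!CONFIG_SWIOTLB stub and dev->dma_io_tlb_mem is reachable (or wrapped in a
helper) in that configuration:

    /* e.g. in <linux/swiotlb.h>: */
    #ifndef CONFIG_SWIOTLB
    static inline void *swiotlb_alloc(struct device *dev, size_t size,
                                      dma_addr_t *dma_handle, unsigned long attrs)
    {
            return NULL;
    }
    #endif

    /* the caller then needs no guard: */
    if (unlikely(dev->dma_io_tlb_mem))
            return swiotlb_alloc(dev, size, dma_handle, attrs);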

> -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> -             size_t mapping_size, size_t alloc_size,
> -             enum dma_data_direction dir, unsigned long attrs)
> +static int swiotlb_tbl_find_free_region(struct device *hwdev,
> +                                     dma_addr_t tbl_dma_addr,
> +                                     size_t alloc_size,
> +                                     unsigned long attrs)

> +static void swiotlb_tbl_release_region(struct device *hwdev, int index,
> +                                    size_t size)

This refactoring should be another prep patch.


> +void *swiotlb_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
> +                 unsigned long attrs)

I'd rather have the names convey that they are for the per-device bounce
buffer in some form.
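
Purely for illustration, something along these lines (the name here is made
up, not a concrete proposal):

    void *swiotlb_dev_alloc(struct device *dev, size_t size,
                            dma_addr_t *dma_handle, unsigned long attrs);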

> +     struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

While we're at it, I wonder if the io_tlb naming is something we could
change as well.  Maybe replace struct io_tlb_mem with struct swiotlb
and rename the field in struct device to dev_swiotlb?
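
Roughly, just to show the shape of the rename (member layout elided, this
is only an illustration):

    struct swiotlb {                        /* was struct io_tlb_mem */
            phys_addr_t start;
            /* ... rest of the members unchanged ... */
    };

    /* and in struct device: */
            struct swiotlb *dev_swiotlb;    /* was dma_io_tlb_mem */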

> +     int index;
> +     void *vaddr;
> +     phys_addr_t tlb_addr;
> +
> +     size = PAGE_ALIGN(size);
> +     index = swiotlb_tbl_find_free_region(dev, mem->start, size, attrs);
> +     if (index < 0)
> +             return NULL;
> +
> +     tlb_addr = mem->start + (index << IO_TLB_SHIFT);
> +     *dma_handle = phys_to_dma_unencrypted(dev, tlb_addr);
> +
> +     if (!dev_is_dma_coherent(dev)) {
> +             unsigned long pfn = PFN_DOWN(tlb_addr);
> +
> +             /* remove any dirty cache lines on the kernel alias */
> +             arch_dma_prep_coherent(pfn_to_page(pfn), size);

Can we hook in at a somewhat lower level in the dma-direct code so that all
the remapping in dma-direct can be reused instead of duplicated?  That
also becomes important if we want to use non-remapping uncached support,
e.g. on mips or x86, or the direct changing of the attributes that Will
planned to look into for arm64.
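
Just to illustrate where such a hook could sit (swiotlb_alloc_pages() is a
made-up name, and this is a sketch rather than a concrete patch): if the
per-device pool only supplied the pages inside dma-direct's own page
allocator, the existing remapping / attribute handling further down in
dma_direct_alloc() would be reused instead of duplicated here:

    static struct page *__dma_direct_alloc_pages(struct device *dev,
                    size_t size, gfp_t gfp)
    {
            if (dev->dma_io_tlb_mem)
                    return swiotlb_alloc_pages(dev, size);

            /* ... existing CMA / buddy allocation path ... */
    }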



 

