
Re: [PATCH 2/2] xen: swiotlb: Implement map_resource callback



Hi Stefano,

On 5/2/25 7:20 PM, Stefano Stabellini wrote:
> +Christoph
> 
> On Fri, 2 May 2025, John Ernberg wrote:
>> Needed by the eDMA v3 DMA engine found in iommu-less SoCs such as iMX8QXP
>> to be able to perform DMA operations as a Xen Hardware Domain, which needs
>> to be able to do DMA in MMIO space.

Self-note: The above part of the commit message is a disaster and should
read something like:

Needed by SoCs without an IOMMU (such as the iMX8QXP and its eDMA v3
engine) that need to be able to perform DMA operations in MMIO space.

>>
>> The callback implementation is basically the same as the one for direct
>> mapping of resources, except this also takes into account Xen page
>> mappings.
>>
>> There is nothing to do for unmap, just like with direct, so the unmap
>> callback is not needed.
>>
>> Signed-off-by: John Ernberg <john.ernberg@xxxxxxxx>
>>
>> ---
>>
>> I originally exported dma_direct_map_resource() and used that which
>> appeared to work, but it felt like not checking Xen page mappings wasn't
>> fully correct and I went with this. But if dma_direct_map_resource() is
>> the correct approach here then I can submit that instead.
>> ---
>>   drivers/xen/swiotlb-xen.c | 15 +++++++++++++++
>>   1 file changed, 15 insertions(+)
>>
>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>> index ef56a2500ed6..0f02fdd9128d 100644
>> --- a/drivers/xen/swiotlb-xen.c
>> +++ b/drivers/xen/swiotlb-xen.c
>> @@ -285,6 +285,20 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>>                                           attrs, pool);
>>   }
>>
>> +static dma_addr_t xen_swiotlb_map_resource(struct device *dev, phys_addr_t phys,
>> +                                        size_t size, enum dma_data_direction dir,
>> +                                        unsigned long attrs)
>> +{
>> +     dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
> 
> Yes, we need the xen_phys_to_dma call here. This is one of the reasons I
> don't think we can use dma_direct_map_resource() to implement
> map_resource
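
For reference, my reading of dma_direct_map_resource() (paraphrased from
kernel/dma/direct.c, so take this as a sketch rather than the exact code)
is that it uses the physical address as the bus address directly:

    dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
                    size_t size, enum dma_data_direction dir, unsigned long attrs)
    {
            /* No Xen-aware translation, paddr is used as the bus address */
            dma_addr_t dma_addr = paddr;

            if (unlikely(!dma_capable(dev, dma_addr, size, false)))
                    return DMA_MAPPING_ERROR;

            return dma_addr;
    }

so the pfn-to-bfn translation via xen_phys_to_dma() is exactly what would
be skipped by reusing it, which is why I went with a separate
implementation.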
> 
> 
>> +     BUG_ON(dir == DMA_NONE);
>> +
>> +     if (!dma_capable(dev, dev_addr, size, false))
>> +             return DMA_MAPPING_ERROR;
> 
> But here you also need to check that phys+size doesn't cross a page
> boundary. You need to call range_straddles_page_boundary, like we do at
> the beginning of xen_swiotlb_map_page.
> 
> If it is possible to cross a page boundary, then we need to basically to
> do the same thing here as we do in xen_swiotlb_map_page where we check
> for:
> 
>          if (dma_capable(dev, dev_addr, size, true) &&
>              !range_straddles_page_boundary(phys, size) &&
>                  !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
>                  !is_swiotlb_force_bounce(dev))
>                  goto done;
> 
> if all is well, we go with the native path, otherwise we bounce on
> swiotlb-xen.
> 

I'll preface this by saying it's my first deep dive into DMA, so the 
following may just be me missing something:

I'm not sure a straddles-page-boundary bounce path makes sense here.

This mapping is not for a RAM-backed address. In the eDMA case on the 
iMX8QXP, the `phys` coming in here is the address of a register. I cannot 
see how a swiotlb bounce would fix anything if the range straddles a page 
boundary. At most I can see a WARN_ON with a DMA_MAPPING_ERROR return if 
such a check were true.
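
Something along these lines is what I have in mind there (just an untested
sketch on top of this patch, reusing the existing helpers in
drivers/xen/swiotlb-xen.c):

    static dma_addr_t xen_swiotlb_map_resource(struct device *dev, phys_addr_t phys,
                                               size_t size, enum dma_data_direction dir,
                                               unsigned long attrs)
    {
            dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);

            BUG_ON(dir == DMA_NONE);

            /* MMIO cannot be bounced, so fail loudly instead of falling
             * back to swiotlb like xen_swiotlb_map_page() does.
             */
            if (WARN_ON(range_straddles_page_boundary(phys, size)))
                    return DMA_MAPPING_ERROR;

            if (!dma_capable(dev, dev_addr, size, false))
                    return DMA_MAPPING_ERROR;

            return dev_addr;
    }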

Let's say you want to do a DEV_TO_MEM transfer and the check decides you 
need to bounce it: you would be bouncing an RX register address. I cannot 
see how that DMA would ever end up doing the expected thing.

The eDMA engine manages the RX/TX registers for an IP block and moves the 
data between them and RAM, where the RAM buffers are mapped separately by 
dma_map_page() (and family). Those mappings can be bounced through swiotlb 
via the map_page callback with no problem.
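
To illustrate the split, this is roughly how an eDMA client driver ends up
with the two kinds of mappings (the names and sizes below are made up for
the example, not taken from a real driver):

    struct device *edma_chan_dev;   /* the DMA channel's struct device */
    dma_addr_t buf_dma, fifo_dma;

    /* RAM side: a regular streaming mapping, which swiotlb-xen may bounce
     * through the map_page path without any problem.
     */
    buf_dma = dma_map_single(edma_chan_dev, buf, buf_len, DMA_FROM_DEVICE);

    /* MMIO side: the peripheral's RX FIFO register, mapped through
     * map_resource. Bouncing this address would point the engine at a
     * swiotlb buffer instead of the actual register, which can never do
     * the right thing.
     */
    fifo_dma = dma_map_resource(edma_chan_dev, fifo_phys, sizeof(u32),
                                DMA_BIDIRECTIONAL, 0);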

Best regards // John Ernberg

 

