
Re: [Xen-devel] [PATCH 00/11] of: Fix DMA configuration for non-DT masters



On 2019-09-26 11:44 am, Nicolas Saenz Julienne wrote:
Robin, have you looked into supporting multiple dma-ranges? It's the
next thing we need for BCM STB's PCIe. I'll have a go at it myself if
nothing is in the works already.

Multiple dma-ranges, as far as configuring inbound windows goes, should
work already, other than the bug when there's any parent translation.
But if you mean supporting multiple DMA offsets and masks per device in
the DMA API, there's nothing in the works yet.
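
For context, that's the path where a host-bridge driver walks the
dma-ranges entries and programs one inbound translation window per
entry. A toy sketch of that idea (plain C; the entries and the
program_inbound_window() helper are invented here, a real driver would
be writing its iATU/window registers instead):

#include <stdint.h>
#include <stdio.h>

/* One dma-ranges entry: child (bus) address, parent (CPU) address, size */
struct dma_ranges_entry {
        uint64_t bus_addr;
        uint64_t cpu_addr;
        uint64_t size;
};

/* Made-up example: two windows with different bus<->CPU offsets */
static const struct dma_ranges_entry dma_ranges[] = {
        { 0x000000000, 0x00000000, 0x80000000 },
        { 0x100000000, 0x80000000, 0x80000000 },
};

static void program_inbound_window(int idx, const struct dma_ranges_entry *e)
{
        /* stand-in for the real hardware programming */
        printf("window %d: bus 0x%llx -> cpu 0x%llx, size 0x%llx\n", idx,
               (unsigned long long)e->bus_addr,
               (unsigned long long)e->cpu_addr,
               (unsigned long long)e->size);
}

int main(void)
{
        for (unsigned int i = 0; i < sizeof(dma_ranges) / sizeof(dma_ranges[0]); i++)
                program_inbound_window(i, &dma_ranges[i]);
        return 0;
}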

Sorry, I meant supporting multiple DMA offsets[1]. I think I could
still make it work with a single DMA mask, though.

The main problem for supporting that case in general is the disgusting carving up of the physical memory map you may have to do to guarantee that a single buffer allocation cannot ever span two windows with different offsets. I don't think we ever reached a conclusion on whether that was even achievable in practice.
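
To make that concrete, here's a rough sketch (plain C, not kernel code;
the two windows and their offsets are made up) of what a per-range
phys-to-DMA lookup has to do, and why a buffer crossing the boundary
between two windows has no single DMA address:

#include <stdint.h>
#include <stdio.h>

struct dma_range {
        uint64_t cpu_start;     /* CPU physical base of the window */
        uint64_t dma_start;     /* bus address the device uses for it */
        uint64_t size;
};

/* Made-up layout: two CPU-contiguous windows whose bus addresses are not
 * contiguous, i.e. two different CPU->bus offsets. */
static const struct dma_range ranges[] = {
        { .cpu_start = 0x00000000, .dma_start = 0x000000000, .size = 0x80000000 },
        { .cpu_start = 0x80000000, .dma_start = 0x100000000, .size = 0x80000000 },
};

/* A buffer only has a usable DMA address if it sits entirely inside one
 * window - the invariant the allocator would have to guarantee by carving
 * up the physical memory map. */
static int phys_to_dma_range(uint64_t cpu, uint64_t len, uint64_t *dma)
{
        for (unsigned int i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
                const struct dma_range *r = &ranges[i];

                if (cpu >= r->cpu_start && cpu + len <= r->cpu_start + r->size) {
                        *dma = cpu - r->cpu_start + r->dma_start;
                        return 0;
                }
        }
        return -1;      /* crosses a window boundary or is outside all windows */
}

int main(void)
{
        uint64_t dma;

        /* Fine: entirely inside window 0 */
        if (!phys_to_dma_range(0x7fff0000, 0x1000, &dma))
                printf("ok: dma = 0x%llx\n", (unsigned long long)dma);

        /* No good: the second half would land in window 1, which has a
         * different offset, so there is no single DMA address for it. */
        if (phys_to_dma_range(0x7ffff000, 0x2000, &dma))
                printf("buffer straddles two windows\n");

        return 0;
}

The only way to keep that invariant is to ensure allocations can never
cross a window boundary, which is exactly the carving-up of the
physical map described above.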

There's also the in-between step of making of_dma_get_range() return a
size based on all the dma-ranges entries rather than only the first one
- otherwise, something like [1] can lead to pretty unworkable default
masks. We implemented that when doing acpi_dma_get_range(), it's just
that the OF counterpart never caught up.

Right. I suppose we assume any holes in the ranges are addressable by
the device but won't get used for other reasons (such as no memory
there). However, to be correct, the range of the dma offset plus mask
would need to be within the min start and max end addresses. IOW,
while we need to round up (0xa_8000_0000 - 0x2c1c_0000) to the next
power of 2, the 'correct' thing to do is round down.
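
As a rough illustration of both points (plain C, not the actual
of_dma_get_range() code; the two windows are invented around the
addresses quoted above), deriving the size from the union of all
dma-ranges entries and then rounding down rather than up would look
something like:

#include <stdint.h>
#include <stdio.h>

struct entry { uint64_t dma_start; uint64_t size; };

/* Hypothetical dma-ranges: two windows with a hole between them,
 * spanning 0x2c1c_0000 .. 0xa_8000_0000 overall. */
static const struct entry entries[] = {
        { 0x2c1c0000,  0x40000000 },
        { 0x800000000, 0x280000000 },
};

int main(void)
{
        uint64_t start = UINT64_MAX, end = 0;

        for (unsigned int i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
                if (entries[i].dma_start < start)
                        start = entries[i].dma_start;
                if (entries[i].dma_start + entries[i].size > end)
                        end = entries[i].dma_start + entries[i].size;
        }

        uint64_t span = end - start;                    /* 0xa_8000_0000 - 0x2c1c_0000 */
        int bits_up = 64 - __builtin_clzll(span - 1);   /* round up: 36 bits here */
        int bits_down = 63 - __builtin_clzll(span);     /* round down: 35 bits here */

        /* Rounding up lets start + mask run past the real end of the last
         * window; rounding down keeps it inside. */
        printf("span 0x%llx: up %d bits -> top 0x%llx, down %d bits -> top 0x%llx\n",
               (unsigned long long)span,
               bits_up, (unsigned long long)(start + (1ULL << bits_up) - 1),
               bits_down, (unsigned long long)(start + (1ULL << bits_down) - 1));
        return 0;
}

(__builtin_clzll is just the GCC/clang builtin, used here to keep the
example short.)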

IIUC I also have this issue on my list. The RPi4 PCIe block has an
integration bug that only allows DMA to the lower 3 GB. With dma-ranges
of size 0xc000_0000 you get a 32-bit DMA mask, which is not what you
need. So far I've faked it in the device tree, but I guess it'd be
better to add an extra check in of_dma_configure(), decrease the mask
and print some kind of warning stating that DMA addressing is
suboptimal.
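
For illustration, that check might look roughly like this (untested
plain C rather than a real of_dma_configure() change, helper name
invented): when the window end isn't a power-of-two boundary, shrink
the mask and warn.

#include <stdint.h>
#include <stdio.h>

/* Returns the number of mask bits that are guaranteed not to exceed the
 * window, warning when that means giving up part of it. */
static int safe_mask_bits(uint64_t dma_start, uint64_t size)
{
        uint64_t end = dma_start + size - 1;    /* highest reachable bus address */
        int bits = 64 - __builtin_clzll(end);   /* smallest mask that covers 'end' */

        /* If that mask over-claims (end is not 2^bits - 1), drop one bit so
         * we never hand out addresses the interconnect can't reach. */
        if (end != (bits == 64 ? ~0ULL : (1ULL << bits) - 1)) {
                bits--;
                fprintf(stderr,
                        "warning: DMA window 0x%llx..0x%llx not a power of two, limiting mask to %d bits (DMA addressing is suboptimal)\n",
                        (unsigned long long)dma_start,
                        (unsigned long long)end, bits);
        }
        return bits;
}

int main(void)
{
        /* The 3 GB case from above: a window of size 0xc000_0000 at bus address
         * 0 yields a 31-bit mask instead of an over-reaching 32-bit one. */
        printf("mask bits = %d\n", safe_mask_bits(0x0, 0xc0000000));
        return 0;
}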

Yeah, there's just no way for masks to describe that the device can drive all the individual bits, just not in certain combinations :(

The plan I have sketched out there is to merge dma_pfn_offset and bus_dma_mask into a "DMA range" descriptor, so we can then hang one or more of those off a device to properly cope with all these weird interconnects. Conceptually it feels pretty straightforward; I think most of the challenge is in implementing it efficiently. Plus there's the question of whether it could subsume dma_mask as well.
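
Purely to sketch the idea (structure and helper names invented here, not
an existing interface), such a descriptor would carry both pieces of
information at once, and phys_to_dma()/dma_to_phys() would walk a
per-device list of them instead of applying one global offset:

#include <stdint.h>
#include <stdio.h>

/* One "DMA range" descriptor: what dma_pfn_offset and bus_dma_mask encode
 * separately today, expressed per window. */
struct dma_range_desc {
        uint64_t cpu_start;     /* CPU physical base of the window */
        uint64_t dma_start;     /* bus address the device sees it at */
        uint64_t size;
};

/* Plays the role of dma_pfn_offset for this window (in bytes here) */
static int64_t range_offset(const struct dma_range_desc *r)
{
        return (int64_t)(r->cpu_start - r->dma_start);
}

/* Plays the role of the bus_dma_mask-style limit for this window */
static uint64_t range_dma_limit(const struct dma_range_desc *r)
{
        return r->dma_start + r->size - 1;
}

int main(void)
{
        /* Made-up window: 1 GB of RAM at CPU 0x8000_0000, seen by the device at 0 */
        struct dma_range_desc r = { 0x80000000, 0x0, 0x40000000 };

        printf("offset = 0x%llx, limit = 0x%llx\n",
               (unsigned long long)range_offset(&r),
               (unsigned long long)range_dma_limit(&r));
        return 0;
}

Whether the device's dma_mask could collapse into the same lookup is the
open question mentioned above.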

Robin.
