
Re: Disable swiotlb for Dom0



Hi,

On 11/08/2021 15:13, Roman Skakun wrote:
> Also, I added the log in xen_swiotlb_detect() and can see that swiotlb
> is still used (other devices within dom0 use it too) when dom0 is direct
> mapped:
>
> [    1.870363] xen_swiotlb_detect() dev: rcar-fcp,
> XENFEAT_direct_mapped, use swiotlb
> [    1.878352] xen_swiotlb_detect() dev: rcar-fcp,
> XENFEAT_direct_mapped, use swiotlb
> [    1.886309] xen_swiotlb_detect() dev: rcar-fcp,
> XENFEAT_direct_mapped, use swiotlb
>
This means that all devices are using the swiotlb-xen DMA ops.
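
FWIW, the selection is keyed on the XENFEAT_direct_mapped / XENFEAT_not_direct_mapped flags. Roughly, the Arm helper looks like this (paraphrased from memory, so treat it as a sketch rather than verbatim upstream code):

  /* Sketch of the Arm xen_swiotlb_detect() helper. Identifiers come from
   * <xen/xen.h> and <xen/features.h>. */
  static inline int xen_swiotlb_detect(void)
  {
          if (!xen_domain())
                  return 0;
          if (xen_feature(XENFEAT_direct_mapped))
                  return 1;
          /* Legacy hypervisor: assume the initial domain is direct mapped. */
          if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
                  return 1;
          return 0;
  }

So, with a direct mapped dom0, the first feature check fires and every device ends up with the swiotlb-xen DMA ops, which matches your log.
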
> By the way, before applying these patches, dom0 always used the swiotlb-xen
> ops for the initial domain by design.

> This is expected because your domain is direct mapped.

Maybe I don't understand correctly: Stefano reported the same issue when dom0 is not direct mapped,
but I have a direct-mapped dom0 and the problem still exists.

I am not entirely sure why you think this is the same problem as Stefano's. He asked to bypass the swiotlb, but AFAIK, this is not because the buffer gets bounced.

Instead, it is because swiotlb-xen on Arm has been relying on its RAM to be direct-mapped (GFN == MFN). With cache coloring, the memory will not be direct-mapped, hence it will be broken.


Ok. Would you be able to provide more information on where the dom0
memory is allocated and the list of host RAM?

Host memory:
DRAM:  7.9 GiB
Bank #0: 0x048000000 - 0x0bfffffff, 1.9 GiB
Bank #1: 0x500000000 - 0x57fffffff, 2 GiB
Bank #2: 0x600000000 - 0x67fffffff, 2 GiB
Bank #3: 0x700000000 - 0x77fffffff, 2 GiB

dom0 memory map:
(XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
(XEN) BANK[0] 0x00000048000000-0x00000050000000 (128MB)
(XEN) BANK[1] 0x00000058000000-0x000000c0000000 (1664MB)
(XEN) BANK[2] 0x00000510000000-0x00000520000000 (256MB)

Thanks! So you have some memory assigned above 4GB to dom0 as well.

We retrieved dev_addr (0x64b1d0000) + size > the 32-bit mask, and the fcp driver
can only use addresses within the 32-bit boundary, but that's a consequence.

Ok. So your device is only capable of 32-bit DMA. Is that correct?

Yes.
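
Right. Then the decision to bounce is essentially a reachability check of the DMA address against the device's mask, along these lines (an illustrative sketch of the dma_capable()-style logic in the DMA core, not the exact upstream code):

  /* Illustrative sketch: can the device reach [addr, addr + size)?
   * With a 32-bit DMA mask, 0x64b1d0000 + size - 1 > 0xffffffff, so the
   * check fails and swiotlb-xen decides to bounce the buffer. */
  static bool addr_reachable(struct device *dev, dma_addr_t addr, size_t size)
  {
          u64 mask = min_not_zero(*dev->dma_mask, dev->bus_dma_limit);

          return addr + size - 1 <= mask;
  }
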

> I think the main reason for using the bounce buffer is the MFN address, not the
> DMA phys address.
>
I don't understand this sentence. Can you clarify it?

This address looks like the MFN because I'm using mapped grant tables from the domU.

I've added the log and see the following:
with swiotlb:
[   78.620386] dma_map_sg_attrs() dev: rcar-du swiotlb, sg_page: fffffe0001b80000, page_to_phy: b6000000, xen_phys_to_dma: 64b1d0000

without swiotlb (worked fine):
[   74.456426] dma_map_sg_attrs() dev: rcar-du direct map, sg_page: fffffe0001b80000, page_to_phy: b6000000, xen_phys_to_dma:b6000000

I guess we need to figure out why we get a normal dom0 DMA address (0xb6000000) without swiotlb and 0x64b1d0000 when using swiotlb.

So 0xb6000000 is most likely the GFN used to map the grant from the domU.

swiotlb-xen on Arm will convert it to the MFN because it is not aware whether the device is behind an IOMMU.
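
To illustrate what I mean (a simplified sketch; lookup_foreign_mapping() is a made-up name for the P2M lookup swiotlb-xen does on Arm):

  /* Simplified sketch of the GFN -> bus frame translation on Arm. A page
   * backing a grant mapped from another domain is tracked by the P2M code,
   * and the translation returns the granting domain's MFN (e.g. frame
   * 0xb6000 -> 0x64b1d0 here), while an ordinary dom0 page is returned
   * unchanged because dom0 is direct mapped (GFN == MFN). */
  static unsigned long gfn_to_bus(unsigned long gfn)
  {
          unsigned long mfn = lookup_foreign_mapping(gfn); /* hypothetical helper */

          return (mfn != INVALID_P2M_ENTRY) ? mfn : gfn;
  }
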

As the address is too high to be handled by the device, swiotlb will try to bounce it. I think it is correct to bounce the page, but I am not sure why it can't here. What is the size of the DMA transaction?
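
For reference, the map path then makes a decision roughly like this (sketch only, reusing the hypothetical helpers above; the real logic lives in xen_swiotlb_map_page()):

  /* Simplified sketch of the swiotlb-xen mapping decision. */
  static dma_addr_t map_for_device(struct device *dev, phys_addr_t phys,
                                   size_t size)
  {
          dma_addr_t dev_addr = ((dma_addr_t)gfn_to_bus(PFN_DOWN(phys)) << PAGE_SHIFT)
                                + offset_in_page(phys);

          if (addr_reachable(dev, dev_addr, size))
                  return dev_addr;        /* device can reach it, no bounce */

          /*
           * Otherwise the data is copied into the bounce buffer. A single
           * mapping larger than the swiotlb segment size (256KiB by default)
           * cannot be bounced, which is why the size of the transaction
           * matters here.
           */
          return bounce_into_swiotlb(dev, phys, size); /* hypothetical wrapper */
  }
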

However, even if you disable xen-swiotlb, you are likely going to face the same issue sooner or later because the grant can be mapped anywhere in the memory of dom0 (the balloon code doesn't seem to restrict where the memory can be allocated). So it is possible for the grant to be mapped in the dom0 memory above 4GB.

Oleksandr is also looking to provide a safe range which would be outside of the existing RAM. So, I believe, you will have to bounce the DMA buffer unless we always force the grant/foreign mapping to be mapped below 4GB.

Cheers,

--
Julien Grall



 

