
Re: [PATCH] xen/swiotlb: correct the check for xen_destroy_contiguous_region



On Tue, 28 Apr 2020, Jürgen Groß wrote:
> On 28.04.20 09:33, peng.fan@xxxxxxx wrote:
> > From: Peng Fan <peng.fan@xxxxxxx>
> > 
> > When booting Xen on i.MX8QM, we hit:
> > "
> > [    3.602128] Unable to handle kernel paging request at virtual address
> > 0000000000272d40
> > [    3.610804] Mem abort info:
> > [    3.613905]   ESR = 0x96000004
> > [    3.617332]   EC = 0x25: DABT (current EL), IL = 32 bits
> > [    3.623211]   SET = 0, FnV = 0
> > [    3.626628]   EA = 0, S1PTW = 0
> > [    3.630128] Data abort info:
> > [    3.633362]   ISV = 0, ISS = 0x00000004
> > [    3.637630]   CM = 0, WnR = 0
> > [    3.640955] [0000000000272d40] user address but active_mm is swapper
> > [    3.647983] Internal error: Oops: 96000004 [#1] PREEMPT SMP
> > [    3.654137] Modules linked in:
> > [    3.677285] Hardware name: Freescale i.MX8QM MEK (DT)
> > [    3.677302] Workqueue: events deferred_probe_work_func
> > [    3.684253] imx6q-pcie 5f000000.pcie: PCI host bridge to bus 0000:00
> > [    3.688297] pstate: 60000005 (nZCv daif -PAN -UAO)
> > [    3.688310] pc : xen_swiotlb_free_coherent+0x180/0x1c0
> > [    3.693993] pci_bus 0000:00: root bus resource [bus 00-ff]
> > [    3.701002] lr : xen_swiotlb_free_coherent+0x44/0x1c0
> > "
> > 
> > In xen_swiotlb_alloc_coherent, if !(dev_addr + size - 1 <= dma_mask) or
> > range_straddles_page_boundary(phys, size) is true, it will create a
> > contiguous region. So when freeing, we need to free the contiguous
> > region using the same check condition.
> 
> No, this will break PV guests on x86.
> 
> I think there is something wrong with your setup in combination with
> the ARM xen_create_contiguous_region() implementation.
> 
> Stefano?

Let me start by asking Peng a couple of questions:


Peng, could you please add a printk to check which one of the two
conditions is True for you?  Is it (dev_addr + size - 1 > dma_mask) or
range_straddles_page_boundary(phys, size)?

Is hwdev->coherent_dma_mask set for your DMA capable device?

Finally, is this patch supposed to fix the crash you are seeing? If not,
can you tell us where exactly the crash happens?



Juergen, keep in mind that xen_create_contiguous_region does nothing on
ARM because in dom0 guest_phys == phys. xen_create_contiguous_region
simply sets dma_handle to phys. Whatever condition caused the code to
take the xen_create_contiguous_region branch in
xen_swiotlb_alloc_coherent, it will also cause it to WARN in
xen_swiotlb_free_coherent.
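
For reference, the ARM side boils down to something like this (paraphrased
from arch/arm/xen/mm.c from memory, so take it as a sketch rather than a
verbatim quote):

    /* ARM dom0 is 1:1 mapped, so there is nothing to exchange or remap. */
    int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
                                     unsigned int address_bits,
                                     dma_addr_t *dma_handle)
    {
        *dma_handle = pstart;
        return 0;
    }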


range_straddles_page_boundary should never return True because
guest_phys == phys, so consecutive pages are always machine-contiguous.
That leaves us with the dma_mask check:

  dev_addr + size - 1 <= dma_mask

dev_addr is the dma_handle allocated by xen_alloc_coherent_pages.
dma_mask is either DMA_BIT_MASK(32) or hwdev->coherent_dma_mask.
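
For context, the decision in xen_swiotlb_alloc_coherent is roughly the
following (condensed and paraphrased, so the surrounding details are
approximate):

    /* Paraphrased from xen_swiotlb_alloc_coherent(): */
    if ((dev_addr + size - 1 <= dma_mask) &&
        !range_straddles_page_boundary(phys, size)) {
        /* The buffer is already usable: report its bus address. */
        *dma_handle = dev_addr;
    } else {
        /* Exchange it for a contiguous region below dma_mask and mark
         * the page so that the free path knows to undo the exchange. */
        if (xen_create_contiguous_region(phys, order,
                                         fls64(dma_mask), dma_handle) != 0)
            return NULL;
        SetPageXenRemapped(virt_to_page(ret));
    }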

The implementation of xen_alloc_coherent_pages has been recently changed
to use dma_direct_alloc.
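
If I remember the ARM header correctly, that wrapper is now little more
than the following (again a sketch, not a verbatim quote):

    static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
            dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
    {
        /* Delegate straight to the generic direct-mapping allocator. */
        return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
    }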


Christoph, does dma_direct_alloc respect hwdev->coherent_dma_mask if
present? Also, can it return highmem pages?



> Juergen
> 
> > 
> > Signed-off-by: Peng Fan <peng.fan@xxxxxxx>
> > ---
> >   drivers/xen/swiotlb-xen.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index b6d27762c6f8..ab96e468584f 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -346,8 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
> >  	/* Convert the size to actually allocated. */
> >  	size = 1UL << (order + XEN_PAGE_SHIFT);
> >  
> > -	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > -		     range_straddles_page_boundary(phys, size)) &&
> > +	if (((dev_addr + size - 1 > dma_mask) ||
> > +	     range_straddles_page_boundary(phys, size)) &&
> >  	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> >  		xen_destroy_contiguous_region(phys, order);
> > 
> 

 

