[Xen-devel] [PATCH V2] xen-swiotlb: exchange memory with Xen only when pages are contiguous
Commit 4855c92dbb7 ("xen-swiotlb: fix the check condition for
xen_swiotlb_free_coherent") only fixed the memory address check
condition in xen_swiotlb_free_coherent(); when the memory was not
physically contiguous it could still be exchanged with Xen via
xen_destroy_contiguous_region(), which led to a kernel panic.

The correct conditions for making the Xen hypercall to revert the
memory back from its 32-bit pool are:
1) Above its DMA bit mask (for example, 32-bit devices can only
   address up to 4GB, and we may want 4GB+2K), and
2) It is physically contiguous.

Thank you Boris for pointing it out.

Fixes: 4855c92dbb7 ("xen-swiotlb: fix the check condition for xen_swiotlb_free_coherent")
Signed-off-by: Joe Jin <joe.jin@xxxxxxxxxx>
Reported-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
Cc: John Sobecki <john.sobecki@xxxxxxxxxx>
---
 drivers/xen/swiotlb-xen.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index f5c1af4ce9ab..aed92fa019f9 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (((dev_addr + size - 1 <= dma_mask)) ||
-	    range_straddles_page_boundary(phys, size))
+	if ((dev_addr + size - 1 <= dma_mask) &&
+	    !range_straddles_page_boundary(phys, size))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
-- 
2.17.1 (Apple Git-112)
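For readers tracing the logic change outside of the diff, below is a minimal
user-space sketch (not the actual swiotlb-xen code) that models the old and
new predicates. The helper names straddles(), old_check() and new_check()
are illustrative stand-ins introduced here, and range_straddles_page_boundary()
is only mimicked by a stub.

/*
 * Minimal user-space sketch modelling the predicate change in this patch.
 * It is NOT kernel code; the helpers below are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stub standing in for range_straddles_page_boundary(): returns true
 * when the buffer is NOT physically contiguous across Xen pages. */
static bool straddles(bool contiguous)
{
        return !contiguous;
}

/* Old condition from commit 4855c92dbb7: the '||' fires even for
 * non-contiguous memory, the case that leads to the reported panic. */
static bool old_check(uint64_t dev_addr, uint64_t size, uint64_t dma_mask,
                      bool contiguous)
{
        return (dev_addr + size - 1 <= dma_mask) || straddles(contiguous);
}

/* New condition from this patch: only call xen_destroy_contiguous_region()
 * when the address fits the mask AND the pages are contiguous. */
static bool new_check(uint64_t dev_addr, uint64_t size, uint64_t dma_mask,
                      bool contiguous)
{
        return (dev_addr + size - 1 <= dma_mask) && !straddles(contiguous);
}

int main(void)
{
        uint64_t mask = (1ULL << 32) - 1;       /* 32-bit DMA mask */

        /* Non-contiguous buffer below 4GB. */
        printf("old=%d new=%d\n",
               old_check(0x1000, 0x2000, mask, false),
               new_check(0x1000, 0x2000, mask, false));
        return 0;
}

Compiled and run, this prints "old=1 new=0" for a non-contiguous buffer below
the 4GB mask, i.e. only the old condition would have invoked
xen_destroy_contiguous_region() on memory that was never exchanged with Xen.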