Re: [Xen-devel] Upstream Dom0 DRM problems regarding swiotlb



On Wed, Feb 13, 2019 at 2:16 PM Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
>
> On Wed, Feb 13, 2019 at 01:38:21PM -0500, Michael Labriola wrote:
> > On Wed, Feb 13, 2019 at 1:16 PM Michael Labriola
> > <michael.d.labriola@xxxxxxxxx> wrote:
> > >
> > > On Wed, Feb 13, 2019 at 11:57 AM Konrad Rzeszutek Wilk
> > > <konrad.wilk@xxxxxxxxxx> wrote:
> > > >
> > > > On Wed, Feb 13, 2019 at 09:09:32AM -0700, Jan Beulich wrote:
> > > > > >>> On 13.02.19 at 17:00, <michael.d.labriola@xxxxxxxxx> wrote:
> > > > > > On Wed, Feb 13, 2019 at 9:28 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> > > > > >> >>> On 13.02.19 at 15:10, <michael.d.labriola@xxxxxxxxx> wrote:
> > > > > >> > Ah, so this isn't necessarily Xen-specific but rather any
> > > > > >> > paravirtual guest?  That hadn't crossed my mind.  Is there an
> > > > > >> > easy way to find out if we're a pv guest in the need_swiotlb
> > > > > >> > conditionals?
> > > > > >>
> > > > > >> There's xen_pv_domain(), but I think xen_swiotlb would be more to
> > > > > >> the point if the check is already to be Xen-specific. There's no
> > > > > >> generic "is PV" predicate that I'm aware of.
> > > > > >
> > > > > > Well, that makes doing conditional code right more difficult.  I
> > > > > > assume since there isn't a generic predicate, and PV isn't new, that
> > > > > > its absence is by design?  To rein in the temptation to sprinkle
> > > > > > conditional code all over the kernel?  ;-)
> > > > >
> > > > > Well, with lguest gone, Xen is the only PV environment the kernel
> > > > > can run in, afaik at least. I guess to decide between the suggested
> > > > > options or the need for some abstracting macro (or yet something
> > > > > else), you'll really need to ask the driver maintainers. Or simply
> > > > > send a patch their way implementing one of them, and see what
> > > > > their reaction is.
> > > >
> > > > Could you try this out and see if it works and I will send it out:
> > > >
> > *snip*
> > >
> > > Yes, that works for me.  However, I feel like the conditional should
> > > be inside drm_get_max_iomem() instead of repeated after every call to
> > > it...  and is just checking xen_pv_domain() enough?  Jan made it
> > > sound like there were possibly other PV cases that would also still
> > > need swiotlb.
> >
> > How about this?  It strcmp's pv_info.name to see if we're on bare
> > metal, does the comparison in a single place, moves the bit-shifting
> > comparison into the function (simplifying the drm driver code), and
> > renames the function to better describe what it does.
>
> <nods> That looks much better.

Great!  Now the only question left is:  KVM, VMware, Xen PVH, Xen HVM,
and Xen PV all populate pv_info.  Do any of those other than Xen PV
*really* need swiotlb?  That's slightly over my head.  As written, my
patch would require swiotlb for all of them, because I was trying not
to be Xen-specific.
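
(For reference, if the answer ends up being "only Xen PV", the strcmp
could be narrowed to the predicate Jan mentioned earlier.  Untested
sketch; if I'm reading include/xen/xen.h right, xen_pv_domain()
compiles to 0 without CONFIG_XEN, so it wouldn't even need the #ifdef:

    #include <xen/xen.h>    /* for xen_pv_domain() */

    /* assumes only Xen PV guests need swiotlb unconditionally */
    if (xen_pv_domain())
        return true;

That would stop forcing swiotlb on KVM/VMware/HVM guests, at the risk
of missing some other PV case that really does need bouncing.)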

> Would love to see this posted upstream!

I'm assuming dri-devel is the appropriate place to post this?

> >
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> > index 73ad02aea2b2..328d45b8b2ec 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> > @@ -885,7 +885,7 @@ static int gmc_v6_0_sw_init(void *handle)
> >          pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> >          dev_warn(adev->dev, "amdgpu: No coherent DMA available.\n");
> >      }
> > -    adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> > +    adev->need_swiotlb = drm_need_swiotlb_for_dma(dma_bits);
> >
> >      r = gmc_v6_0_init_microcode(adev);
> >      if (r) {
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> > index 910c4ce19cb3..3d49eff28448 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> > @@ -1029,7 +1029,7 @@ static int gmc_v7_0_sw_init(void *handle)
> >          pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> >          pr_warn("amdgpu: No coherent DMA available\n");
> >      }
> > -    adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> > +    adev->need_swiotlb = drm_need_swiotlb_for_dma(dma_bits);
> >
> >      r = gmc_v7_0_init_microcode(adev);
> >      if (r) {
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> > index 747c068379dc..9247dd6316f1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> > @@ -1155,7 +1155,7 @@ static int gmc_v8_0_sw_init(void *handle)
> >          pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> >          pr_warn("amdgpu: No coherent DMA available\n");
> >      }
> > -    adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> > +    adev->need_swiotlb = drm_need_swiotlb_for_dma(dma_bits);
> >
> >      r = gmc_v8_0_init_microcode(adev);
> >      if (r) {
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > index f35d7a554ad5..89f3fe981ac5 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > @@ -989,7 +989,7 @@ static int gmc_v9_0_sw_init(void *handle)
> >          pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> >          printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
> >      }
> > -    adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> > +    adev->need_swiotlb = drm_need_swiotlb_for_dma(dma_bits);
> >
> >      if (adev->asic_type == CHIP_VEGA20) {
> >          r = gfxhub_v1_1_get_xgmi_info(adev);
> > diff --git a/drivers/gpu/drm/drm_memory.c b/drivers/gpu/drm/drm_memory.c
> > index d69e4fc1ee77..f22f6a0d20b3 100644
> > --- a/drivers/gpu/drm/drm_memory.c
> > +++ b/drivers/gpu/drm/drm_memory.c
> > @@ -35,6 +35,7 @@
> >
> >  #include <linux/highmem.h>
> >  #include <linux/export.h>
> > +#include <xen/xen.h>
> >  #include <drm/drmP.h>
> >  #include "drm_legacy.h"
> >
> > @@ -150,15 +151,24 @@ void drm_legacy_ioremapfree(struct drm_local_map *map, struct drm_device *dev)
> >  }
> >  EXPORT_SYMBOL(drm_legacy_ioremapfree);
> >
> > -u64 drm_get_max_iomem(void)
> > +bool drm_need_swiotlb_for_dma(int dma_bits)
> >  {
> >      struct resource *tmp;
> >      resource_size_t max_iomem = 0;
> >
> > +#ifdef CONFIG_PARAVIRT
> > +    /*
> > +     * Paravirtual guests need swiotlb regardless of the requested dma
> > +     * mask, so skip the max_iomem check below.
> > +     */
> > +    if (strcmp(pv_info.name, "bare hardware") != 0)
> > +        return true;
> > +#endif
> > +
> >      for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) {
> >          max_iomem = max(max_iomem,  tmp->end);
> >      }
> >
> > -    return max_iomem;
> > +    return max_iomem > ((u64)1 << dma_bits);
> >  }
> > -EXPORT_SYMBOL(drm_get_max_iomem);
> > +EXPORT_SYMBOL(drm_need_swiotlb_for_dma);
> > diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
> > index 59c8a6647ff2..7c8222d98bc3 100644
> > --- a/drivers/gpu/drm/radeon/radeon_device.c
> > +++ b/drivers/gpu/drm/radeon/radeon_device.c
> > @@ -1387,7 +1387,8 @@ int radeon_device_init(struct radeon_device *rdev,
> >          pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(32));
> >          pr_warn("radeon: No coherent DMA available\n");
> >      }
> > -    rdev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> > +    rdev->need_swiotlb = drm_need_swiotlb_for_dma(dma_bits);
> > +    pr_info("%s: need_swiotlb: %d\n", __func__, rdev->need_swiotlb);
> >
> >      /* Registers mapping */
> >      /* TODO: block userspace mapping of io register */
> > diff --git a/include/drm/drm_cache.h b/include/drm/drm_cache.h
> > index bfe1639df02d..070b44624ff9 100644
> > --- a/include/drm/drm_cache.h
> > +++ b/include/drm/drm_cache.h
> > @@ -38,7 +38,7 @@
> >  void drm_clflush_pages(struct page *pages[], unsigned long num_pages);
> >  void drm_clflush_sg(struct sg_table *st);
> >  void drm_clflush_virt_range(void *addr, unsigned long length);
> > -u64 drm_get_max_iomem(void);
> > +bool drm_need_swiotlb_for_dma(int dma_bits);
> >
> >
> >  static inline bool drm_arch_can_wc_memory(void)



-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel