Re: [Xen-devel] [BUG] After upgrade to Xen 4.12.0 iommu=no-igfx



On Wed, Aug 07, 2019 at 09:08:58AM +0200, Jan Beulich wrote:
> On 06.08.2019 23:48, Roman Shaposhnik wrote:
> > On Tue, Aug 6, 2019 at 9:18 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> > > 
> > > On Fri, Aug 02, 2019 at 10:05:40AM +0200, Roger Pau Monné wrote:
> > > > On Thu, Aug 01, 2019 at 11:25:04AM -0700, Roman Shaposhnik wrote:
> > > > > This patch completely fixes the problem for me!
> > > > > 
> > > > > Thanks Roger! I'd love to see this in Xen 4.13
> > > > 
> > > > Thanks for testing!
> > > > 
> > > > It's still not clear to me why the previous approach didn't work, but
> > > > I think this patch is better because it removes the usage of
> > > > {set/clear}_identity_p2m_entry from PV domains. I will submit this
> > > > formally now.
> > > 
> > > Sorry to bother again, but since we still don't understand why the
> > > previous fix didn't work for you, and I can't reproduce this with my
> > > hardware, could you give the attached patch a try?
> > 
> > No worries -- and thanks for helping to get it over the finish line --
> > this is much appreciated!
> > 
> > I'm happy to say that this latest patch is also working just fine. So
> > I guess this is the one that's going to land in Xen 4.13?
> 
> Not necessarily - the other patch is also a candidate, but its
> description would need to explain what was actually wrong.
> 
> > > AFAICT the only difference between the non-working version and the
> > > working version is the flush, so I've added it here.
> 
> Now I'm afraid I still can't draw a helpful conclusion from Roman's
> successful test: intel_iommu_hwdom_init(), after having called
> setup_hwdom_rmrr(), calls iommu_flush_all() (with one other,
> seemingly innocent call in between). The only conclusion I _could_
> draw is that iommu_flush_all() doesn't do what its name says. Which
> would be quite bad. But
> 
> [orig]
> iommu_flush_all()
> -> iommu_flush_iotlb_global(flush_non_present_entry=0)
> -> flush->iotlb(DMA_TLB_GLOBAL_FLUSH, flush_non_present_entry=0)
> 
> [patch]
> iommu_flush_iotlb_all()
> -> iommu_flush_iotlb(dma_old_pte_present=0, page_count=0)
> -> iommu_flush_iotlb_dsi(flush_non_present_entry=0)
> -> flush->iotlb(DMA_TLB_DSI_FLUSH, flush_non_present_entry=0)
> 
> suggests to me that the former (as one would infer from the names) is
> the more thorough flush. I must be overlooking something ...

I went over the IOTLB queued invalidation code and it seems fine to
me; I haven't been able to spot any issues with it. As you say above,
the only difference is the flush type (DSI vs GLOBAL).
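
As an aside, a minimal toy model of the semantic difference between
the two granularities (all names below are invented for illustration;
the real code programs the hardware through the flush->iotlb() hook):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy IOTLB: each cached entry is tagged with the domain id (DID)
 * it was created for.  Purely illustrative, not Xen code. */
struct toy_iotlb_entry {
    bool valid;
    uint16_t did;    /* domain id tag */
    uint64_t dfn;    /* input frame of the cached translation */
};

#define TOY_IOTLB_ENTRIES 64
static struct toy_iotlb_entry toy_iotlb[TOY_IOTLB_ENTRIES];

/* DMA_TLB_GLOBAL_FLUSH semantics: drop every cached entry, no
 * matter which domain it is tagged with. */
static void toy_flush_global(void)
{
    memset(toy_iotlb, 0, sizeof(toy_iotlb));
}

/* DMA_TLB_DSI_FLUSH semantics: drop only the entries tagged with
 * the given domain id; entries for other domains survive. */
static void toy_flush_dsi(uint16_t did)
{
    for ( unsigned int i = 0; i < TOY_IOTLB_ENTRIES; i++ )
        if ( toy_iotlb[i].valid && toy_iotlb[i].did == did )
            toy_iotlb[i].valid = false;
}

If either the DID tagging or the DSI handling were subtly wrong
somewhere, a DSI flush could leave stale entries behind where a
global flush would not, which would fit the symptoms.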

Again, just to make sure it's actually the flush type (and nothing in
between) that makes this work or break, could you give the patch below
a try?

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fef97c82f6..3605614aaf 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1341,7 +1341,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
                                 IOMMUF_readable | IOMMUF_writable);
@@ -1432,7 +1432,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !need_iommu_pt_sync(d) )
+        if ( !has_iommu_pt(d) )
             return 0;
         return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
     }
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5d72270c5b..885185ad09 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2026,7 +2026,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     mrmrr->count = 1;
     list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
 
-    return 0;
+    return iommu_flush_all();
 }
 
 static int intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
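
To spell out the net effect of the three hunks, here is a rough
sketch in pseudo-Xen C (it uses Xen-internal names, so it only makes
sense in-tree; error handling, locking and the mapped_rmrrs
bookkeeping are stripped):

/*
 * Sketch only: what rmrr_identity_mapping() effectively does for the
 * hardware domain once this patch is applied.
 */
static int sketch_rmrr_identity_mapping(struct domain *d,
                                        unsigned long base_pfn,
                                        unsigned long end_pfn)
{
    for ( ; base_pfn < end_pfn; base_pfn++ )
    {
        /*
         * With the p2m.c hunks, this reaches iommu_legacy_map()
         * whenever the domain has IOMMU page tables at all
         * (has_iommu_pt()), not only when software must keep them
         * in sync (need_iommu_pt_sync()).
         */
        int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw, 0);

        if ( err )
            return err;
    }

    /*
     * And the full IOTLB flush now happens right here, instead of
     * relying solely on the iommu_flush_all() issued later by
     * intel_iommu_hwdom_init().
     */
    return iommu_flush_all();
}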

