
Re: [Xen-devel] [PATCH] intel/iommu: setup inclusive mappings before enabling iommu



On Fri, Sep 14, 2018 at 03:06:03AM -0600, Jan Beulich wrote:
> >>> On 14.09.18 at 10:02, <roger.pau@xxxxxxxxxx> wrote:
> > Or else it can lead to freezes when enabling the iommu on certain
> > Intel hardware:
> > 
> > [...]
> > (XEN) ELF: addresses:
> > (XEN)     virt_base        = 0xffffffff80000000
> > (XEN)     elf_paddr_offset = 0x0
> > (XEN)     virt_offset      = 0xffffffff80000000
> > (XEN)     virt_kstart      = 0xffffffff81000000
> > (XEN)     virt_kend        = 0xffffffff82953000
> > (XEN)     virt_entry       = 0xffffffff8274e180
> > (XEN)     p2m_base         = 0x8000000000
> > (XEN)  Xen  kernel: 64-bit, lsb, compat32
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2953000
> 
> I think you mean to tell us that output stops after these lines, but
> that's in no way made explicit.

Right, I meant that to be implied by the '...lead to freezes...'
wording in the sentence above, but I will make it explicit by adding a
<freezes> tag at the end of the log.

> > This restores the behavior before commit 66a9274cc3435 that changed
> > the order and enabled the iommu without having the inclusive mappings
> > setup.
> > 
> > Note that in order to restore previous behavior a new enable hook is
> > added to the iommu_ops struct that's only used by VT-d.
> 
> But your earlier series also extends inclusive mapping support to AMD -
> why is there no similar change needed there in case someone overrides
> the default of off in that case?

I don't see any iommu-enabling code in amd_iommu_hwdom_init, but
maybe I'm missing something (the same applies to the ARM SMMU).
AFAICT, for AMD the iommu is initialized in iommu_setup(), which
happens before Dom0 creation.
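
For context, the hook added by the patch is optional and only VT-d
implements it; roughly (a sketch, the neighbouring members are from
memory and may not match the tree exactly):

    struct iommu_ops {
        int (*init)(struct domain *d);
        void (*hwdom_init)(struct domain *d);
        /* New: enable the iommu once the hwdom mappings are set up.
         * AMD and ARM leave this NULL, so only VT-d runs it. */
        void (*enable)(void);
        /* ... remaining hooks unchanged ... */
    };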

> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -248,6 +248,9 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
> >      }
> >  
> >      arch_iommu_hwdom_init(d);
> > +
> > +    if ( hd->platform_ops->enable )
> > +        hd->platform_ops->enable();
> >  }
> 
> I realize this is nothing you change, but this is a __hwdom_init
> function, yet doing the enable only when constructing the "late hwdom"
> is imo too late (the same imo applies to the key handler registration,
> btw). I wonder what the original authors' thoughts here were...

Yes, I agree this is not ideal, but I didn't want to change the
behavior here, since this is a bugfix to restore the previous
functionality.

> While looking at this I also notice that dom0_construct_pvh()'s call to
> iommu_hwdom_init() is unconditional, while dom0_construct_pv()'s is
> conditional. Is this really intentional?

No, I don't think so. AFAICT it should have the same check as the one
present in the PV Dom0 builder.

But then other logic in the PVH Dom0 builder should also be moved
under such a check. For example, a PVH Dom0 that's not the hardware
domain shouldn't get a vIOAPIC, access to the native ACPI tables, or
the low 1MB, and it could even have a flat physmap, just like a PVH
DomU gets.

Note that such a check also seems to be missing in the ARM Dom0
builder.
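
Something like the following in dom0_construct_pvh() (a sketch; I'm
assuming the predicate is is_hardware_domain(), mirroring the PV
side):

    /* Only the hardware domain should get iommu mappings of host
     * resources; a non-hardware PVH Dom0 should look like a DomU. */
    if ( is_hardware_domain(d) )
        iommu_hwdom_init(d);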

> Furthermore, an option without new hook would look to be to call
> arch_iommu_hwdom_init() out of intel_iommu_hwdom_init(). This
> would probably require the function to bail when invoked a second
> time; I'm sure there is a way to recognize this fact.

This was my first approach, but I didn't like it because I would then
either have to move the call to arch_iommu_hwdom_init into the
different hwdom_init hooks, or, as you say, allow it to be called
multiple times (on Intel hw it would be called twice, while on other
hw only once). Adding the new hook seems the cleaner option IMO.
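
For reference, the bail-on-second-call variant would look roughly
like this (a sketch; the flag name is made up):

    void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
    {
        /* Tolerate being called both from iommu_hwdom_init() and
         * from intel_iommu_hwdom_init(); only do the work once. */
        static bool initialized;

        if ( initialized )
            return;
        initialized = true;

        /* ... existing inclusive mapping setup ... */
    }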

Thanks, Roger.
