Wednesday 20 July 2011 12:50:25 Ian Campbell wrote:
> On Wed, 2011-07-20 at 10:52 +0100, Wei Wang2 wrote:
> > On Tuesday 19 July 2011 16:14:31 George Dunlap wrote:
> > > Wei,
> > >
> > > Can you be more specific about which BIOSes behave poorly with
> > > per-device intremap tables, and why?
> > We found that, in some cases, a SATA device uses different device IDs for
> > DMA remapping and interrupt remapping. Some early BIOSes cannot handle
> > this situation correctly, so if the SATA device's DMA device ID is used to
> > look up the device table entry for the intremap table, and the intremap
> > table is configured per-device, the SATA device won't get the right table.
> Was this issue present in production BIOSes or do you mean early as in
> pre-production? IOW can we drop support for the non-shared remapping table
> altogether, or do we need to fix things in this mode to force the IDT to
> be identical across CPUs (either by re-sharing the IDT in that case, ick,
> or by enforcing that the contents are the same for devices with this
> issue)?
> OOI was the issue a confusion between the SATA PCI device and the legacy
> PCI IDE facet of the same device?
Yes, using a shared intremap table is the workaround for this issue. Ideally,
the BIOS should create two IVRS entries for SATA devices in IDE combined mode,
one for DMA and the other for interrupts. But that setup is not strictly
compatible with the IOMMU specification, so recent BIOSes should have IDE
combined mode disabled in this case. I therefore believe that removing the
global table is safe from now on. I could send patches.
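To make the failure mode above concrete, here is a minimal sketch of the lookup mismatch. All names (`DMA_DEVID`, `make_device_table`, and the table labels) are hypothetical illustrations; the real AMD IOMMU device table is a hardware structure, not Python:

```python
# Suppose a combined-mode SATA controller emits DMA with one device ID
# but tags its interrupts with a different one.
DMA_DEVID = 0x10
INTR_DEVID = 0x11

def make_device_table(per_device):
    """Each device-table entry points at an interrupt remapping table."""
    if per_device:
        # Per-device mode: every device ID gets its own intremap table.
        return {DMA_DEVID: "table-A", INTR_DEVID: "table-B"}
    # Shared mode: every device ID points at the same global table.
    return {DMA_DEVID: "global-table", INTR_DEVID: "global-table"}

def intremap_table_used(table, devid):
    # The IOMMU selects an intremap table via the device-table entry
    # for whichever device ID the lookup uses.
    return table[devid]

perdev = make_device_table(per_device=True)
shared = make_device_table(per_device=False)

# Per-device mode: a lookup via the DMA device ID lands in table-A,
# but the interrupt's entries live in table-B -- interrupts are lost.
assert intremap_table_used(perdev, DMA_DEVID) != perdev[INTR_DEVID]
# Shared mode: the mismatch is harmless by construction.
assert intremap_table_used(shared, DMA_DEVID) == shared[INTR_DEVID]
```

This is why sharing one table side-steps the BIOS bug: whichever device ID the lookup uses, it resolves to the same table.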
> > > The problem with a global intremap table is that, AFAICT, it's not
> > > fundamentally compatible with per-cpu IDTs. With per-cpu IDTs,
> > > different devices may end up with interrupts mapped to different cpus
> > > but the same vector (i.e., device A mapped to cpu 9 vector 67, device B
> > > mapped to cpu 12 vector 67). This is by design; the whole point of
> > > the per-cpu IDTs is to avoid restricting the number of IRQs to the
> > > number of vectors. But it seems that the intremap table only maps
> > > vectors, not destination IDs; so in the example above, both devices'
> > > interrupts would end up being remapped to the same place, causing one
> > > device driver to get both sets of interrupts, and the other to get
> > > none.
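George's collision can be sketched in a few lines. This is a toy model, not Xen code; the function and device names are made up for illustration:

```python
def remap_key_shared(cpu, vector):
    # Global table: the lookup is indexed by vector alone,
    # so the destination CPU is ignored.
    return vector

def remap_key_perdev(device, cpu, vector):
    # Per-device table: each device indexes its own table, so the
    # same vector on different CPUs cannot collide across devices.
    return (device, vector)

# George's example: device A on cpu 9 vector 67, device B on cpu 12 vector 67.
a = ("devA", 9, 67)
b = ("devB", 12, 67)

# Shared table: both interrupts select the same remapping entry,
# so one driver sees both sets of interrupts and the other sees none.
assert remap_key_shared(a[1], a[2]) == remap_key_shared(b[1], b[2])
# Per-device tables keep the two interrupts distinct.
assert remap_key_perdev(*a) != remap_key_perdev(*b)
```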
> > Yes, obviously a problem... With a shared intremap table, devices using
> > the same vector and delivery mode will end up at the same remapping entry.
> > Are per-cpu IDTs enabled by default in Xen?
> I didn't think it was even optional these days, but I didn't check.
> > > Do I understand correctly? If so, it seems like we should switch to
> > > per-device intremap tables by default; and if we're using a global
> > > intremap table, we need to somehow make sure that vectors are not
> > > shared across cpus.
> > I agree to using the per-device table by default, since the BIOS issue has
> > been fixed and the per-device table also has some security advantages.
> > Thanks,
> > Wei
> > > -George
> > >
> > > On Wed, Oct 28, 2009 at 4:32 PM, Wei Wang2 <wei.wang2@xxxxxxx> wrote:
> > > > Using a global interrupt remapping table shared by all devices has
> > > > better compatibility with certain old BIOSes. A per-device interrupt
> > > > remapping table can still be enabled by using a new parameter,
> > > > "amd-iommu-perdev-intremap". Thanks,
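For readers trying this, the option named in the patch is a Xen hypervisor command-line parameter; a boot entry might look like the following (the surrounding GRUB layout is illustrative, and only the `amd-iommu-perdev-intremap` token comes from the patch itself):

```text
# /boot/grub/menu.lst fragment (layout illustrative)
kernel /xen.gz iommu=1 amd-iommu-perdev-intremap
module /vmlinuz-2.6 root=/dev/sda1 ro
```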
> > > > Wei
> > > >
> > > > Signed-off-by: Wei Wang <wei.wang2@xxxxxxx>
> > > > --
> > > > AMD GmbH, Germany
> > > > Operating System Research Center
> > > >
> > > > Legal Information:
> > > > Advanced Micro Devices GmbH
> > > > Karl-Hammerschmidt-Str. 34
> > > > 85609 Dornach b. München
> > > >
> > > > Geschäftsführer: Andrew Bowd, Thomas M. McCoy, Giuliano Meroni
> > > > Sitz: Dornach, Gemeinde Aschheim, Landkreis München
> > > > Registergericht München, HRB Nr. 43632
> > > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > > > http://lists.xensource.com/xen-devel