
Re: [Xen-devel] Xen: ARM: Support for mapping ECAM PCIe Config Space Specified In Static ACPI Table



On Tue, 20 Dec 2016, Julien Grall wrote:
> Hi Jiandi,
> 
> On 20/12/2016 07:31, Jiandi An wrote:
> > On 12/19/16 07:11, Julien Grall wrote:
> > > 
> > > 
> > > On 19/12/2016 13:20, Jaggi, Manish wrote:
> > > > > On 16/12/2016 15:49, Julien Grall wrote:
> > > > > > On 14/12/16 08:00, Jiandi An wrote:
> > > > > > > Xen currently doesn't map ECAM space specified in static ACPI
> > > > > > > table.
> > > > > > > Seeking opinion on how this should be handled properly.
> > > > > > > Each root complex ECAM region takes up 64K 4KB pages (256MB).
> > > > > > > For some platforms there might be multiple root complexes.
> > > > > > > Is the plan to map all of them at once? Julien has mentioned
> > > > > > > that support for mapping ECAM may come when support for PCI
> > > > > > > passthrough is added, is that right? What mechanism will it be?
> > > > > > > Will Xen or dom0 be the one that parses the static ACPI tables
> > > > > > > and maps the ECAM space?
> > > > > > 
> > > > > > For performance reasons, each ECAM region would need to be mapped
> > > > > > in one go, so the stage-2 page table could take advantage of
> > > > > > superpages (they will mostly be 2MB).
> > > > > > 
> > > > > > Now, I don't think Xen should map the ECAM region in stage-2
> > > > > > beforehand. Not all the regions may be described in the MCFG, and
> > > > > > I would like to see a generic solution.
> > > > > > 
> > > > > > Looking at the code (see pci_ecam_create in drivers/pci/ecam.c),
> > > > > > ioremap is used. I believe the problem is the same for the two
> > > > > > other threads you sent ([1] and [2]).
> > > > > > 
> > > > > > So it might be good to look at hooking up a call to
> > > > > > XENMEM_add_to_physmap_range in ioremap.
> > > > > > 
> > > > > > Any opinions?
> > > > > 
> > > > > I thought a bit more about it and I realized we need to be cautious
> > > > > about how we proceed here.
> > > > > 
> > > > > DOM0 will have a mix of real devices and emulated devices (e.g. some
> > > > > parts of the GIC). For the emulated devices, DOM0 should not call
> > > > > XENMEM_add_to_physmap_range. However, DOM0 is not aware of what is
> > > > > emulated and what is not, so even the current approach (hooking up in
> > > > > the platform device code) seems fragile. We rely on Xen to say "this
> > > > > region cannot be mapped".
> > > > > 
> > > > Why not add support for parsing ACPI tables in Xen, from Linux, as we
> > > > parse DT?
> > > 
> > > Because MMIO regions can be described in ASL too. I would rather avoid
> > > having a different behavior depending on whether the MMIO has been
> > > described in a static table or in ASL.
> > > 
> > > Cheers,
> > > 
> > 
> > I also think hooking up a call to XENMEM_add_to_physmap_range in ioremap
> > is not a good approach, as ioremap() is commonly called in so many places.
> > It's not ideal to check "am I dom0 running under Xen" every time
> > ioremap() is called. And as Julien also pointed out, not every call to
> > ioremap() needs a stage-2 mapping.
> 
> I think you misunderstood my previous e-mail. Xen cannot differentiate whether
> an MMIO region is being emulated. So if Xen decides to emulate an AMBA device,
> we would be in the same trouble.
> 
> To be clear, in my previous mail I was pointing out a drawback of this
> solution. But I believe this is the best way to get the stage-2 mapping
> correct while limiting the size of the stage-2 PT for DOM0.

Right. And it wouldn't be a general-purpose alternative implementation
of ioremap. It would only be for the benefit of ACPI.

In fact, looking at the Linux code, I found
include/acpi/acpi_io.h:acpi_os_ioremap. What we want already exists. We
only need to:

- provide a Xen-based implementation of acpi_os_ioremap (see the rough
  sketch below)
- make sure that acpi_os_ioremap is called instead of ioremap in all
  the instances we care about
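
For illustration only, here is a rough sketch of what a Xen-based
acpi_os_ioremap could look like. This is not a tested implementation:
the helper name xen_acpi_os_ioremap, the page-by-page loop and the lack
of error handling are simplifications, and XENMAPSPACE_dev_mmio is
assumed to be available in the kernel's copy of the Xen public headers
(it was added on the Xen side for this dom0 ACPI use case).

#include <linux/acpi.h>
#include <linux/io.h>
#include <linux/pfn.h>
#include <xen/xen.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/interface.h>

/* Sketch only: ask Xen to map the MMIO region into dom0's stage-2
 * before creating the usual CPU-side mapping. */
static void __iomem *xen_acpi_os_ioremap(acpi_physical_address phys,
                                         acpi_size size)
{
        xen_pfn_t pfn;

        if (!xen_initial_domain())
                return ioremap(phys, size);

        /* Add each MMIO page to dom0's p2m (stage-2).  Pages are added
         * one by one for clarity; a real implementation would batch
         * them and handle errors. */
        for (pfn = PFN_DOWN(phys); pfn <= PFN_DOWN(phys + size - 1); pfn++) {
                xen_ulong_t idx = pfn;
                xen_pfn_t gpfn = pfn;   /* dom0 is mapped 1:1 on ARM */
                int err = 0;
                struct xen_add_to_physmap_range xatp = {
                        .domid = DOMID_SELF,
                        .space = XENMAPSPACE_dev_mmio, /* assumed available */
                        .size  = 1,
                };

                set_xen_guest_handle(xatp.idxs, &idx);
                set_xen_guest_handle(xatp.gpfns, &gpfn);
                set_xen_guest_handle(xatp.errs, &err);

                HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
        }

        /* The CPU-side (stage-1) mapping is still created the usual way. */
        return ioremap(phys, size);
}

With something along these lines wired up as acpi_os_ioremap, the ECAM
regions from the MCFG (and any other MMIO described in static tables or
ASL) would get a stage-2 mapping on demand, rather than Xen having to
map everything up front. How exactly it gets hooked in (at compile time
or behind a runtime xen_initial_domain() check as above) and which
callers need to switch from plain ioremap are the remaining questions.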

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

