Re: [Xen-devel] [PATCH 5/5] xen: arm: handle PCI DT node ranges and interrupt-map properties
On 18/02/2015 14:37, Ian Campbell wrote:
> On Wed, 2015-02-18 at 14:19 +0000, Julien Grall wrote:
>
> I think so, and we probably should consider the two cases separately
> since the right answer could reasonably differ for different resource
> types.
>
> I am reasonably convinced that for MMIO (+IO+CFG space) we should map
> everything as described by the ranges property of the top-most node;
> it can be considered an analogue to / extension of the reg property
> of that node.

Agreed.

> For IRQ I'm not so sure. It's possible that routing the IRQ at
> pci_add_device time might be better, or fit in better with e.g. the
> ACPI architecture, but mapping everything described in interrupt-map
> at start of day is also an option, and a reasonably simple one,
> probably.

I agree that it's simple. Are we sure that we would be able to get a
"better" solution later without modifying the kernel? If not, we may
need to keep this solution forever.

> This isn't to do with IPA->PA translations but to do with translations
> between different PA addressing regimes, i.e. the different addressing
> schemes of different buses.

I meant bus address. The name "intermediate address" was misused, sorry.

> Let's say we have a system with a PCI-ROOT device exposing a PCI bus,
> which in turn contains a PCI-BRIDGE which, for the sake of argument,
> let's say is a PCI-FOOBUS bridge.

I'm still confused: what prevents the PCI-ROOT device from being
connected to another bus?
In device tree format, that would give something like:
/ {
  soc {
     ranges = <...>;

     pcie {
       ranges = <...>;
     };
  };
};
The address retrieved from the PCI-ROOT would be a bus address and not a 
physical address.
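
To make that distinction concrete, a fuller, purely illustrative sketch of
such a hierarchy might look like the following. All addresses, the window
size, the SPI number and the compatible strings are invented for the
example and are not taken from the patch under discussion.

/dts-v1/;

/ {
    #address-cells = <1>;
    #size-cells = <1>;

    soc {
        compatible = "simple-bus";
        #address-cells = <1>;
        #size-cells = <1>;
        /* Empty ranges: soc bus addresses are identical to CPU
           physical addresses. */
        ranges;

        gic: interrupt-controller@2c001000 {
            compatible = "arm,gic-400";
            interrupt-controller;
            #interrupt-cells = <3>;
            #address-cells = <0>;
            reg = <0x2c001000 0x1000>,
                  <0x2c002000 0x2000>;
        };

        pcie@40000000 {
            compatible = "pci-host-ecam-generic";
            device_type = "pci";
            #address-cells = <3>;
            #size-cells = <2>;
            bus-range = <0x0 0xf>;
            reg = <0x40000000 0x01000000>;    /* ECAM (CFG) space */

            /* One 32-bit non-prefetchable memory window: PCI bus
               address 0x50000000 maps to soc address 0x50000000
               (and hence, through the soc node's empty ranges, to
               the same CPU physical address), 256MB long.  The two
               numbers only coincide because this example says so:
               a BAR value seen by a device is a bus address, and
               only the ranges property relates it to a physical
               address. */
            ranges = <0x02000000 0x0 0x50000000
                      0x50000000
                      0x0 0x10000000>;

            /* Route legacy INTA from any device on the bus to GIC
               SPI 100, level triggered, active high. */
            #interrupt-cells = <1>;
            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
            interrupt-map = <0x0 0x0 0x0 0x1  &gic 0x0 100 0x4>;
        };
    };
};

With a layout like this, "mapping everything described by ranges" at
start of day would mean mapping the 256MB window at 0x50000000, while
the interrupt-map question above is about when the route to SPI 100 is
established: eagerly at boot, or only when pci_add_device runs.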
Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel