
[Xen-devel] RE: [PATCH 1/3] x86-64/MMCFG: correct base address computation for regions not starting at bus 0



>>> On 20.07.11 at 22:33, "Kay, Allen M" <allen.m.kay@xxxxxxxxx> wrote:
> Hi Jan,
> 
> I'm not sure if I understand the patch correctly...
> 
> My understanding of the flow for the PCI MMCFG read case is
> pci_mmcfg_read(...,bus,devfn,...)->pci_dev_base()->get_virt().  If you modify
> "bus" in get_virt(), isn't pci_mmcfg_read() going to return the config space
> value of a different bus/devfn than originally intended?

No, as said in the patch description, the modification of "bus" in get_virt()
compensates for the adjustment made when establishing the mapping (in
mcfg_ioremap()), which now accounts for start_bus_number.

The problem addressed by this patch is that the base address read from
the ACPI structure refers to what would be bus 0 (even if bus 0 isn't
included in the descriptor's bus range). The corresponding Linux code
can serve as another reference here.
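The decode described by the PCI Firmware spec is roughly the following
(sketch only; the helper and its parameter names are mine, not Xen's or
Linux's, and the example values are hypothetical):

#include <stdint.h>
#include <stdio.h>

/* ECAM decode per the PCI Firmware spec: the MCFG "address" field is the
 * base the region would have if it started at bus 0, even when
 * start_bus_number > 0. */
static uint64_t ecam_addr(uint64_t mcfg_base, unsigned bus, unsigned dev,
                          unsigned func, unsigned reg)
{
    return mcfg_base + ((uint64_t)bus << 20) + ((uint64_t)dev << 15)
                     + ((uint64_t)func << 12) + reg;
}

int main(void)
{
    /* Hypothetical descriptor: base 0xe0000000, buses 0x20-0xff.  The
     * first byte actually decoded is for bus 0x20, not the base itself. */
    printf("first decoded address: %#llx\n",
           (unsigned long long)ecam_addr(0xe0000000ULL, 0x20, 0, 0, 0));
    return 0;
}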

> Maybe we should just return -EINVAL if it is out of range.

I don't think so - the range checking is already being done as needed,
and we must not preclude the possibility of two descriptors existing for
the same segment (but with different bus ranges).
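For example (hypothetical entries in a plain user-space sketch; the struct
below only mirrors the fields get_virt() actually tests): with two
descriptors for segment 0 covering buses 0x00-0x3f and 0x40-0xff, a lookup
for bus 0x45 must simply move past the first entry rather than fail:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mcfg_entry {                     /* only the fields get_virt() tests */
    uint64_t address;
    uint16_t pci_segment;
    uint8_t  start_bus_number, end_bus_number;
};

static const struct mcfg_entry entries[] = {
    { 0xe0000000ULL, 0, 0x00, 0x3f },
    { 0xf0000000ULL, 0, 0x40, 0xff },   /* same segment, other bus range */
};

static const struct mcfg_entry *lookup(uint16_t seg, uint8_t bus)
{
    for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
        if (entries[i].pci_segment == seg &&
            entries[i].start_bus_number <= bus &&
            entries[i].end_bus_number >= bus)
            return &entries[i];
    return NULL;                        /* genuinely unhandled bus */
}

int main(void)
{
    const struct mcfg_entry *e = lookup(0, 0x45);
    printf("bus 0x45 handled by entry with base %#llx\n",
           e ? (unsigned long long)e->address : 0ULL);
    return 0;
}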

Jan

> Allen
> 
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx] 
> Sent: Tuesday, July 19, 2011 1:42 AM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx 
> Cc: Kay, Allen M
> Subject: [PATCH 1/3] x86-64/MMCFG: correct base address computation for 
> regions not starting at bus 0
> 
> As per the specification, the base address reported by ACPI is the one that 
> would be used if the region started at bus 0. Hence the start_bus_number 
> offset needs to be added not only to the virtual address, but also the 
> physical one when establishing the mapping, and it then needs to be 
> subtracted when obtaining the virtual address for doing accesses.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
> 
> --- a/xen/arch/x86/x86_64/mmconfig_64.c
> +++ b/xen/arch/x86/x86_64/mmconfig_64.c
> @@ -25,7 +25,7 @@ struct mmcfg_virt {
>  static struct mmcfg_virt *pci_mmcfg_virt;
>  static int __initdata mmcfg_pci_segment_shift;
>  
> -static char __iomem *get_virt(unsigned int seg, unsigned bus)
> +static char __iomem *get_virt(unsigned int seg, unsigned int *bus)
>  {
>      struct acpi_mcfg_allocation *cfg;
>      int cfg_num;
> @@ -33,9 +33,11 @@ static char __iomem *get_virt(unsigned i
>      for (cfg_num = 0; cfg_num < pci_mmcfg_config_num; cfg_num++) {
>          cfg = pci_mmcfg_virt[cfg_num].cfg;
>          if (cfg->pci_segment == seg &&
> -            (cfg->start_bus_number <= bus) &&
> -            (cfg->end_bus_number >= bus))
> +            (cfg->start_bus_number <= *bus) &&
> +            (cfg->end_bus_number >= *bus)) {
> +            *bus -= cfg->start_bus_number;
>              return pci_mmcfg_virt[cfg_num].virt;
> +        }
>      }
>  
>      /* Fall back to type 0 */
> @@ -46,7 +48,7 @@ static char __iomem *pci_dev_base(unsign
>  {
>      char __iomem *addr;
>  
> -    addr = get_virt(seg, bus);
> +    addr = get_virt(seg, &bus);
>      if (!addr)
>          return NULL;
>      return addr + ((bus << 20) | (devfn << 12));
> @@ -121,8 +123,11 @@ static void __iomem * __init mcfg_iorema
>      if (virt + size < virt || virt + size > PCI_MCFG_VIRT_END)
>          return NULL;
>  
> -    map_pages_to_xen(virt, cfg->address >> PAGE_SHIFT,
> -                     size >> PAGE_SHIFT, PAGE_HYPERVISOR_NOCACHE);
> +    if (map_pages_to_xen(virt,
> +                         (cfg->address >> PAGE_SHIFT) +
> +                         (cfg->start_bus_number << (20 - PAGE_SHIFT)),
> +                         size >> PAGE_SHIFT, PAGE_HYPERVISOR_NOCACHE))
> +        return NULL;
>  
>      return (void __iomem *) virt;
>  }






 

