
Re: [Xen-devel] [PATCH v7 5/9] PCI: Add pci_iomap_wc() variants



On Fri, 2015-06-19 at 15:08 -0700, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@xxxxxxxx>
> 
> PCI BARs tell us whether prefetching is safe, but they don't say anything
> about write combining (WC).  WC changes ordering rules and allows writes to
> be collapsed, so it's not safe in general to use it on a prefetchable
> region.

Well, the PCIe spec at least specifies that a prefetchable BAR also
tolerates write merging... 
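
(Not something this patch has to change, but to illustrate the point: a driver
that only trusts WC on BARs the device marks prefetchable could gate the new
call on the resource flag. A minimal, hypothetical helper, assuming the usual
<linux/pci.h>/<linux/io.h> includes and an already-probed pci_dev; the helper
name is invented, pci_resource_flags() and IORESOURCE_PREFETCH are the existing
interfaces:

	/* Prefer WC only when the BAR is advertised as prefetchable. */
	static void __iomem *map_bar_maybe_wc(struct pci_dev *pdev, int bar)
	{
		if (pci_resource_flags(pdev, bar) & IORESOURCE_PREFETCH)
			return pci_iomap_wc(pdev, bar, 0);	/* 0 == map whole BAR */
		return pci_iomap(pdev, bar, 0);
	}
)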

> Add pci_iomap_wc() and pci_iomap_wc_range() so drivers can take advantage
> of write combining when they know it's safe.
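
For readers following along, the intended call pattern is presumably the same
as for pci_iomap(), just with the WC variant. A rough, hypothetical fbdev-style
sketch (names invented, error paths trimmed):

	#include <linux/pci.h>
	#include <linux/io.h>

	static void __iomem *my_fb_regs;	/* illustrative only */

	static int my_fb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		my_fb_regs = pci_iomap_wc(pdev, 0, 0);	/* maxlen 0: map all of BAR 0 */
		if (!my_fb_regs)
			return -ENOMEM;
		return 0;
	}

	static void my_fb_remove(struct pci_dev *pdev)
	{
		pci_iounmap(pdev, my_fb_regs);
	}

Teardown is unchanged: pci_iounmap() works for mappings created by any of the
pci_iomap*() variants.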
> 
> On architectures that don't fully support WC, e.g., x86 without PAT,
> drivers for legacy framebuffers may get some of the benefit by using
> arch_phys_wc_add() in addition to pci_iomap_wc().  But arch_phys_wc_add()
> is unreliable and should be avoided in general.  On x86, it uses MTRRs,
> which are limited in number and size, so the results will vary based on
> driver loading order.
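
If it helps review, the combination described above would look roughly like
this (a sketch only, assuming <linux/pci.h> and <linux/io.h>; the helper name
and BAR number are made up):

	/* Map BAR 0 with WC via PAT, plus an MTRR-based hint for non-PAT x86. */
	static int my_fb_map_wc(struct pci_dev *pdev, void __iomem **fb, int *wc_cookie)
	{
		*fb = pci_iomap_wc(pdev, 0, 0);
		if (!*fb)
			return -ENOMEM;
		/* Effectively a no-op where MTRRs are not available. */
		*wc_cookie = arch_phys_wc_add(pci_resource_start(pdev, 0),
					      pci_resource_len(pdev, 0));
		return 0;
	}

	/* Teardown: arch_phys_wc_del(*wc_cookie); pci_iounmap(pdev, *fb); */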
> 
> The goals of adding pci_iomap_wc() are to:
> 
> - Give drivers an architecture-independent way to use WC so they can stop
>   using interfaces like mtrr_add() (on x86, pci_iomap_wc() uses
>   PAT when available)
> 
> - Move toward using _PAGE_CACHE_MODE_UC, not _PAGE_CACHE_MODE_UC_MINUS,
>   on x86 on ioremap_nocache() (see de33c442ed2a ("x86 PAT: fix
>   performance drop for glx, use UC minus for ioremap(), ioremap_nocache()
>   and pci_mmap_page_range()"))
> 
> Link: 
> http://lkml.kernel.org/r/1426893517-2511-6-git-send-email-mcgrof@xxxxxxxxxxxxxxxx
> Original-posting: 
> http://lkml.kernel.org/r/1432163293-20965-1-git-send-email-mcgrof@xxxxxxxxxxxxxxxx
> Cc: Toshi Kani <toshi.kani@xxxxxx>
> Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
> Cc: Suresh Siddha <sbsiddha@xxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Juergen Gross <jgross@xxxxxxxx>
> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> Cc: Dave Airlie <airlied@xxxxxxxxxx>
> Cc: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
> Cc: Antonino Daplas <adaplas@xxxxxxxxx>
> Cc: Jean-Christophe Plagniol-Villard <plagnioj@xxxxxxxxxxxx>
> Cc: Tomi Valkeinen <tomi.valkeinen@xxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Cc: Arnd Bergmann <arnd@xxxxxxxx>
> Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>
> Cc: venkatesh.pallipadi@xxxxxxxxx
> Cc: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
> Cc: Ville Syrjälä <syrjala@xxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxx>
> Cc: Davidlohr Bueso <dbueso@xxxxxxx>
> Cc: konrad.wilk@xxxxxxxxxx
> Cc: ville.syrjala@xxxxxxxxxxxxxxx
> Cc: david.vrabel@xxxxxxxxxx
> Cc: jbeulich@xxxxxxxx
> Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Cc: linux-fbdev@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Signed-off-by: Luis R. Rodriguez <mcgrof@xxxxxxxx>
> ---
>  include/asm-generic/pci_iomap.h | 14 ++++++++++
>  lib/pci_iomap.c                 | 61 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 75 insertions(+)
> 
> diff --git a/include/asm-generic/pci_iomap.h b/include/asm-generic/pci_iomap.h
> index 7389c87..b1e17fc 100644
> --- a/include/asm-generic/pci_iomap.h
> +++ b/include/asm-generic/pci_iomap.h
> @@ -15,9 +15,13 @@ struct pci_dev;
>  #ifdef CONFIG_PCI
>  /* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
>  extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
> +extern void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long max);
>  extern void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
>                                    unsigned long offset,
>                                    unsigned long maxlen);
> +extern void __iomem *pci_iomap_wc_range(struct pci_dev *dev, int bar,
> +                                     unsigned long offset,
> +                                     unsigned long maxlen);
>  /* Create a virtual mapping cookie for a port on a given PCI device.
>   * Do not call this directly, it exists to make it easier for architectures
>   * to override */
> @@ -34,12 +38,22 @@ static inline void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned lon
>       return NULL;
>  }
>  
> +static inline void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long max)
> +{
> +     return NULL;
> +}
>  static inline void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
>                                           unsigned long offset,
>                                           unsigned long maxlen)
>  {
>       return NULL;
>  }
> +static inline void __iomem *pci_iomap_wc_range(struct pci_dev *dev, int bar,
> +                                            unsigned long offset,
> +                                            unsigned long maxlen)
> +{
> +     return NULL;
> +}
>  #endif
>  
>  #endif /* __ASM_GENERIC_IO_H */
> diff --git a/lib/pci_iomap.c b/lib/pci_iomap.c
> index bcce5f1..9604dcb 100644
> --- a/lib/pci_iomap.c
> +++ b/lib/pci_iomap.c
> @@ -52,6 +52,46 @@ void __iomem *pci_iomap_range(struct pci_dev *dev,
>  EXPORT_SYMBOL(pci_iomap_range);
>  
>  /**
> + * pci_iomap_wc_range - create a virtual WC mapping cookie for a PCI BAR
> + * @dev: PCI device that owns the BAR
> + * @bar: BAR number
> + * @offset: map memory at the given offset in BAR
> + * @maxlen: max length of the memory to map
> + *
> + * Using this function you will get a __iomem address to your device BAR.
> + * You can access it using ioread*() and iowrite*(). These functions hide
> + * the details of whether this is an MMIO or PIO address space and will just
> + * do what you expect from them in the correct way. When possible, write
> + * combining is used.
> + *
> + * @maxlen specifies the maximum length to map. If you want to get access to
> + * the complete BAR from offset to the end, pass %0 here.
> + */
> +void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
> +                              int bar,
> +                              unsigned long offset,
> +                              unsigned long maxlen)
> +{
> +     resource_size_t start = pci_resource_start(dev, bar);
> +     resource_size_t len = pci_resource_len(dev, bar);
> +     unsigned long flags = pci_resource_flags(dev, bar);
> +
> +     if (len <= offset || !start)
> +             return NULL;
> +     len -= offset;
> +     start += offset;
> +     if (maxlen && len > maxlen)
> +             len = maxlen;
> +     if (flags & IORESOURCE_IO)
> +             return NULL;
> +     if (flags & IORESOURCE_MEM)
> +             return ioremap_wc(start, len);
> +     /* What? */
> +     return NULL;
> +}
> +EXPORT_SYMBOL_GPL(pci_iomap_wc_range);
> +
> +/**
>   * pci_iomap - create a virtual mapping cookie for a PCI BAR
>   * @dev: PCI device that owns the BAR
>   * @bar: BAR number
> @@ -70,4 +110,25 @@ void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
>       return pci_iomap_range(dev, bar, 0, maxlen);
>  }
>  EXPORT_SYMBOL(pci_iomap);
> +
> +/**
> + * pci_iomap_wc - create a virtual WC mapping cookie for a PCI BAR
> + * @dev: PCI device that owns the BAR
> + * @bar: BAR number
> + * @maxlen: length of the memory to map
> + *
> + * Using this function you will get a __iomem address to your device BAR.
> + * You can access it using ioread*() and iowrite*(). These functions hide
> + * the details of whether this is an MMIO or PIO address space and will just
> + * do what you expect from them in the correct way. When possible, write
> + * combining is used.
> + *
> + * @maxlen specifies the maximum length to map. If you want to get access to
> + * the complete BAR without checking for its length first, pass %0 here.
> + */
> +void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long maxlen)
> +{
> +     return pci_iomap_wc_range(dev, bar, 0, maxlen);
> +}
> +EXPORT_SYMBOL_GPL(pci_iomap_wc);
>  #endif /* CONFIG_PCI */


