
Re: [Xen-devel] [PATCH v4 12/19] xen: add PCI MMIO areas to memory map



On Fri, Nov 02, 2018 at 01:37:31PM +0100, Juergen Gross wrote:
> Add possible PCI space MMIO areas as "Reserved" to the memory map in
> order to avoid using those areas for special Xen pages later.

TBH, I'm not sure this is the best way to solve the issue of where to
map things in the physmap without colliding with either emulated or
passed-through MMIO regions.

IMO the guest should be able to query this from Xen; overall I would
defer this patch until there's a discussion about where to map things
safely in the physmap for autotranslated guests.

> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
> V4: new patch (Roger Pau Monné)
> ---
>  grub-core/kern/i386/xen/pvh.c | 70 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
> 
> diff --git a/grub-core/kern/i386/xen/pvh.c b/grub-core/kern/i386/xen/pvh.c
> index 58e6fefd5..442351d1d 100644
> --- a/grub-core/kern/i386/xen/pvh.c
> +++ b/grub-core/kern/i386/xen/pvh.c
> @@ -20,6 +20,7 @@
>  #include <grub/misc.h>
>  #include <grub/memory.h>
>  #include <grub/mm.h>
> +#include <grub/pci.h>
>  #include <grub/i386/cpuid.h>
>  #include <grub/i386/io.h>
>  #include <grub/xen.h>
> @@ -170,6 +171,73 @@ grub_xen_sort_mmap (void)
>      }
>  }
>  
> +static grub_uint64_t
> +grub_xen_pci_read (grub_pci_address_t addr, grub_uint32_t is_64bit)
> +{
> +  grub_uint64_t val;
> +
> +  val = grub_pci_read (addr);
> +  if (is_64bit)
> +    {
> +      addr += sizeof (grub_uint32_t);
> +      val |= ((grub_uint64_t) grub_pci_read (addr)) << 32;
> +    }
> +
> +  return val;
> +}
> +
> +static void
> +grub_xen_pci_write (grub_pci_address_t addr, grub_uint64_t val,
> +                 grub_uint32_t is_64bit)
> +{
> +  grub_pci_write (addr, (grub_uint32_t) val);
> +  if (is_64bit)
> +    {
> +      addr += sizeof (grub_uint32_t);
> +      grub_pci_write (addr, val >> 32);
> +    }
> +}
> +
> +static int
> +grub_xen_pci_mmap (grub_pci_device_t dev,
> +                grub_pci_id_t pciid __attribute__ ((unused)),
> +                void *data __attribute__ ((unused)))
> +{
> +  int reg;
> +  grub_pci_address_t addr;
> +  grub_uint32_t val;
> +  grub_uint64_t mmio_addr, mmio_size;
> +
> +  if (nr_map_entries == ARRAY_SIZE (map))
> +    return 1;
> +
> +  for (reg = GRUB_PCI_REG_ADDRESSES; reg < GRUB_PCI_REG_CIS_POINTER;
> +       reg += sizeof (grub_uint32_t))
> +    {
> +      addr = grub_pci_make_address (dev, reg);
> +      val = grub_pci_read (addr);
> +      if (val == 0 ||
> +       (val & GRUB_PCI_ADDR_SPACE_MASK) != GRUB_PCI_ADDR_SPACE_MEMORY)
> +     continue;
> +
> +      val &= GRUB_PCI_ADDR_MEM_TYPE_MASK;
> +      mmio_addr = grub_xen_pci_read (addr, val);
> +      grub_xen_pci_write (addr, ~0ULL, val);

You should make sure memory decoding is disabled on the command
register before sizing the BARs.
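
Something along these lines could work (untested sketch; the helper
names are made up, and it assumes the GRUB_PCI_REG_COMMAND /
GRUB_PCI_COMMAND_MEM_ENABLED definitions from <grub/pci.h>):

/* Untested sketch: save the command register and clear the memory
   space enable bit so the device stops decoding accesses while the
   BARs hold the all-ones sizing pattern.  */
static grub_uint16_t
grub_xen_pci_disable_mem_decode (grub_pci_device_t dev)
{
  grub_pci_address_t addr;
  grub_uint16_t cmd;

  addr = grub_pci_make_address (dev, GRUB_PCI_REG_COMMAND);
  cmd = grub_pci_read_word (addr);
  grub_pci_write_word (addr, cmd & ~GRUB_PCI_COMMAND_MEM_ENABLED);

  return cmd;
}

/* Restore the previously saved command register once sizing is done.  */
static void
grub_xen_pci_restore_command (grub_pci_device_t dev, grub_uint16_t cmd)
{
  grub_pci_write_word (grub_pci_make_address (dev, GRUB_PCI_REG_COMMAND),
                       cmd);
}

i.e. call the first one at the top of grub_xen_pci_mmap and restore the
saved value before returning.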

> +      mmio_size = ~(grub_xen_pci_read (addr, val) & ~0x0fULL) + 1;

Isn't there a define for this 0xf value?
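
If I remember correctly <grub/pci.h> already has GRUB_PCI_ADDR_MEM_MASK
(~0xf), so this could become something like (untested):

      /* Untested; relies on GRUB_PCI_ADDR_MEM_MASK (~0xf) from
         <grub/pci.h>, which evaluates to the same value as ~0x0fULL
         here.  */
      mmio_size = ~(grub_xen_pci_read (addr, val)
                    & (grub_uint64_t) GRUB_PCI_ADDR_MEM_MASK) + 1;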

> +      grub_xen_pci_write (addr, mmio_addr, val);

I've come across BARs with size 0, which would just waste an entry in
the memory map here.
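
Something like the following (untested) right after restoring the
original BAR value would avoid that:

      /* Untested: an unimplemented BAR reads back all zeroes after the
         all-ones write, which makes the computed size 0 -- skip it
         instead of adding a zero-length reserved entry.  */
      if (mmio_size == 0)
        continue;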

> +      map[nr_map_entries].type = GRUB_MEMORY_RESERVED;
> +      map[nr_map_entries].addr = mmio_addr;
> +      map[nr_map_entries].len = mmio_size;
> +      nr_map_entries++;

Also, I'm not sure about the size of the map array, but keep in mind
that big systems can have a huge number of PCI devices (and thus BARs),
which could easily overrun the array.
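
As a rough, untested idea (grub_xen_add_reserved is just a hypothetical
helper name), folding each BAR range into an adjacent or overlapping
reserved entry before claiming a new slot would at least make better
use of the available entries:

/* Untested sketch: merge the new range into an existing adjacent or
   overlapping reserved entry so each BAR does not necessarily consume
   a fresh map[] slot.  */
static int
grub_xen_add_reserved (grub_uint64_t addr, grub_uint64_t len)
{
  unsigned int i;

  for (i = 0; i < nr_map_entries; i++)
    {
      grub_uint64_t start, end;

      if (map[i].type != GRUB_MEMORY_RESERVED
          || addr > map[i].addr + map[i].len
          || map[i].addr > addr + len)
        continue;

      /* Extend the existing entry to cover both ranges.  */
      start = map[i].addr < addr ? map[i].addr : addr;
      end = map[i].addr + map[i].len > addr + len
            ? map[i].addr + map[i].len : addr + len;
      map[i].addr = start;
      map[i].len = end - start;
      return 0;
    }

  if (nr_map_entries == ARRAY_SIZE (map))
    return 1;

  map[nr_map_entries].type = GRUB_MEMORY_RESERVED;
  map[nr_map_entries].addr = addr;
  map[nr_map_entries].len = len;
  nr_map_entries++;
  return 0;
}

That doesn't answer the sizing question, but it avoids burning one slot
per BAR on machines with many devices.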

Thanks, Roger.


 

