
Re: [Xen-devel] [PATCH] IOMMU/x86: avoid pages without GFN in page table creation/updating



Jan Beulich wrote on 2015-05-07:
> Handing INVALID_GFN to functions like hd->platform_ops->map_page() just
> can't do any good, and the ioreq server code results in such pages being
> on the list of ones owned by a guest.
> 
> While - as suggested by Tim - we should use get_gfn()/put_gfn() there to
> eliminate races, we really can't due to holding the domain's page alloc
> lock. Ultimately arch_iommu_populate_page_table() may need to be switched
> to be GFN based. Here is what Tim said in this regard:
> "Ideally this loop would be iterating over all gfns in the p2m rather than
> over all owned MFNs. As long as needs_iommu gets set first, such a loop
> could safely be paused and restarted without worrying about concurrent
> updates. The code could even stay in this file, though exposing an
> iterator from the p2m code would be a lot more efficient."
> 
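For illustration only, a rough sketch of the gfn-based loop Tim describes might
look like the below. The p2m_first_gfn()/p2m_next_gfn() iterators are purely
hypothetical (no such p2m interface exists today), lookup_mfn() is a placeholder
for whatever gfn-to-mfn query would be used, and locking/refcounting is glossed
over; this only shows the shape of the loop, not a proposed implementation.

    /* Sketch only: hypothetical p2m iterator, not part of the patch. */
    static int populate_by_gfn(struct domain *d)
    {
        struct hvm_iommu *hd = domain_hvm_iommu(d);
        unsigned long gfn;
        int rc = 0;

        /* Per Tim's remark: mark the domain as needing the IOMMU first, so
         * concurrent p2m updates are also reflected into the IOMMU tables. */
        d->need_iommu = 1;

        for ( gfn = p2m_first_gfn(d);             /* hypothetical iterator */
              gfn != INVALID_GFN && !rc;
              gfn = p2m_next_gfn(d, gfn) )        /* hypothetical iterator */
        {
            unsigned long mfn = lookup_mfn(d, gfn);   /* placeholder lookup */

            if ( mfn == INVALID_MFN )
                continue;

            rc = hd->platform_ops->map_page(d, gfn, mfn,
                                            IOMMUF_readable | IOMMUF_writable);

            /* Only 'gfn' carries loop state, so this is a safe point to pause
             * and later restart without worrying about concurrent updates. */
            if ( !rc && hypercall_preempt_check() )
                return -ERESTART;    /* caller would re-invoke, resuming here */
        }

        return rc;
    }
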
> Original by Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, using further
> suggestions from Tim Deegan <tim@xxxxxxx>.
> 
> Reported-by: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Tested-by: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
> Acked-by: Tim Deegan <tim@xxxxxxx>
> 
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -557,6 +557,10 @@ static int update_paging_mode(struct dom
>      unsigned long old_root_mfn;
>      struct hvm_iommu *hd = domain_hvm_iommu(d);
> 
> +    if ( gfn == INVALID_MFN )
> +        return -EADDRNOTAVAIL;
> +    ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> +
>      level = hd->arch.paging_mode;
>      old_root = hd->arch.root_table;
>      offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
> @@ -729,12 +733,15 @@ int amd_iommu_unmap_page(struct domain *
>       * we might need a deeper page table for lager gfn now */
>      if ( is_hvm_domain(d) )
>      {
> -        if ( update_paging_mode(d, gfn) )
> +        int rc = update_paging_mode(d, gfn);
> +
> +        if ( rc )
>          {
>              spin_unlock(&hd->arch.mapping_lock);
>              AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
> -            domain_crash(d);
> -            return -EFAULT;
> +            if ( rc != -EADDRNOTAVAIL )
> +                domain_crash(d);
> +            return rc;
>          }
>      }
> 
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -482,7 +482,6 @@ struct qinval_entry {
>  #define VTD_PAGE_TABLE_LEVEL_3  3
>  #define VTD_PAGE_TABLE_LEVEL_4  4
> 
> -#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
>  #define MAX_IOMMU_REGS 0xc0
> 
>  extern struct list_head acpi_drhd_units;
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -59,10 +59,17 @@ int arch_iommu_populate_page_table(struc
>          if ( has_hvm_container_domain(d) ||
>              (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
>          {
> -            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
> -            rc = hd->platform_ops->map_page(
> -                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
> -                IOMMUF_readable|IOMMUF_writable);
> +            unsigned long mfn = page_to_mfn(page);
> +            unsigned long gfn = mfn_to_gmfn(d, mfn);
> +
> +            if ( gfn != INVALID_MFN )
> +            {
> +                ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
> +                BUG_ON(SHARED_M2P(gfn));

It seems the ASSERT() is unnecessary; the BUG_ON() is enough to cover it.
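
Not arguing either way, just for context: the two checks differ in when they
fire. In Xen, ASSERT() is only active in debug builds, while BUG_ON() is
checked in all builds, so a release build would only retain the BUG_ON(). A
simplified sketch of the semantics; the real definitions live in
xen/include/xen/lib.h and differ in detail:

    #ifndef NDEBUG
    #define ASSERT(p)  do { if ( unlikely(!(p)) ) assert_failed(#p); } while (0)
    #else
    #define ASSERT(p)  do { if ( 0 && (p) ) {} } while (0)   /* compiled out */
    #endif

    #define BUG_ON(p)  do { if ( unlikely(p) ) BUG(); } while (0)  /* always on */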

> +                rc = hd->platform_ops->map_page(d, gfn, mfn,
> +                                                IOMMUF_readable |
> +                                                IOMMUF_writable);
> +            }
>              if ( rc )
>              {
>                  page_list_add(page, &d->page_list);
> --- a/xen/include/asm-x86/hvm/iommu.h
> +++ b/xen/include/asm-x86/hvm/iommu.h
> @@ -46,6 +46,8 @@ struct g2m_ioport {
>      unsigned int np;
>  };
> 
> +#define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
> +
>  struct arch_hvm_iommu
>  {
>      u64 pgd_maddr;                 /* io page directory machine address */
> --- a/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> +++ b/xen/include/asm-x86/hvm/svm/amd-iommu-defs.h
> @@ -464,8 +464,6 @@
>  #define IOMMU_CONTROL_DISABLED       0
>  #define IOMMU_CONTROL_ENABLED        1
> 
> -#define DEFAULT_DOMAIN_ADDRESS_WIDTH    48
> -
>  /* interrupt remapping table */
>  #define INT_REMAP_ENTRY_REMAPEN_MASK    0x00000001
>  #define INT_REMAP_ENTRY_REMAPEN_SHIFT   0
>

Best regards,
Yang


