
Re: [Xen-devel] [PATCH v3 03/10] IOMMU/MMU: enhance the call trees of IOMMU unmapping and mapping



> From: Xu, Quan
> Sent: Friday, April 29, 2016 5:25 PM
> 
> When an IOMMU mapping operation fails, we issue a best-effort rollback:
> stop further IOMMU mapping, unmap the previously established IOMMU
> mappings, and report the error up the call trees. When rollback is not
> feasible for the hardware domain (in the early initialization phase, or
> as a trade-off against complexity), we proceed on a best-effort basis,
> only logging an error message.
> 
> IOMMU unmapping should continue despite an error, in an attempt at
> best-effort cleanup.
> 
> Signed-off-by: Quan Xu <quan.xu@xxxxxxxxx>
> 
> CC: Keir Fraser <keir@xxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
> CC: Kevin Tian <kevin.tian@xxxxxxxxx>
> CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> ---
>  xen/arch/x86/mm.c               | 13 ++++++++-----
>  xen/arch/x86/mm/p2m-ept.c       | 27 +++++++++++++++++++++++++--
>  xen/arch/x86/mm/p2m-pt.c        | 24 ++++++++++++++++++++----
>  xen/arch/x86/mm/p2m.c           | 11 +++++++++--
>  xen/drivers/passthrough/iommu.c | 14 +++++++++++++-
>  5 files changed, 75 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index a42097f..427097d 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2467,7 +2467,7 @@ static int __get_page_type(struct page_info *page, unsigned long type,
>                             int preemptible)
>  {
>      unsigned long nx, x, y = page->u.inuse.type_info;
> -    int rc = 0;
> +    int rc = 0, ret = 0;
> 
>      ASSERT(!(type & ~(PGT_type_mask | PGT_pae_xen_l2)));
> 
> @@ -2578,11 +2578,11 @@ static int __get_page_type(struct page_info *page, unsigned long type,
>          if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
>          {
>              if ( (x & PGT_type_mask) == PGT_writable_page )
> -                iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
> +                ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
>              else if ( type == PGT_writable_page )
> -                iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> -                               page_to_mfn(page),
> -                               IOMMUF_readable|IOMMUF_writable);
> +                ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
> +                                     page_to_mfn(page),
> +                                     IOMMUF_readable|IOMMUF_writable);
>          }
>      }
> 
> @@ -2599,6 +2599,9 @@ static int __get_page_type(struct page_info *page, unsigned long type,
>      if ( (x & PGT_partial) && !(nx & PGT_partial) )
>          put_page(page);
> 
> +    if ( !rc )
> +        rc = ret;
> +
>      return rc;
>  }

I know there were quite a few discussions around the above change before
(sorry, I don't remember all of them). Based on my mental picture we should
return the error from where it first occurs. The above change, however,
favors an error from the later "rc = alloc_page_type" over an earlier
iommu_map/unmap_page error. Is that what we want? If there is a reason we
cannot return immediately upon an iommu_map/unmap failure, it would be more
reasonable to invert the above check, like below:

        if ( !ret )
                ret = rc;

        return ret;

> 
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 1ed5b47..df87944 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -821,6 +821,8 @@ out:
>      if ( needs_sync )
>          ept_sync_domain(p2m);
> 
> +    ret = 0;
> +
>      /* For host p2m, may need to change VT-d page table.*/
>      if ( rc == 0 && p2m_is_hostp2m(p2m) && need_iommu(d) &&
>           need_modify_vtd_table )
> @@ -831,11 +833,29 @@ out:
>          {
>              if ( iommu_flags )
>                  for ( i = 0; i < (1 << order); i++ )
> -                    iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +                {
> +                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
> +
> +                    if ( !ret && unlikely(rc) )

I think you should move the check of ret before the iommu_map_page call,
since we should stop mapping upon any error (including one already recorded
from the best-effort unmap side).

> +                    {
> +                        while ( i-- )
> +                            iommu_unmap_page(d, gfn + i);
> +
> +                        ret = rc;
> +                        break;
> +                    }
> +                }
>              else
>                  for ( i = 0; i < (1 << order); i++ )
> -                    iommu_unmap_page(d, gfn + i);
> +                {
> +                    rc = iommu_unmap_page(d, gfn + i);
> +
> +                    if ( !ret && unlikely(rc) )
> +                        ret = rc;
> +                }
>          }
> +
> +        rc = 0;
>      }
> 
Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

