Re: [Xen-devel] [PATCH v5 09/17] xen/arm: Rework the interface of p2m_cache_flush and use typesafe gfn



On Tue, 28 Jun 2016, Julien Grall wrote:
> p2m_cache_flush expects GFNs as parameters, not MFNs. Rename the
> variable to *gfn* and use the typesafe gfn_t to avoid possible misuse.
> 
> Also, modify the prototype of the function to describe the range
> using the start and the number of GFNs. This avoids having to wonder
> whether the end is inclusive or exclusive.
> 
> Note that the type of the parameter 'start' is changed from xen_pfn_t
> (aka uint64_t) to gfn_t (aka unsigned long). This means that a truncation
> will occur on ARM32. This is fine because a GFN is always encoded in at
> most 28 bits (40-bit guest-physical addresses with 4 KiB pages).
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>

Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>


> ---
>     Changes in v4:
>         - This patch was originally called "xen/arm: p2m_cache_flush:
>         Use the correct terminology and typesafe gfn"
>         - Describe the range using the start and the number of GFNs.
> 
>     Changes in v3:
>         - Add a word in the commit message about the truncation.
> 
>     Changes in v2:
>         - Drop _gfn suffix
> ---
>  xen/arch/arm/domctl.c     |  2 +-
>  xen/arch/arm/p2m.c        | 11 ++++++-----
>  xen/include/asm-arm/p2m.h |  2 +-
>  3 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 30453d8..f61f98a 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -30,7 +30,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>          if ( e < s )
>              return -EINVAL;
>  
> -        return p2m_cache_flush(d, s, e);
> +        return p2m_cache_flush(d, _gfn(s), domctl->u.cacheflush.nr_pfns);
>      }
>      case XEN_DOMCTL_bind_pt_irq:
>      {
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 54a363a..1cfb62b 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1469,16 +1469,17 @@ int relinquish_p2m_mapping(struct domain *d)
>                                d->arch.p2m.default_access);
>  }
>  
> -int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr)
>  {
>      struct p2m_domain *p2m = &d->arch.p2m;
> +    gfn_t end = gfn_add(start, nr);
>  
> -    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> -    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +    start = gfn_max(start, _gfn(p2m->lowest_mapped_gfn));
> +    end = gfn_min(end, _gfn(p2m->max_mapped_gfn));
>  
>      return apply_p2m_changes(d, CACHEFLUSH,
> -                             pfn_to_paddr(start_mfn),
> -                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(gfn_x(start)),
> +                             pfn_to_paddr(gfn_x(end)),
>                               pfn_to_paddr(mfn_x(INVALID_MFN)),
>                               MATTR_MEM, 0, p2m_invalid,
>                               d->arch.p2m.default_access);
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index f204482..8a96e68 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -139,7 +139,7 @@ void p2m_dump_info(struct domain *d);
>  mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
>  
>  /* Clean & invalidate caches corresponding to a region of guest address space */
> -int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +int p2m_cache_flush(struct domain *d, gfn_t start, unsigned long nr);
>  
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
> -- 
> 1.9.1
> 
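For readers unfamiliar with Xen's typesafe wrappers, here is a minimal,
self-contained sketch of the gfn_t pattern the patch relies on. It is a
simplified stand-in for Xen's TYPE_SAFE machinery rather than the real
definitions, and the bounds used in main() are made-up values for
illustration only:

    #include <stdio.h>

    /* Simplified stand-in for Xen's TYPE_SAFE machinery: a one-member
     * struct makes gfn_t a distinct type from a raw unsigned long (or a
     * hypothetical mfn_t), so mixing them up no longer compiles. */
    typedef struct { unsigned long gfn; } gfn_t;

    static inline gfn_t _gfn(unsigned long g) { return (gfn_t){ g }; }
    static inline unsigned long gfn_x(gfn_t g) { return g.gfn; }
    static inline gfn_t gfn_add(gfn_t g, unsigned long i) { return _gfn(gfn_x(g) + i); }
    static inline gfn_t gfn_max(gfn_t a, gfn_t b) { return gfn_x(a) > gfn_x(b) ? a : b; }
    static inline gfn_t gfn_min(gfn_t a, gfn_t b) { return gfn_x(a) < gfn_x(b) ? a : b; }

    int main(void)
    {
        /* Made-up stand-ins for p2m->lowest_mapped_gfn / max_mapped_gfn. */
        gfn_t lowest = _gfn(0x2000), highest = _gfn(0x80000);

        gfn_t start = _gfn(0x1000);
        gfn_t end = gfn_add(start, 0x100000); /* exclusive end = start + nr */

        /* Clamp to the mapped window, as the reworked p2m_cache_flush does. */
        start = gfn_max(start, lowest);
        end = gfn_min(end, highest);

        /* unsigned long raw = start;  -- would be a compile error. */
        printf("flush GFNs %#lx-%#lx\n", gfn_x(start), gfn_x(end));
        return 0;
    }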

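The 28-bit claim in the commit message can likewise be checked with a few
lines of arithmetic; PAGE_SHIFT and GUEST_PA_BITS below simply restate the
usual ARM values (4 KiB pages, 40-bit guest-physical addresses) for the
purpose of the check:

    #include <assert.h>
    #include <stdint.h>

    #define PAGE_SHIFT     12   /* 4 KiB pages */
    #define GUEST_PA_BITS  40   /* 40-bit guest-physical addresses */

    int main(void)
    {
        uint64_t max_addr = (UINT64_C(1) << GUEST_PA_BITS) - 1;
        uint64_t max_gfn  = max_addr >> PAGE_SHIFT;

        /* 40 - 12 = 28 bits suffice for any GFN, so truncating to a
         * 32-bit unsigned long on ARM32 cannot lose information. */
        assert(max_gfn < (UINT64_C(1) << (GUEST_PA_BITS - PAGE_SHIFT)));
        assert(max_gfn <= UINT32_MAX);
        return 0;
    }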