
Re: [Xen-devel] [RFC PATCH 3/7] VT-d: Add iommu_lookup_page support



On Wed, Feb 10, 2016 at 10:10:31AM +0000, Malcolm Crossley wrote:
> Function does not need to handle shared EPT use of IOMMU as core code
> already handles this.

Could you mention which part of 'core code' handles this?

Also you may want to mention that this code can only deal with 4K pages,
not with larger (superpage) mappings.

> 
> Signed-off-by: Malcolm Crossley <malcolm.crossley@xxxxxxxxxx>
> --
> Cc: kevin.tian@xxxxxxxxx
> Cc: feng.wu@xxxxxxxxx
> Cc: xen-devel@xxxxxxxxxxxxx
> ---
>  xen/drivers/passthrough/vtd/iommu.c | 31 +++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/vtd/iommu.h |  1 +
>  2 files changed, 32 insertions(+)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
> b/xen/drivers/passthrough/vtd/iommu.c
> index ec31c6b..0c79b48 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1754,6 +1754,36 @@ static int intel_iommu_unmap_page(struct domain *d, 
> unsigned long gfn)
>      return 0;
>  }
>  
> +static int intel_iommu_lookup_page(
> +    struct domain *d, unsigned long gfn, unsigned long *mfn)
> +{
> +    struct hvm_iommu *hd = domain_hvm_iommu(d);
> +    struct dma_pte *page = NULL, *pte = NULL, old;
> +    u64 pg_maddr;
> +
> +    spin_lock(&hd->arch.mapping_lock);
> +
> +    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
> +    if ( pg_maddr == 0 )
> +    {
> +        spin_unlock(&hd->arch.mapping_lock);
> +        return -ENOMEM;
> +    }
> +    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> +    pte = page + (gfn & LEVEL_MASK);
> +    old = *pte;
> +    if (!dma_pte_present(old)) {
> +        unmap_vtd_domain_page(page);
> +        spin_unlock(&hd->arch.mapping_lock);
> +        return -ENOMEM;
> +    }
> +    unmap_vtd_domain_page(page);
> +    spin_unlock(&hd->arch.mapping_lock);

All this code looks very close to the lookup logic in dma_pte_clear_one and
intel_iommu_map_page.

Could you move most of this lookup code into a static function that your
code and the other routines could call?
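Something along these lines perhaps (completely untested sketch, reusing the
helpers your patch already calls; the name intel_iommu_lookup_pte, the
'alloc' parameter and the exact signature are only a suggestion):

```c
/* Untested sketch: common leaf-PTE lookup shared by the map/unmap/lookup
 * paths. Caller must hold hd->arch.mapping_lock. Returns 0 and fills *val
 * with the (present) PTE on success. */
static int intel_iommu_lookup_pte(struct domain *d, unsigned long gfn,
                                  int alloc, struct dma_pte *val)
{
    struct hvm_iommu *hd = domain_hvm_iommu(d);
    struct dma_pte *page, *pte;
    u64 pg_maddr;

    ASSERT(spin_is_locked(&hd->arch.mapping_lock));

    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, alloc);
    if ( pg_maddr == 0 )
        return -ENOMEM;

    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
    pte = page + (gfn & LEVEL_MASK);
    *val = *pte;
    unmap_vtd_domain_page(page);

    return dma_pte_present(*val) ? 0 : -ENOMEM;
}
```

Then intel_iommu_lookup_page would shrink to taking the lock, calling this,
and extracting the address.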

> +
> +    *mfn = dma_get_pte_addr(old) >> PAGE_SHIFT_4K;
> +    return 0;
> +}
> +
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
>                       int order, int present)
>  {
> @@ -2534,6 +2564,7 @@ const struct iommu_ops intel_iommu_ops = {
>      .teardown = iommu_domain_teardown,
>      .map_page = intel_iommu_map_page,
>      .unmap_page = intel_iommu_unmap_page,
> +    .lookup_page = intel_iommu_lookup_page,
>      .free_page_table = iommu_free_page_table,
>      .reassign_device = reassign_device_ownership,
>      .get_device_group_id = intel_iommu_group_id,
> diff --git a/xen/drivers/passthrough/vtd/iommu.h 
> b/xen/drivers/passthrough/vtd/iommu.h
> index c55ee08..03583ef 100644
> --- a/xen/drivers/passthrough/vtd/iommu.h
> +++ b/xen/drivers/passthrough/vtd/iommu.h
> @@ -275,6 +275,7 @@ struct dma_pte {
>  #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
>  #define dma_set_pte_addr(p, addr) do {\
>              (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
> +#define dma_get_pte_addr(p) (((p).val & PAGE_MASK_4K))
>  #define dma_pte_present(p) (((p).val & DMA_PTE_PROT) != 0)
>  #define dma_pte_superpage(p) (((p).val & DMA_PTE_SP) != 0)
>  
> -- 
> 1.7.12.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel



 

