
RE: [PATCH v3 03/23] VT-d: limit page table population in domain_pgd_maddr()


  • To: "Beulich, Jan" <JBeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Sun, 30 Jan 2022 03:22:27 +0000
  • Accept-language: en-US
  • Cc: "Cooper, Andrew" <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Pau Monné, Roger <roger.pau@xxxxxxxxxx>
  • Delivery-date: Sun, 30 Jan 2022 03:22:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYBj5sbc5z/CaOZk2avXhw1+QTnqx7BJ4g
  • Thread-topic: [PATCH v3 03/23] VT-d: limit page table population in domain_pgd_maddr()

> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Tuesday, January 11, 2022 12:23 AM
> 
> I have to admit that I never understood why domain_pgd_maddr() wants to
> populate all page table levels for DFN 0. I can only assume that, despite
> the comment there, what is needed is population just down to the smallest
> possible nr_pt_levels that the loop later in the function may need to
> run to. Hence what is needed is the minimum of all possible
> iommu->nr_pt_levels, to then be passed into addr_to_dma_page_maddr()
> instead of the literal 1.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>

> ---
> v3: New.
> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -55,6 +55,7 @@ bool __read_mostly iommu_snoop = true;
>  #endif
> 
>  static unsigned int __read_mostly nr_iommus;
> +static unsigned int __read_mostly min_pt_levels = UINT_MAX;
> 
>  static struct iommu_ops vtd_ops;
>  static struct tasklet vtd_fault_tasklet;
> @@ -482,8 +483,11 @@ static uint64_t domain_pgd_maddr(struct
>      {
>          if ( !hd->arch.vtd.pgd_maddr )
>          {
> -            /* Ensure we have pagetables allocated down to leaf PTE. */
> -            addr_to_dma_page_maddr(d, 0, 1, NULL, true);
> +            /*
> +             * Ensure we have pagetables allocated down to the smallest
> +             * level the loop below may need to run to.
> +             */
> +            addr_to_dma_page_maddr(d, 0, min_pt_levels, NULL, true);
> 
>              if ( !hd->arch.vtd.pgd_maddr )
>                  return 0;
> @@ -1381,6 +1385,8 @@ int __init iommu_alloc(struct acpi_drhd_
>          return -ENODEV;
>      }
>      iommu->nr_pt_levels = agaw_to_level(agaw);
> +    if ( min_pt_levels > iommu->nr_pt_levels )
> +        min_pt_levels = iommu->nr_pt_levels;
> 
>      if ( !ecap_coherent(iommu->ecap) )
>          vtd_ops.sync_cache = sync_cache;


 

