
Re: [PATCH v4 05/21] IOMMU/x86: restrict IO-APIC mappings for PV Dom0


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 3 May 2022 16:50:47 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Tue, 03 May 2022 14:50:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.05.2022 15:00, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:34:23AM +0200, Jan Beulich wrote:
>> While this is already the case for PVH, there's no reason to treat PV
>> differently here, even though the addresses are taken from a different
>> source in that case. The one difference is that, to match the CPU-side
>> mappings, we permit r/o ones by default. This also means we now deal
>> consistently with IO-APICs whose MMIO is or is not covered by E820
>> reserved regions.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> [integrated] v1: Integrate into series.
>> [standalone] v2: Keep IOMMU mappings in sync with CPU ones.
>>
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -275,12 +275,12 @@ void iommu_identity_map_teardown(struct
>>      }
>>  }
>>  
>> -static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
>> -                                         unsigned long pfn,
>> -                                         unsigned long max_pfn)
>> +static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
>> +                                                 unsigned long pfn,
>> +                                                 unsigned long max_pfn)
>>  {
>>      mfn_t mfn = _mfn(pfn);
>> -    unsigned int i, type;
>> +    unsigned int i, type, perms = IOMMUF_readable | IOMMUF_writable;
>>  
>>      /*
>>       * Set up 1:1 mapping for dom0. Default to include only conventional RAM
>> @@ -289,44 +289,60 @@ static bool __hwdom_init hwdom_iommu_map
>>       * that fall in unusable ranges for PV Dom0.
>>       */
>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>> -        return false;
>> +        return 0;
>>  
>>      switch ( type = page_get_ram_type(mfn) )
>>      {
>>      case RAM_TYPE_UNUSABLE:
>> -        return false;
>> +        return 0;
>>  
>>      case RAM_TYPE_CONVENTIONAL:
>>          if ( iommu_hwdom_strict )
>> -            return false;
>> +            return 0;
>>          break;
>>  
>>      default:
>>          if ( type & RAM_TYPE_RESERVED )
>>          {
>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>> -                return false;
>> +                perms = 0;
>>          }
>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>> -            return false;
>> +        else if ( is_hvm_domain(d) )
>> +            return 0;
>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>> +            perms = 0;
>>      }
>>  
>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>> -        return false;
>> +        return 0;
>>      /* ... or the IO-APIC */
>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> -            return false;
>> +    if ( has_vioapic(d) )
>> +    {
>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> +                return 0;
>> +    }
>> +    else if ( is_pv_domain(d) )
>> +    {
>> +        /*
>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>> +         * ones there, so it should also have such established for IOMMUs.
>> +         */
>> +        for ( i = 0; i < nr_ioapics; i++ )
>> +            if ( pfn == PFN_DOWN(mp_ioapics[i].mpc_apicaddr) )
>> +                return rangeset_contains_singleton(mmio_ro_ranges, pfn)
>> +                       ? IOMMUF_readable : 0;
> 
> If we really are after consistency with CPU side mappings, we should
> likely take the whole contents of mmio_ro_ranges and d->iomem_caps
> into account, not just the pages belonging to the IO-APIC?
> 
> There could also be HPET pages mapped as RO for PV.

Hmm. This would be an even bigger functional change, but it would indeed
further improve consistency. But shouldn't we then also establish r/w
mappings for anything in ->iomem_caps but not in mmio_ro_ranges? That
would feel like going too far ...

Jan
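
For illustration, a minimal sketch (not part of the posted patch) of the
more general approach discussed above: the tail of hwdom_iommu_map() would
consult mmio_ro_ranges and Dom0's ->iomem_caps directly, via the usual
iomem_access_permitted() check, instead of only the IO-APIC addresses. It
deliberately does not widen permissions for pages present in ->iomem_caps
but absent from mmio_ro_ranges, which is exactly the point questioned in
the reply above:

    /*
     * Sketch: replace the IO-APIC special case with generic checks
     * against the CPU-side tracking structures.
     */

    /* Pages the CPU side maps read-only get a read-only IOMMU mapping. */
    if ( rangeset_contains_singleton(mmio_ro_ranges, pfn) )
        return IOMMUF_readable;

    /* Pages Dom0 may not access at all get no IOMMU mapping either. */
    if ( !iomem_access_permitted(d, pfn, pfn) )
        return 0;

    /* Otherwise fall back to the permissions computed earlier. */
    return perms;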
