
Re: [PATCH v4 16/21] VT-d: free all-empty page tables


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 27 May 2022 09:40:56 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>
  • Delivery-date: Fri, 27 May 2022 07:41:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.05.2022 13:13, Roger Pau Monné wrote:
> On Wed, May 18, 2022 at 12:26:03PM +0200, Jan Beulich wrote:
>> On 10.05.2022 16:30, Roger Pau Monné wrote:
>>> On Mon, Apr 25, 2022 at 10:42:50AM +0200, Jan Beulich wrote:
>>>> @@ -837,9 +843,31 @@ static int dma_pte_clear_one(struct doma
>>>>  
>>>>      old = *pte;
>>>>      dma_clear_pte(*pte);
>>>> +    iommu_sync_cache(pte, sizeof(*pte));
>>>> +
>>>> +    while ( pt_update_contig_markers(&page->val,
>>>> +                                     address_level_offset(addr, level),
>>>> +                                     level, PTE_kind_null) &&
>>>> +            ++level < min_pt_levels )
>>>> +    {
>>>> +        struct page_info *pg = maddr_to_page(pg_maddr);
>>>> +
>>>> +        unmap_vtd_domain_page(page);
>>>> +
>>>> +        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
>>>> +                                          false);
>>>> +        BUG_ON(pg_maddr < PAGE_SIZE);
>>>> +
>>>> +        page = map_vtd_domain_page(pg_maddr);
>>>> +        pte = &page[address_level_offset(addr, level)];
>>>> +        dma_clear_pte(*pte);
>>>> +        iommu_sync_cache(pte, sizeof(*pte));
>>>> +
>>>> +        *flush_flags |= IOMMU_FLUSHF_all;
>>>> +        iommu_queue_free_pgtable(hd, pg);
>>>> +    }
>>>
>>> I think I'm setting myself up for trouble, but do we need to sync the
>>> cache for the lower level entries if higher level ones are to be
>>> changed?
>>>
>>> IOW, would it be fine to just flush the highest level modified PTE,
>>> as the lower level ones won't be reachable anyway?
>>
>> I definitely want to err on the safe side here. If later we can
>> prove that some cache flush is unneeded, I'd be happy to see it
>> go away.
> 
> Hm, so it's not only about adding more cache flushes, but also about
> moving them inside of the locked region: previously the only cache
> flush was done outside of the locked region.
> 
> I guess I can't convince myself why we would need to flush the cache
> for entries that are about to be removed, and that also point to pages
> scheduled to be freed.

As previously said - with a series like this I wanted to stay strictly
on the safe side, maintaining the pre-existing pattern of every
modification of a live table being accompanied by a flush (if flushes
are needed in the first place, of course). As to moving flushes into
the locked region, I don't view this as a problem, seeing in
particular that elsewhere we already have flushes with the lock held
(at the very least the _full page_ one in addr_to_dma_page_maddr(),
but also e.g. in intel_iommu_map_page(), where the flush could easily
be moved past the unlock).

If you (continue to) think that breaking the present pattern isn't
going to misguide future changes, I can certainly drop these not
really necessary flushes. Otoh I was actually considering, as a
subsequent step, to integrate the flushes into e.g. dma_clear_pte(),
to make it virtually impossible to break that pattern. This would
imply that all page table related flushes would then occur with the
lock held.
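
For illustration, a minimal sketch of what such an integration might
look like - the helper name is hypothetical, dma_clear_pte() is (at
least currently) a simple macro clearing the PTE value, and
iommu_sync_cache() is the existing VT-d flush helper used in the hunk
above:

    /*
     * Hypothetical variant of dma_clear_pte() folding in the cache
     * flush, so clearing a live PTE can never become separated from
     * the accompanying sync. Sketch only - an actual integration
     * would need to cover the dma_set_pte_*() style updates as well.
     */
    static inline void dma_clear_pte_sync(struct dma_pte *pte)
    {
        pte->val = 0;                        /* clear the live entry */
        iommu_sync_cache(pte, sizeof(*pte)); /* no-op when coherent */
    }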

(I won't separately reply to the similar topic on patch 18.)

>>>> @@ -2182,8 +2210,21 @@ static int __must_check cf_check intel_i
>>>>      }
>>>>  
>>>>      *pte = new;
>>>> -
>>>>      iommu_sync_cache(pte, sizeof(struct dma_pte));
>>>> +
>>>> +    /*
>>>> +     * While the (ab)use of PTE_kind_table here allows to save some work in
>>>> +     * the function, the main motivation for it is that it avoids a so far
>>>> +     * unexplained hang during boot (while preparing Dom0) on a Westmere
>>>> +     * based laptop.
>>>> +     */
>>>> +    pt_update_contig_markers(&page->val,
>>>> +                             address_level_offset(dfn_to_daddr(dfn), level),
>>>> +                             level,
>>>> +                             (hd->platform_ops->page_sizes &
>>>> +                              (1UL << level_to_offset_bits(level + 1))
>>>> +                              ? PTE_kind_leaf : PTE_kind_table));
>>>
>>> So this works because, on the models we believe to be affected, the
>>> only supported page size is 4K?
>>
>> Yes.
>>
>>> Do we want to do the same with AMD if we don't allow 512G super pages?
>>
>> Why? They don't have a similar flaw.
> 
> So the question was mostly whether we should also avoid the
> pt_update_contig_markers() call for 1G entries, because we won't
> coalesce them into a 512G one anyway.  IOW avoid the overhead of
> updating the contig markers if we know that the resulting super-page
> is not supported by ->page_sizes.

As the comment says, I consider this at least partly an abuse of
PTE_kind_table, so I'm wary of extending this to AMD. But if you
continue to think it's worth it, I could certainly do so there as
well.
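
If it were to be done there as well, a rough sketch of the guard might
look like the following (field and helper names are simply carried
over from the VT-d hunk above, and are assumptions as far as the AMD
code goes):

    /*
     * Sketch of the suggestion: skip the contiguity bookkeeping
     * altogether when the next level's superpage size isn't supported,
     * as the entries could then never be coalesced anyway.
     */
    if ( hd->platform_ops->page_sizes &
         (1UL << level_to_offset_bits(level + 1)) )
        pt_update_contig_markers(&page->val,
                                 address_level_offset(dfn_to_daddr(dfn),
                                                      level),
                                 level, PTE_kind_leaf);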

Jan