
[PATCH] xen/mm: move adjustment of claimed pages counters on allocation


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Tue, 23 Dec 2025 09:15:07 +0100
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Tue, 23 Dec 2025 08:15:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

The current logic splits the update of the amount of available memory in
the system (total_avail_pages) and the update of the pending claims into
two separately locked regions.  This leaves a window between the two
counter adjustments where total_avail_pages - outstanding_claims doesn't
reflect the real amount of free memory available, and can even yield a
negative value, because total_avail_pages is updated ahead of
outstanding_claims.

Fix by adjusting outstanding_claims and d->outstanding_pages in the same
locked region where total_avail_pages is updated.  This can possibly lead
to the pages failing to be assigned to the domain later, after they have
already been subtracted from the claimed amount.  Ultimately this would
result in a domain losing part of its claim, but that's better than the
current skew between total_avail_pages and outstanding_claims.

Fixes: 65c9792df600 ("mmu: Introduce XENMEM_claim_pages (subop of memory ops)")
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Arguably we could also get rid of domain_adjust_tot_pages() given what it
now does, which would amount to a revert of:

1c3b9dd61dab xen: centralize accounting for domain tot_pages

Opinions?  Should that be done in a separate commit, possibly as a plain
revert?  Or maybe it's worth keeping the helper, since it's already there,
in case we need to add more logic to it later.
---
 xen/common/page_alloc.c | 44 +++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 1f67b88a8933..f550b1219f87 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -515,30 +515,6 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
     ASSERT(rspin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
-    /*
-     * can test d->outstanding_pages race-free because it can only change
-     * if d->page_alloc_lock and heap_lock are both held, see also
-     * domain_set_outstanding_pages below
-     */
-    if ( !d->outstanding_pages || pages <= 0 )
-        goto out;
-
-    spin_lock(&heap_lock);
-    BUG_ON(outstanding_claims < d->outstanding_pages);
-    if ( d->outstanding_pages < pages )
-    {
-        /* `pages` exceeds the domain's outstanding count. Zero it out. */
-        outstanding_claims -= d->outstanding_pages;
-        d->outstanding_pages = 0;
-    }
-    else
-    {
-        outstanding_claims -= pages;
-        d->outstanding_pages -= pages;
-    }
-    spin_unlock(&heap_lock);
-
-out:
     return d->tot_pages;
 }
 
@@ -1071,6 +1047,26 @@ static struct page_info *alloc_heap_pages(
     total_avail_pages -= request;
     ASSERT(total_avail_pages >= 0);
 
+    if ( d && d->outstanding_pages && !(memflags & MEMF_no_refcount) )
+    {
+        /*
+         * Adjust claims in the same locked region where total_avail_pages is
+         * adjusted; doing otherwise would open a window where the amount of
+         * free memory (avail - claimed) is incorrect.
+         *
+         * Note that by adjusting the claimed amount here it's possible for
+         * pages to fail to be assigned to the claiming domain while already
+         * having been subtracted from d->outstanding_pages.  Such claimed
+         * amount is then lost, as the pages that fail to be assigned to the
+         * domain are freed without replenishing the claim.
+         */
+        unsigned long outstanding = min(d->outstanding_pages + 0UL, request);
+
+        BUG_ON(outstanding > outstanding_claims);
+        outstanding_claims -= outstanding;
+        d->outstanding_pages -= outstanding;
+    }
+
     check_low_mem_virq();
 
     if ( d != NULL )
-- 
2.51.0




 

