
[PATCH v2] x86/PoD: move increment of entry count


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 4 Jan 2022 11:57:43 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Tue, 04 Jan 2022 10:58:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

When the PoD lock is not held across the entire region covering both the
P2M update and the stats update, the entry count should err toward being
too large rather than too small, so that functions checking for a zero
count don't bail early while PoD entries still exist. Hence increments
should happen ahead of P2M updates, while decrements should happen only
afterwards. Deal with the one place where this hasn't been the case yet.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Add comments.
---
While it might be possible to hold the PoD lock over the entire
operation, I didn't want to chance introducing a lock order violation on
a perhaps rarely taken code path.
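For illustration, the over-count-on-failure pattern can be sketched in
isolation. The names below (struct pod_stats, mark_pod) are illustrative
stand-ins rather than the actual Xen types; a pthread mutex models
pod_lock()/pod_unlock(), and a boolean parameter models the success or
failure of p2m_set_entry():

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for the PoD statistics; not the real Xen type. */
struct pod_stats {
    pthread_mutex_t lock;
    long entry_count;
};

/*
 * Bump the entry count up front, assuming the (fallible) mapping update
 * will succeed, and undo the bump only upon failure.  Code elsewhere that
 * checks for a zero count therefore never observes zero while entries
 * still exist, at the cost of transiently over-counting.
 */
static bool mark_pod(struct pod_stats *pod, unsigned int order,
                     long pod_count, bool set_entry_succeeds)
{
    pthread_mutex_lock(&pod->lock);
    pod->entry_count += (1L << order) - pod_count;
    pthread_mutex_unlock(&pod->lock);

    if ( set_entry_succeeds )     /* models p2m_set_entry() == 0 */
        return true;

    /* Mirror the patch: the undo path applies when order == 0 and no
     * entries were previously counted, so exactly 1 was added above. */
    if ( !order && !pod_count )
    {
        pthread_mutex_lock(&pod->lock);
        assert(pod->entry_count > 0);
        --pod->entry_count;
        pthread_mutex_unlock(&pod->lock);
    }
    return false;
}
```

The failure path intentionally leaves the count too large in the order
!= 0 case; in the actual patch that path crashes the domain anyway, so no
precise unwinding is attempted.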

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1342,19 +1342,22 @@ mark_populate_on_demand(struct domain *d
         }
     }
 
+    /*
+     * Without holding the PoD lock across the entire operation, bump the
+     * entry count up front assuming success of p2m_set_entry(), undoing the
+     * bump as necessary upon failure.  Bumping only upon success would risk
+     * code elsewhere observing entry count being zero despite there actually
+     * still being PoD entries.
+     */
+    pod_lock(p2m);
+    p2m->pod.entry_count += (1UL << order) - pod_count;
+    pod_unlock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
-    {
-        pod_lock(p2m);
-        p2m->pod.entry_count += 1UL << order;
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         ioreq_request_mapcache_invalidate(d);
-    }
     else if ( order )
     {
         /*
@@ -1366,6 +1369,14 @@ mark_populate_on_demand(struct domain *d
                d, gfn_l, order, rc);
         domain_crash(d);
     }
+    else if ( !pod_count )
+    {
+        /* Undo earlier increment; see comment above. */
+        pod_lock(p2m);
+        BUG_ON(!p2m->pod.entry_count);
+        --p2m->pod.entry_count;
+        pod_unlock(p2m);
+    }
 
 out:
     gfn_unlock(p2m, gfn, order);




 

