
[PATCH 2/2] x86/PoD: move increment of entry count


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 1 Dec 2021 12:02:02 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Wed, 01 Dec 2021 11:02:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

When the PoD lock is not held across the entire region covering both the
P2M update and the stats update, the entry count should err on the side
of being too large rather than too small, so that functions don't bail
early upon finding a zero count. Hence increments should happen ahead of
P2M updates, while decrements should happen only afterwards. Deal with
the one place where this wasn't yet the case.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1345,19 +1345,15 @@ mark_populate_on_demand(struct domain *d
         }
     }
 
+    pod_lock(p2m);
+    p2m->pod.entry_count += (1UL << order) - pod_count;
+    pod_unlock(p2m);
+
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, INVALID_MFN, order,
                        p2m_populate_on_demand, p2m->default_access);
     if ( rc == 0 )
-    {
-        pod_lock(p2m);
-        p2m->pod.entry_count += 1UL << order;
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         ioreq_request_mapcache_invalidate(d);
-    }
     else if ( order )
     {
         /*
@@ -1369,6 +1365,13 @@ mark_populate_on_demand(struct domain *d
                d, gfn_l, order, rc);
         domain_crash(d);
     }
+    else if ( !pod_count )
+    {
+        pod_lock(p2m);
+        BUG_ON(!p2m->pod.entry_count);
+        --p2m->pod.entry_count;
+        pod_unlock(p2m);
+    }
 
 out:
     gfn_unlock(p2m, gfn, order);
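
For illustration only (not part of the patch): a minimal, self-contained
C sketch of the ordering rule the description states -- increment the
count under the lock before the state update, and decrement under the
lock only after a failed update. All names here (pod_lock_stub,
entry_count, set_entry_stub, mark_pod_stub) are hypothetical stand-ins
for the Xen internals, with pthreads standing in for Xen's pod_lock().

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t pod_lock_stub = PTHREAD_MUTEX_INITIALIZER;
static long entry_count;              /* stands in for p2m->pod.entry_count */

/* Stand-in for p2m_set_entry(); returns 0 on success, nonzero on error. */
static int set_entry_stub(void)
{
    return 0;
}

static void mark_pod_stub(unsigned int order, unsigned long pod_count)
{
    /*
     * Increment first: a concurrent reader taking the lock between the
     * two critical sections may briefly see a count that is too large,
     * which is the harmless direction -- it will not bail early on a
     * spurious zero.
     */
    pthread_mutex_lock(&pod_lock_stub);
    entry_count += (1UL << order) - pod_count;
    pthread_mutex_unlock(&pod_lock_stub);

    if ( set_entry_stub() != 0 && order == 0 && !pod_count )
    {
        /*
         * Decrement only after the update has failed, undoing the
         * optimistic increment (order-0 case only, mirroring the patch;
         * the order != 0 failure path crashes the domain instead).
         */
        pthread_mutex_lock(&pod_lock_stub);
        assert(entry_count > 0);
        --entry_count;
        pthread_mutex_unlock(&pod_lock_stub);
    }
}

The design point: with this ordering, any observer can only ever see an
overestimate of the count, which is benign for the "bail early when the
count is zero" checks the description mentions.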