
[PATCH RFC v2 3/3] x86/altp2m: p2m_altp2m_propagate_change() should honor present page order


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 4 Jan 2022 10:49:36 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 04 Jan 2022 09:49:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

For higher order mappings the comparison against p2m->min_remapped_gfn
needs to take the upper bound of the covered GFN range into account, not
just the base GFN.

Otherwise, i.e. when dropping a higher order mapping whose base GFN lies
below the remapped range but whose upper portion reaches into it, the
necessary reset of the affected altp2m would wrongly be skipped.

Note that there's no need to call get_gfn_type_access() ahead of the
check against the remapped range boundaries: None of its outputs are
needed earlier. Local variables are moved into the narrower scope to
demonstrate this.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I may be entirely wrong and hence that part of the change may also be
wrong, but I'm having trouble seeing why the original
"!mfn_eq(m, INVALID_MFN)" wasn't "mfn_eq(m, INVALID_MFN)". Isn't the
goal there to pre-fill entries that were previously invalid, instead of
undoing prior intentional divergence from the host P2M? (I have
intentionally not reflected this aspect in the description yet; I can't
really write a description of this without understanding what's going on
in case the original code was correct.)

When cur_order is below the passed-in page_order, the p2m_set_entry() of
course does not cover the full range. This could be put in a loop, but
locking looks to be a little problematic: If a set of lower order pages
gets re-combined into a large one while we have already progressed into
the range, we'd need to carefully back off. Alternatively the full
incoming GFN range could be held locked for the entire loop; this would
likely mean relying on gfn_lock() actually resolving to p2m_lock(). But
perhaps that's not a big problem, considering that core functions like
p2m_get_gfn_type_access() or __put_gfn() assume so, too (since they don't
take the order into account at all)?

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2532,9 +2532,6 @@ int p2m_altp2m_propagate_change(struct d
                                 p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct p2m_domain *p2m;
-    p2m_access_t a;
-    p2m_type_t t;
-    mfn_t m;
     unsigned int i;
     unsigned int reset_count = 0;
     unsigned int last_reset_idx = ~0;
@@ -2551,12 +2548,30 @@ int p2m_altp2m_propagate_change(struct d
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
-        m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
+        if ( !mfn_eq(mfn, INVALID_MFN) )
+        {
+            p2m_access_t a;
+            p2m_type_t t;
+            unsigned int cur_order;
+
+            if ( mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
+                                            &cur_order),
+                        INVALID_MFN) )
+            {
+                int rc = p2m_set_entry(p2m, gfn, mfn, min(cur_order, page_order),
+                                       p2mt, p2ma);
+
+                /* Best effort: Don't bail on error. */
+                if ( !ret )
+                    ret = rc;
+            }
+
+            __put_gfn(p2m, gfn_x(gfn));
+        }
         /* Check for a dropped page that may impact this altp2m */
-        if ( mfn_eq(mfn, INVALID_MFN) &&
-             gfn_x(gfn) >= p2m->min_remapped_gfn &&
-             gfn_x(gfn) <= p2m->max_remapped_gfn )
+        else if ( gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
+                  gfn_x(gfn) <= p2m->max_remapped_gfn )
         {
             if ( !reset_count++ )
             {
@@ -2566,8 +2581,6 @@ int p2m_altp2m_propagate_change(struct d
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                __put_gfn(p2m, gfn_x(gfn));
-
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
@@ -2581,16 +2594,6 @@ int p2m_altp2m_propagate_change(struct d
                 break;
             }
         }
-        else if ( !mfn_eq(m, INVALID_MFN) )
-        {
-            int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
-
-            /* Best effort: Don't bail on error. */
-            if ( !ret )
-                ret = rc;
-        }
-
-        __put_gfn(p2m, gfn_x(gfn));
     }
 
     altp2m_list_unlock(d);
