
[PATCH] x86/mm: Add an early PGT_validated exit in _get_page_type()


  • To: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 17 Jun 2022 11:47:39 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  • Delivery-date: Fri, 17 Jun 2022 10:48:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

This is a continuation of the cleanup and commenting in:
  9186e96b199e ("x86/pv: Clean up _get_page_type()")
  8cc5036bc385 ("x86/pv: Fix ABAC cmpxchg() race in _get_page_type()")

With the re-arranged and newly commented logic, it's far clearer that the
second half of _get_page_type() only has work to do for page validation.

Introduce an early exit for PGT_validated.  This makes the fastpath marginally
faster, and simplifies the subsequent logic as it no longer needs to exclude
the fully validated case.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Wei Liu <wl@xxxxxxx>
CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

Not that it's relevant, but bloat-o-meter says:

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-300 (-300)
  Function                                     old     new   delta
  _get_page_type                              6618    6318    -300

which is more impressive than I was expecting.
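For readers skimming the diff, here is a minimal standalone sketch of the
restructured tail of _get_page_type().  It is not Xen code: the PGT_* bit
positions, the finish_get_page_type() name and the is_writable parameter are
illustrative stand-ins for the real definitions in xen/arch/x86/mm.c, chosen
only to show where the new early exit sits relative to the remaining
validation-only work.

/*
 * Simplified sketch (not Xen code) of the restructured tail of
 * _get_page_type().  Flag values and names are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PGT_validated     (1u << 0)   /* illustrative bit positions */
#define PGT_partial       (1u << 1)
#define PGT_non_coherent  (1u << 2)

static int finish_get_page_type(uint32_t nx, uint32_t x, bool is_writable)
{
    /* New early exit: a fully validated page needs no further work. */
    if ( nx & PGT_validated )
        return 0;

    /*
     * From here on the page is "validate locked": this caller has exclusive
     * access until PGT_validated or PGT_partial gets set.
     */
    if ( (nx & PGT_non_coherent) && !is_writable )
        printf("would flush the cache before auditing contents\n");

    if ( is_writable )
        printf("no audit needed; would set PGT_validated\n");
    else if ( !(x & PGT_partial) )
        printf("fresh validation; would reset per-page validation state\n");

    return 0;
}

int main(void)
{
    finish_get_page_type(PGT_validated, 0, false);     /* fast path */
    finish_get_page_type(PGT_non_coherent, 0, false);  /* needs audit */
    return 0;
}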
---
 xen/arch/x86/mm.c | 70 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ac74ae389c99..57751d2ed70f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3002,6 +3002,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
      * fully validated (PGT_[type] | PGT_validated | >0).
      */
 
+    /* If the page is fully validated, we're done. */
+    if ( likely(nx & PGT_validated) )
+        return 0;
+
+    /*
+     * The page is in the "validate locked" state.  We have exclusive access,
+     * and any concurrent callers are waiting in the cmpxchg() loop above.
+     *
+     * Exclusive access ends when PGT_validated or PGT_partial get set.
+     */
+
     if ( unlikely((x & PGT_count_mask) == 0) )
     {
         struct domain *d = page_get_owner(page);
@@ -3071,43 +3082,40 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         }
     }
 
-    if ( unlikely(!(nx & PGT_validated)) )
+    /*
+     * Flush the cache if there were previously non-coherent mappings of
+     * this page, and we're trying to use it as anything other than a
+     * writeable page.  This forces the page to be coherent before we
+     * validate its contents for safety.
+     */
+    if ( (nx & PGT_non_coherent) && type != PGT_writable_page )
     {
-        /*
-         * Flush the cache if there were previously non-coherent mappings of
-         * this page, and we're trying to use it as anything other than a
-         * writeable page.  This forces the page to be coherent before we
-         * validate its contents for safety.
-         */
-        if ( (nx & PGT_non_coherent) && type != PGT_writable_page )
-        {
-            void *addr = __map_domain_page(page);
+        void *addr = __map_domain_page(page);
 
-            cache_flush(addr, PAGE_SIZE);
-            unmap_domain_page(addr);
+        cache_flush(addr, PAGE_SIZE);
+        unmap_domain_page(addr);
 
-            page->u.inuse.type_info &= ~PGT_non_coherent;
-        }
+        page->u.inuse.type_info &= ~PGT_non_coherent;
+    }
 
-        /*
-         * No special validation needed for writable or shared pages.  Page
-         * tables and GDT/LDT need to have their contents audited.
-         *
-         * per validate_page(), non-atomic updates are fine here.
-         */
-        if ( type == PGT_writable_page || type == PGT_shared_page )
-            page->u.inuse.type_info |= PGT_validated;
-        else
+    /*
+     * No special validation needed for writable or shared pages.  Page
+     * tables and GDT/LDT need to have their contents audited.
+     *
+     * per validate_page(), non-atomic updates are fine here.
+     */
+    if ( type == PGT_writable_page || type == PGT_shared_page )
+        page->u.inuse.type_info |= PGT_validated;
+    else
+    {
+        if ( !(x & PGT_partial) )
         {
-            if ( !(x & PGT_partial) )
-            {
-                page->nr_validated_ptes = 0;
-                page->partial_flags = 0;
-                page->linear_pt_count = 0;
-            }
-
-            rc = validate_page(page, type, preemptible);
+            page->nr_validated_ptes = 0;
+            page->partial_flags = 0;
+            page->linear_pt_count = 0;
         }
+
+        rc = validate_page(page, type, preemptible);
     }
 
  out:
-- 
2.11.0
