
[PATCH 1/2] x86/shadow: slightly consolidate sh_unshadow_for_p2m_change()


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 9 Dec 2021 12:26:46 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Thu, 09 Dec 2021 11:27:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

In preparation for reactivating the presently dead 2M page path of the
function,
- also deal with the case of replacing an L1 page table all in one go,
- pull common checks out of the switch(). This includes extending a
  _PAGE_PRESENT check to L1 as well, which presumably was deemed
  redundant with p2m_is_valid() || p2m_is_grant(), but I think we are
  better off being explicit in all cases,
- replace a p2m_is_ram() check in the 2M case with an explicit
  _PAGE_PRESENT one, to make it more obvious that the subsequent
  l1e_get_mfn() retrieves something that actually is an MFN.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -801,7 +801,7 @@ static void sh_unshadow_for_p2m_change(s
                                        l1_pgentry_t old, l1_pgentry_t new,
                                        unsigned int level)
 {
-    mfn_t omfn = l1e_get_mfn(old);
+    mfn_t omfn = l1e_get_mfn(old), nmfn;
     unsigned int oflags = l1e_get_flags(old);
     p2m_type_t p2mt = p2m_flags_to_type(oflags);
     bool flush = false;
@@ -813,19 +813,30 @@ static void sh_unshadow_for_p2m_change(s
     if ( unlikely(!d->arch.paging.shadow.total_pages) )
         return;
 
+    /* Only previously present / valid entries need processing. */
+    if ( !(oflags & _PAGE_PRESENT) ||
+         (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
+         !mfn_valid(omfn) )
+        return;
+
+    nmfn = l1e_get_flags(new) & _PAGE_PRESENT ? l1e_get_mfn(new) : INVALID_MFN;
+
     switch ( level )
     {
     default:
         /*
          * The following assertion is to make sure we don't step on 1GB host
-         * page support of HVM guest.
+         * page support of HVM guest. Plus we rely on ->set_entry() to never
+         * get called with orders above PAGE_ORDER_2M, not even to install
+         * non-present entries (which in principle ought to be fine even
+         * without respective large page support).
          */
-        ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
+        ASSERT_UNREACHABLE();
         break;
 
     /* If we're removing an MFN from the p2m, remove it from the shadows too */
     case 1:
-        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) )
+        if ( !mfn_eq(nmfn, omfn) )
         {
             sh_remove_all_shadows_and_parents(d, omfn);
             if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
@@ -839,14 +850,9 @@ static void sh_unshadow_for_p2m_change(s
      * scheme, that's OK, but otherwise they must be unshadowed.
      */
     case 2:
-        if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) )
-            break;
-
-        if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
         {
             unsigned int i;
-            mfn_t nmfn = l1e_get_mfn(new);
-            l1_pgentry_t *npte = NULL;
+            l1_pgentry_t *npte = NULL, *opte = NULL;
 
             /* If we're replacing a superpage with a normal L1 page, map it */
             if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
@@ -854,24 +860,39 @@ static void sh_unshadow_for_p2m_change(s
                  mfn_valid(nmfn) )
                 npte = map_domain_page(nmfn);
 
+            /* If we're replacing a normal L1 page, map it as well. */
+            if ( !(oflags & _PAGE_PSE) )
+                opte = map_domain_page(omfn);
+
             gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
 
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
-                if ( !npte ||
-                     !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) ||
-                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
+                if ( opte )
+                {
+                    if ( !(l1e_get_flags(opte[i]) & _PAGE_PRESENT) )
+                        continue;
+                    omfn = l1e_get_mfn(opte[i]);
+                }
+
+                if ( npte )
+                    nmfn = l1e_get_flags(npte[i]) & _PAGE_PRESENT
+                           ? l1e_get_mfn(npte[i]) : INVALID_MFN;
+
+                if ( !mfn_eq(nmfn, omfn) )
                 {
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
                     if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
                         flush = true;
                 }
+
                 omfn = mfn_add(omfn, 1);
+                nmfn = mfn_add(nmfn, 1);
             }
 
-            if ( npte )
-                unmap_domain_page(npte);
+            unmap_domain_page(opte);
+            unmap_domain_page(npte);
         }
 
         break;
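For illustration only, the consolidated early-return condition hoisted out of the switch() amounts to the predicate below. The types, flag value, and helper names here are simplified stand-ins for Xen's real definitions (p2m_type_t, mfn_t, mfn_valid() etc.), not the actual headers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's definitions, for illustration only. */
#define _PAGE_PRESENT 0x1u

typedef enum { p2mt_invalid, p2mt_ram, p2mt_grant } p2m_type_t;

static bool p2m_is_valid(p2m_type_t t) { return t == p2mt_ram; }
static bool p2m_is_grant(p2m_type_t t) { return t == p2mt_grant; }
static bool mfn_valid(uint64_t mfn) { return mfn != ~0ull; }

/*
 * Mirrors the hoisted check: only a previously present entry of a
 * valid / grant type with a valid MFN needs unshadowing work.
 */
static bool skip_unshadow(unsigned int oflags, p2m_type_t p2mt, uint64_t omfn)
{
    return !(oflags & _PAGE_PRESENT) ||
           (!p2m_is_valid(p2mt) && !p2m_is_grant(p2mt)) ||
           !mfn_valid(omfn);
}
```

Previously the L1 path relied on p2m_is_valid() || p2m_is_grant() alone; the hoisted form also checks _PAGE_PRESENT explicitly for both levels.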