
[Xen-devel] [PATCH 10/11] x86/altp2m: fix log-dirty handling.



Log-dirty, as used to track vram changes, works exclusively on the host p2m.
As a result, when running on any other p2m, vram changes aren't tracked
properly and the domain's console display is corrupted.

To fix this, log-dirty pages are never valid in the alternate p2ms, and
if the type of any page in the host p2m is changed, that page is immediately
removed from any alternate p2m in which it was previously valid.

This requires taking the alternate p2m list lock, so to avoid a locking
order violation p2m_change_type_one() must not be called with the host p2m
lock held. This requires a minor change to the exit code flow in the
nested page fault handler, and removing the p2m locking code in
paging_log_dirty_range().

As far as I can tell, removing the latter code is safe since
p2m_change_type_one() acquires a gfn lock on the page before changing it.

With these changes, the alternate p2m nested page fault handler can safely
ignore log-dirty and leave it to be handled in the host p2m nested page
fault handler.

Signed-off-by: Ed White <edmund.h.white@xxxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c   | 4 +++-
 xen/arch/x86/mm/p2m.c    | 4 ++++
 xen/arch/x86/mm/paging.c | 5 -----
 3 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index afe16bf..18d5987 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2885,6 +2885,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     /* Spurious fault? PoD and log-dirty also take this path. */
     if ( p2m_is_ram(p2mt) )
     {
+        rc = 1;
         /*
          * Page log dirty is always done with order 0. If this mfn resides in
          * a large page, we do not change other pages type within that large
@@ -2893,9 +2894,10 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
         if ( npfec.write_access )
         {
             paging_mark_dirty(v->domain, mfn_x(mfn));
+            put_gfn(p2m->domain, gfn);
             p2m_change_type_one(v->domain, gfn, p2m_ram_logdirty, p2m_ram_rw);
+            goto out;
         }
-        rc = 1;
         goto out_put_gfn;
     }
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 44bf1ad..843a433 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -793,6 +793,10 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn,
 
     gfn_unlock(p2m, gfn, 0);
 
+    if ( pt == ot && altp2mhvm_active(d) )
+        /* make sure this page isn't valid in any alternate p2m */
+        p2m_remove_altp2m_page(d, gfn);
+
     return rc;
 }
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 6b788f7..2be68ae 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -574,7 +574,6 @@ void paging_log_dirty_range(struct domain *d,
                            unsigned long nr,
                            uint8_t *dirty_bitmap)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int i;
     unsigned long pfn;
 
@@ -588,14 +587,10 @@ void paging_log_dirty_range(struct domain *d,
      * switched to read-write.
      */
 
-    p2m_lock(p2m);
-
     for ( i = 0, pfn = begin_pfn; pfn < begin_pfn + nr; i++, pfn++ )
         if ( !p2m_change_type_one(d, pfn, p2m_ram_rw, p2m_ram_logdirty) )
             dirty_bitmap[i >> 3] |= (1 << (i & 7));
 
-    p2m_unlock(p2m);
-
     flush_tlb_mask(d->domain_dirty_cpumask);
 }
 
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
