
Re: [Xen-devel] [PATCH v12 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.





On 4/28/2017 3:45 PM, Zhang, Xiong Y wrote:
I found that this patch doesn't work; the reason is explained inline. We need to propose a fix for this.
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 7e0da81..d72b7bd 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -384,15 +384,50 @@ static int dm_op(domid_t domid,

      case XEN_DMOP_map_mem_type_to_ioreq_server:
      {
-        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
+        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
              &op.u.map_mem_type_to_ioreq_server;
+        unsigned long first_gfn = data->opaque;
+
+        const_op = false;

          rc = -EOPNOTSUPP;
          if ( !hap_enabled(d) )
              break;

-        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                              data->type, data->flags);
+        if ( first_gfn == 0 )
+            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
+                                                  data->type, data->flags);
+        else
+            rc = 0;
+
+        /*
+         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
+         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
+         */
+        if ( rc == 0 && data->flags == 0 )
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            while ( read_atomic(&p2m->ioreq.entry_count) &&
+                    first_gfn <= p2m->max_mapped_pfn )
+            {
+                /* Iterate p2m table for 256 gfns each time. */
+                p2m_finish_type_change(d, _gfn(first_gfn), 256,
+                                       p2m_ioreq_server, p2m_ram_rw);
+
+                first_gfn += 256;
+
+                /* Check for continuation if it's not the last iteration. */
+                if ( first_gfn <= p2m->max_mapped_pfn &&
+                     hypercall_preempt_check() )
+                {
+                    rc = -ERESTART;
+                    data->opaque = first_gfn;
+                    break;
+                }
+            }
+        }
+
          break;
      }

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 4169d18..1d57e5c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1011,6 +1011,35 @@ void p2m_change_type_range(struct domain *d,
      p2m_unlock(p2m);
  }

+/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
+void p2m_finish_type_change(struct domain *d,
+                            gfn_t first_gfn, unsigned long max_nr,
+                            p2m_type_t ot, p2m_type_t nt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_type_t t;
+    unsigned long gfn = gfn_x(first_gfn);
+    unsigned long last_gfn = gfn + max_nr - 1;
+
+    ASSERT(ot != nt);
+    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+
+    p2m_lock(p2m);
+
+    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
+    while ( gfn <= last_gfn )
+    {
+        get_gfn_query_unlocked(d, gfn, &t);
[Zhang, Xiong Y] The previous patch, "asynchronously reset outstanding
p2m_ioreq_server entries", calls ept_change_entry_type_global(), which
sets ept_entry.recalc=1 and ept_entry.emt=MTRR_NUM_TYPES. So
get_gfn_query_unlocked(gfn) will recalculate the gfn's mem_type and return
the new mem_type, not the old one.
For a pfn whose old mem_type was p2m_ioreq_server, the returned &t is p2m_ram_rw.
Then ( t == ot ) can never be true, and p2m_change_type_one() is never called.

As a result, a guest VM using this interface cannot reboot.

The root cause is in the latest version of patch 5/6: p2m_ram_rw is returned for ioreq server pages whenever no ioreq server is mapped.
There is no such problem in version 12 and earlier.
I have sent Xiong a patch to fix this; maybe he can send out the fix patch after the XenGT tests pass.
BTW, thanks Xiong for helping find this error. :)

Thanks
Yu

[snip]

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

