
Re: [Xen-devel] [PATCH v7 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.





On 3/11/2017 12:59 AM, Andrew Cooper wrote:
On 08/03/17 15:33, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
synchronously by iterating the p2m table.

The synchronous resetting is necessary because we need to guarantee
the p2m table is clean before another ioreq server is mapped. And
since the sweeping of p2m table could be time consuming, it is done
with hypercall continuation.

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

changes in v1:
   - This patch is split out from patch 4 of the last version.
   - According to comments from Jan: update gfn_start when using
     hypercall continuation to reset the p2m type.
   - According to comments from Jan: use min() to compare gfn_end
     and the max mapped pfn in p2m_finish_type_change().
---
  xen/arch/x86/hvm/dm.c          | 43 +++++++++++++++++++++++++++++++++++++-----
  xen/arch/x86/mm/p2m.c          | 29 ++++++++++++++++++++++++++++
  xen/include/asm-x86/p2m.h      |  5 +++++
  xen/include/public/hvm/dm_op.h |  3 +--
  4 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index f97478b..a92d5d7 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -288,6 +288,7 @@ static int inject_event(struct domain *d,
      return 0;
  }
+#define DMOP_op_mask 0xff
  static int dm_op(domid_t domid,
                   unsigned int nr_bufs,
                   xen_dm_op_buf_t bufs[])
@@ -315,10 +316,8 @@ static int dm_op(domid_t domid,
      }
     rc = -EINVAL;
-    if ( op.pad )
-        goto out;
-    switch ( op.op )
+    switch ( op.op & DMOP_op_mask )
Nack to changes like this.  HVMop continuations only existed in this
form because we had to retrofit it to longterm-stable ABIs in the past,
and there were literally no free bits anywhere else.

Thank you, Andrew. Frankly, I'm not surprised to see disagreement from you and Jan
on this interface. :)

Using the HVMop-like continuation was a hard choice for me. I saw you removed these
from the DMOps, and I agree that the new interface is much cleaner.

I noticed there are two other DMops that need to be continuable: set_mem_type and modified_mem. Both of their structures have fields like first_gfn and nr, which can be updated and carried across the hypercall continuation.

For map_mem_type_to_ioreq_server, however, we do not need a gfn exposed in this interface (and I do not think exposing a gfn would be correct): the continuation is only needed when an ioreq server detaches from p2m_ioreq_server and the p2m sweeping is not yet finished. So I changed the definition of xen_dm_op, removed the pad field and extended the size of op to 64 bits (so that we have free bits for the continuation). But I do not think this is an optimal solution either.

Currently, the DMOP ABI isn't set in stone, so you have until code
freeze in 4.9 to make changes.  (i.e. soon now.)  We should also
consider which other DMops might potentially need to become continuable,
and take preventative action before the freeze.

I can only see set_mem_type and modified_mem needing to become continuable, and they do
this quite elegantly now.


If we need to make similar changes once the ABI truly is frozen, there
are plenty of free bits at the end of the union which can be used
without breaking the ABI.

Another proposal is to change the definition of xen_dm_op_map_mem_type_to_ioreq_server: extend the flags field from uint32_t to uint64_t and use the upper bits to store the gfn (a rough sketch follows below).
Do you think this proposal is acceptable?
Besides using some free bits to store the gfn, do we have any other approaches? Any suggestions
would be welcome! :)
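For concreteness, roughly what I have in mind is below. This is only a sketch of the layout; apart from flags, the field names are from memory and purely illustrative, and the padding/alignment of the widened field is not worked out here:

struct xen_dm_op_map_mem_type_to_ioreq_server {
    ioservid_t id;       /* IN - ioreq server id */
    uint16_t type;       /* IN - memory type */
    uint64_t flags;      /* IN - low 32 bits: access types to forward to
                          * the ioreq server;
                          * upper 32 bits: gfn to continue the p2m sweep
                          * from, must be zero on first entry.
                          */
#define XEN_DMOP_IOREQ_MEM_ACCESS_READ  (1u << 0)
#define XEN_DMOP_IOREQ_MEM_ACCESS_WRITE (1u << 1)
};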

Thanks
Yu

~Andrew.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

