
[Xen-devel] [PATCH 0 of 7] Mem event ring interface setup update, V2



Changes from the previous posting:
- Added Acked-by: Tim Deegan for the hypervisor side
- Added Acked-by: Olaf Hering, approving the ABI/API change
- Return a positive errno value when sanity-checking the port pointer within libxc
- No longer clear errno before calling the ring setup ioctl

Original description follows
------------------------------------------------------------------------------

Update the interface for setting up mem event rings (for sharing, mem access or
paging).

Remove the "shared page", which wasted an entire page on a single event channel
port value.
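
For scale, all that the shared page carried was that one port value, which can
instead be handed back through the domctl itself. A minimal sketch of such a
payload follows; the struct and field names are illustrative assumptions, not
the exact public ABI added by the series.

    #include <stdint.h>

    /* Sketch only: the event channel port becomes an output of the enable
     * domctl rather than something read from a separately mapped page. */
    struct mem_event_ring_op {
        uint32_t op;    /* enable/disable a ring                     */
        uint32_t mode;  /* which ring: paging, mem access or sharing */
        uint32_t port;  /* OUT: event channel port bound to the ring */
    };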

More importantly, both the shared page and the ring page were dom0 user-space
process pages mapped by the hypervisor. If the dom0 process does not clean up,
the hypervisor keeps posting events to (and holding a mapping of) a page that
now belongs to another process.

Solutions proposed:
- Pass the event channel port explicitly as part of the domctl payload.
- Reserve a pfn in the guest physmap for each mem event ring.  Set/retrieve
 these pfns via HVM params. Ensure they are set during build and restore, and
 retrieved during save. Ensure these pages don't leak and don't leave domains
 as zombies. (A consumer-side sketch of the resulting setup follows this list.)
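
As a rough sketch of the consumer side under this scheme: the ring page is
mapped out of the guest physmap and the event channel port is returned by the
enable call. Names such as HVM_PARAM_ACCESS_RING_PFN and the exact
xc_mem_access_enable() signature are assumptions for illustration and may not
match the patches exactly.

    /* Sketch: mem_access consumer ring setup under the new interface. */
    #include <sys/mman.h>
    #include <xenctrl.h>
    #include <xen/hvm/params.h>
    #include <xen/mem_event.h>
    #include <xen/io/ring.h>

    static int setup_access_ring(xc_interface *xch, domid_t domid,
                                 mem_event_back_ring_t *back_ring,
                                 void **ring_page, uint32_t *port)
    {
        unsigned long ring_pfn;

        /* The ring lives at a pfn reserved in the guest physmap
         * (param name assumed). */
        if ( xc_get_hvm_param(xch, domid, HVM_PARAM_ACCESS_RING_PFN, &ring_pfn) )
            return -1;

        /* Map the guest-owned ring page instead of donating a dom0 page. */
        *ring_page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                          PROT_READ | PROT_WRITE, ring_pfn);
        if ( *ring_page == NULL )
            return -1;

        /* Enable the ring; the port now comes back via the domctl
         * (signature assumed). */
        if ( xc_mem_access_enable(xch, domid, port) )
            return -1;

        /* Standard Xen ring initialisation for the consumer (back) end. */
        SHARED_RING_INIT((mem_event_sring_t *)*ring_page);
        BACK_RING_INIT(back_ring, (mem_event_sring_t *)*ring_page, XC_PAGE_SIZE);

        return 0;
    }

The returned port can then be bound with xc_evtchn_bind_interdomain() as before.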

In all cases the in-tree mem event consumers (xenpaging and xen-access) have
been updated.

Updating the interface to deal with these problems requires
backwards-incompatible changes to both the helper<->libxc and
libxc<->hypervisor interfaces.

Take advantage of the interface update to also plumb in setup of the sharing
ring, which was previously missing.
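
A hedged sketch of what enabling that ring could look like from libxc, assuming
a wrapper along the lines of xc_memshr_ring_enable() (name and signature are
assumptions for this example):

    #include <err.h>
    #include <xenctrl.h>

    /* Sketch: enable the (previously missing) sharing ring and return the
     * event channel port reported back by the hypervisor. */
    static uint32_t enable_sharing_ring(xc_interface *xch, domid_t domid)
    {
        uint32_t port;

        if ( xc_memshr_ring_enable(xch, domid, &port) )
            err(1, "failed to enable the sharing ring");

        /* The port can then be bound with xc_evtchn_bind_interdomain() and
         * the ring page mapped from its reserved pfn, just as for the paging
         * and mem access rings. */
        return port;
    }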

All patches touch x86/mm hypervisor bits. Patches 1, 3 and 5 are tools patches
as well.

Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
Acked-by: Tim Deegan <tim@xxxxxxx>
Acked-by: Olaf Hering <olaf@xxxxxxxxx>

 tools/libxc/xc_mem_access.c         |  10 +++-
 tools/libxc/xc_mem_event.c          |  12 +++--
 tools/libxc/xc_mem_paging.c         |  10 +++-
 tools/libxc/xenctrl.h               |   6 +-
 tools/tests/xen-access/xen-access.c |  22 +--------
 tools/xenpaging/xenpaging.c         |  18 +------
 tools/xenpaging/xenpaging.h         |   2 +-
 xen/arch/x86/mm/mem_event.c         |  33 +-------------
 xen/include/public/domctl.h         |   4 +-
 xen/include/public/mem_event.h      |   4 -
 xen/include/xen/sched.h             |   2 -
 xen/arch/x86/hvm/hvm.c              |  48 ++++++++++++++++-----
 xen/include/asm-x86/hvm/hvm.h       |   7 +++
 tools/libxc/xc_domain_restore.c     |  42 ++++++++++++++++++
 tools/libxc/xc_domain_save.c        |  36 ++++++++++++++++
 tools/libxc/xc_hvm_build.c          |  21 ++++++--
 tools/libxc/xc_mem_access.c         |   6 +-
 tools/libxc/xc_mem_event.c          |   3 +-
 tools/libxc/xc_mem_paging.c         |   6 +-
 tools/libxc/xenctrl.h               |   8 +--
 tools/libxc/xg_save_restore.h       |   4 +
 tools/tests/xen-access/xen-access.c |  83 +++++++++++++++++-------------------
 tools/xenpaging/xenpaging.c         |  52 ++++++++++++++++------
 xen/arch/x86/mm/mem_event.c         |  50 ++++++++++------------
 xen/include/public/domctl.h         |   1 -
 xen/include/public/hvm/params.h     |   7 ++-
 xen/include/xen/sched.h             |   1 +
 xen/arch/x86/mm/mem_event.c         |  41 ++++++++++++++++++
 xen/include/public/domctl.h         |  20 ++++++++-
 xen/include/xen/sched.h             |   3 +
 tools/libxc/xc_memshr.c             |  25 +++++++++++
 tools/libxc/xenctrl.h               |   5 ++
 xen/arch/x86/mm/mem_event.c         |  11 ++++
 xen/common/domain.c                 |   3 +
 xen/include/asm-arm/mm.h            |   3 +-
 xen/include/asm-ia64/mm.h           |   3 +-
 xen/include/asm-x86/mm.h            |   2 +
 xen/arch/x86/mm/mem_event.c         |   6 +-
 38 files changed, 412 insertions(+), 208 deletions(-)



 

