
[Xen-devel] [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type.



XenGT leverages the ioreq server to track and forward accesses to GPU
I/O resources, e.g. the PPGTT (per-process graphics translation tables).
Currently, an ioreq server uses rangesets to track the BDF/PIO/MMIO
ranges to be emulated; to select an ioreq server, the rangesets are
searched to see whether the I/O range being accessed is recorded.
However, the number of RAM pages to be tracked may exceed the upper
limit of a rangeset.
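
As a rough illustration of that limitation (not the actual Xen code), the
sketch below mimics how an ioreq server is picked by searching a bounded
set of tracked ranges. MAX_NR_IO_RANGES mirrors the per-server cap in
Xen, but the types and helpers are simplified stand-ins; tracking each
guest RAM page (e.g. each PPGTT page) as a separate range quickly runs
into such a cap.

/* Illustrative sketch only -- not the actual Xen code.  It mimics how
 * an ioreq server could be selected by searching a bounded list of
 * tracked ranges; the types below are simplified stand-ins. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_NR_IO_RANGES 256        /* per-server cap on tracked ranges */

struct io_range { uint64_t start, end; };

struct ioreq_server {
    unsigned int id;
    unsigned int nr_ranges;
    struct io_range ranges[MAX_NR_IO_RANGES];
};

/* Return true if [addr, addr+size) falls inside one tracked range. */
static bool server_claims(const struct ioreq_server *s,
                          uint64_t addr, uint64_t size)
{
    for ( unsigned int i = 0; i < s->nr_ranges; i++ )
        if ( addr >= s->ranges[i].start &&
             addr + size - 1 <= s->ranges[i].end )
            return true;
    return false;
}

int main(void)
{
    struct ioreq_server s = { .id = 1, .nr_ranges = 1,
                              .ranges = { { 0xf0000000, 0xf0ffffff } } };

    printf("claims 0xf0001000: %d\n", server_claims(&s, 0xf0001000, 4));
    printf("claims 0x00001000: %d\n", server_claims(&s, 0x00001000, 4));
    return 0;
}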

Previously, a solution was proposed to refactor the rangeset and extend
its upper limit. However, after 12 rounds of discussion, we have decided
to drop that approach due to security concerns. This new patch series
instead introduces a new memory type, HVMMEM_ioreq_server, and adds HVM
operations that let an ioreq server claim ownership of RAM pages of this
type. Accesses to a page of this type will then be handled directly by
the specified ioreq server.
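
From a device model's point of view, usage would look roughly like the
sketch below. Only xc_hvm_set_mem_type() is a pre-existing libxc call;
the xc_hvm_map_mem_type_to_ioreq_server() wrapper, its argument order
and the write-access flag name are assumptions based on this cover
letter and may differ from the actual patches.

/* Hypothetical usage sketch -- wrapper name, argument order and flag
 * below are assumptions, not taken verbatim from the patches. */
#include <xenctrl.h>

int claim_ppgtt_pages(xc_interface *xch, domid_t dom, ioservid_t id,
                      xen_pfn_t start_pfn, unsigned long nr)
{
    int rc;

    /* Ask Xen to forward write accesses to p2m_ioreq_server pages to
     * this ioreq server (assumed wrapper for the new HVMOP). */
    rc = xc_hvm_map_mem_type_to_ioreq_server(xch, dom, id,
                                             HVMMEM_ioreq_server,
                                             HVMOP_IOREQ_MEM_ACCESS_WRITE);
    if ( rc < 0 )
        return rc;

    /* Mark the guest RAM pages holding the PPGTT as HVMMEM_ioreq_server,
     * so writes to them trap to the ioreq server instead of being
     * handled as plain RAM (xc_hvm_set_mem_type() already exists). */
    return xc_hvm_set_mem_type(xch, dom, HVMMEM_ioreq_server,
                               start_pfn, nr);
}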

Yu Zhang (3):
  x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to
    p2m_ioreq_server.
  x86/ioreq server: Add new functions to get/set memory types.
  x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to
    an ioreq server.

 xen/arch/x86/hvm/emulate.c       |  32 +++-
 xen/arch/x86/hvm/hvm.c           | 347 +++++++++++++++++++++++++--------------
 xen/arch/x86/hvm/ioreq.c         |  41 +++++
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/p2m-ept.c        |   7 +-
 xen/arch/x86/mm/p2m-pt.c         |  23 ++-
 xen/arch/x86/mm/p2m.c            |  70 ++++++++
 xen/arch/x86/mm/shadow/multi.c   |   3 +-
 xen/include/asm-x86/hvm/ioreq.h  |   2 +
 xen/include/asm-x86/p2m.h        |  32 +++-
 xen/include/public/hvm/hvm_op.h  |  39 ++++-
 11 files changed, 454 insertions(+), 144 deletions(-)

-- 
1.9.1

