
Re: [Xen-devel] [PATCH] docs/design: introduce HVMMEM_ioreq_serverX types



On 25/02/16 15:49, Paul Durrant wrote:
> This patch adds a new 'designs' subdirectory under docs as a repository
> for this and future design proposals.
>
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
>
> For convenience this document can also be viewed in PDF at:
>
> http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
> ---
>  docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 63 insertions(+)
>  create mode 100755 docs/designs/hvmmem_ioreq_server.md

If you name it .markdown, the docs build system will be able to publish
it automatically.  Alternatively, teach the build system about .md.

On the other hand, .pandoc tends to end up making nicer PDFs.

>
> diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
> new file mode 100755
> index 0000000..47fa715
> --- /dev/null
> +++ b/docs/designs/hvmmem_ioreq_server.md
> @@ -0,0 +1,63 @@
> +HVMMEM\_ioreq\_serverX
> +----------------------
> +
> +Background
> +==========
> +
> +The concept of the IOREQ server was introduced to allow multiple distinct
> +device emulators to be attached to a single VM. The XenGT project uses an
> +IOREQ server to provide mediated pass-through of Intel GPUs to guests and,
> +as part of the mediation, needs to intercept accesses to GPU page-tables
> +(or GTTs) that reside in guest RAM.
> +
> +The current implementation of this sets the type of GTT pages to type
> +HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such pages,
> +and then maps the guest physical addresses of those pages to the XenGT

"then sends the guest physical" surely?

> +IOREQ server using the HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall.
> +However, because the number of GTTs is potentially large, using this
> +approach does not scale well.
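
To put a finer point on the scaling concern: with the current interface every
GTT page ends up being mapped as its own one-page range, i.e. roughly the
following (a sketch against libxenctrl; map_gtt_pages() and the gfn list are
made up for illustration):

    #include <xenctrl.h>

    /* Sketch only: map each GTT page to the ioreq server as a separate
     * one-page MMIO range via the existing hypercall. */
    static int map_gtt_pages(xc_interface *xch, domid_t domid, ioservid_t id,
                             const uint64_t *gtt_gfns, unsigned int nr_gtt)
    {
        unsigned int i;

        for ( i = 0; i < nr_gtt; i++ )
        {
            uint64_t start = gtt_gfns[i] << XC_PAGE_SHIFT;
            int rc = xc_hvm_map_io_range_to_ioreq_server(
                xch, domid, id, 1 /* MMIO */, start,
                start + XC_PAGE_SIZE - 1);

            if ( rc )
                return rc;
        }

        return 0;
    }

One hypercall, and one range tracked inside Xen, per GTT page clearly does
not scale once there are thousands of them, hence the proposal below.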
> +
> +Proposal
> +========
> +
> +Because the number of spare types available in the P2M type-space is
> +currently very limited, it is proposed that HVMMEM\_mmio\_write\_dm be
> +replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
> +P2M type-space is increased, this can be renamed to HVMMEM\_ioreq\_server0
> +and new HVMMEM\_ioreq\_server1, HVMMEM\_ioreq\_server2, etc. types
> +can be added.
> +
> +Accesses to a page of type HVMMEM\_ioreq\_serverX should behave the same as
> +accesses to HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server.
> +Furthermore, it should only be possible to set the type of a page to
> +HVMMEM\_ioreq\_serverX if that page is currently of type HVMMEM\_ram\_rw.
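
Presumably the emulator then flips individual GTT pages over via the existing
HVMOP_set_mem_type path, i.e. something like the below (sketch only;
HVMMEM_ioreq_server is of course the new type being proposed here, so this
won't build against current headers):

    #include <xenctrl.h>

    /* Sketch: convert a single guest page, which must currently be
     * HVMMEM_ram_rw, to the proposed HVMMEM_ioreq_server type. */
    static int set_gtt_page_type(xc_interface *xch, domid_t domid,
                                 uint64_t gfn)
    {
        return xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, gfn, 1);
    }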
> +
> +To allow an IOREQ server to claim or release a claim to a type, a new
> +pair of hypercalls will be introduced:
> +
> +- HVMOP\_map\_mem\_type\_to\_ioreq\_server
> +- HVMOP\_unmap\_mem\_type\_from\_ioreq\_server
> +
> +and an associated argument structure:
> +
> +             struct hvm_ioreq_mem_type {
> +                     domid_t domid;      /* IN - domain to be serviced */
> +                     ioservid_t id;      /* IN - server id */
> +                     hvmmem_type_t type; /* IN - memory type */
> +                     uint32_t flags;     /* IN - types of access to be
> +                                         intercepted */
> +
> +     #define _HVMOP_IOREQ_MEM_ACCESS_READ 0
> +     #define HVMOP_IOREQ_MEM_ACCESS_READ \
> +             (1 << _HVMOP_IOREQ_MEM_ACCESS_READ)

(1U << ...)

> +
> +     #define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
> +     #define HVMOP_IOREQ_MEM_ACCESS_WRITE \
> +             (1 << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
> +
> +             };
> +
> +
> +Once the type has been claimed, the requested types of access to any
> +page of the claimed type will be passed to the IOREQ server for handling.
> +Only HVMMEM\_ioreq\_serverX types may be claimed.
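
For completeness, I'd expect the eventual toolstack plumbing to end up looking
something like the below (entirely hypothetical wrapper name, just mirroring
the argument structure above):

    #include <xenctrl.h>

    /* Hypothetical libxc wrapper for the proposed
     * HVMOP_map_mem_type_to_ioreq_server: claim write accesses to all
     * pages of the given type on behalf of the given ioreq server. */
    static int claim_write_accesses(xc_interface *xch, domid_t domid,
                                    ioservid_t id)
    {
        return xc_hvm_map_mem_type_to_ioreq_server(
            xch, domid, id, HVMMEM_ioreq_server,
            HVMOP_IOREQ_MEM_ACCESS_WRITE);
    }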

LGTM.

~Andrew
