[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v4 1/8] public / x86: Introduce __HYPERCALL_dm_op...



On Tue, Jan 17, 2017 at 05:29:49PM +0000, Paul Durrant wrote:
> ...as a set of hypercalls to be used by a device model.
> 
> As stated in the new docs/designs/dm_op.markdown:
> 
> "The aim of DMOP is to prevent a compromised device model from
> compromising domains other then the one it is associated with. (And is
> therefore likely already compromised)."
> 
> See that file for further information.
> 
> This patch simply adds the boilerplate for the hypercall.
> 
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Suggested-by: Ian Jackson <ian.jackson@xxxxxxxxxx>
> Suggested-by: Jennifer Herbert <jennifer.herbert@xxxxxxxxxx>
> ---
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxx>
> Cc: Jennifer Herbert <jennifer.herbert@xxxxxxxxxx>
> Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> 
> v4:
> - Change XEN_GUEST_HANDLE_64 to XEN_GUEST_HANDLE in struct xen_dm_op_buf
>   and add the necessary compat code. Drop Jan's R-b since the patch has
>   been fundamentally modified.
> 
> v3:
> - Re-written large portions of dmop.markdown to remove references to
>   previous proposals and make it a standalone design doc.
> 
> v2:
> - Addressed several comments from Jan.
> - Removed modification of __XEN_LATEST_INTERFACE_VERSION__ as it is not
>   needed in this patch.
> ---
>  docs/designs/dmop.markdown        | 165 
> ++++++++++++++++++++++++++++++++++++++
>  tools/flask/policy/modules/xen.if |   2 +-
>  tools/libxc/include/xenctrl.h     |   1 +
>  tools/libxc/xc_private.c          |  70 ++++++++++++++++
>  tools/libxc/xc_private.h          |   2 +
>  xen/arch/x86/hvm/Makefile         |   1 +
>  xen/arch/x86/hvm/dm.c             | 149 ++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c            |   1 +
>  xen/arch/x86/hypercall.c          |   2 +
>  xen/include/Makefile              |   1 +
>  xen/include/public/hvm/dm_op.h    |  71 ++++++++++++++++
>  xen/include/public/xen.h          |   1 +
>  xen/include/xen/hypercall.h       |  15 ++++
>  xen/include/xlat.lst              |   1 +
>  xen/include/xsm/dummy.h           |   6 ++
>  xen/include/xsm/xsm.h             |   6 ++
>  xen/xsm/flask/hooks.c             |   7 ++
>  17 files changed, 500 insertions(+), 1 deletion(-)
>  create mode 100644 docs/designs/dmop.markdown
>  create mode 100644 xen/arch/x86/hvm/dm.c
>  create mode 100644 xen/include/public/hvm/dm_op.h
> 
> diff --git a/docs/designs/dmop.markdown b/docs/designs/dmop.markdown
> new file mode 100644
> index 0000000..9f2f0d4
> --- /dev/null
> +++ b/docs/designs/dmop.markdown
> @@ -0,0 +1,165 @@
> +DMOP
> +====
> +
> +Introduction
> +------------
> +
> +The aim of DMOP is to prevent a compromised device model from compromising
> +domains other then the one it is associated with. (And is therefore likely
> +already compromised).
> +
> +The problem occurs when you a device model issues an hypercall that
> +includes references to user memory other than the operation structure
> +itself, such as with Track dirty VRAM (as used in VGA emulation).
> +Is this case, the address of this other user memory needs to be vetted,
> +to ensure it is not within restricted address ranges, such as kernel
> +memory. The real problem comes down to how you would vet this address -
> +the idea place to do this is within the privcmd driver, without privcmd
> +having to have specific knowledge of the hypercall's semantics.
> +
> +The Design
> +----------
> +
> +The privcmd driver implements a new restriction ioctl, which takes a domid
> +parameter.  After that restriction ioctl is issued, the privcmd driver will
> +permit only DMOP hypercalls, and only with the specified target domid.
> +
> +A DMOP hypercall consists of an array of buffers and lengths, with the
> +first one containing the specific DMOP parameters. These can then reference
> +further buffers from within in the array. Since the only user buffers
> +passed are that found with that array, they can all can be audited by
> +privcmd.
> +
> +The following code illustrates this idea:
> +
> +struct xen_dm_op {
> +    uint32_t op;
> +};
> +
> +struct xen_dm_op_buf {
> +    XEN_GUEST_HANDLE(void) h;
> +    unsigned long size;
> +};
> +typedef struct xen_dm_op_buf xen_dm_op_buf_t;
> +
> +enum neg_errnoval
> +HYPERVISOR_dm_op(domid_t domid,
> +                 xen_dm_op_buf_t bufs[],
> +                 unsigned int nr_bufs)
> +
> +@domid is the domain the hypercall operates on.
> +@bufs points to an array of buffers where @bufs[0] contains a struct
> +dm_op, describing the specific device model operation and its parameters.
> +@bufs[1..] may be referenced in the parameters for the purposes of
> +passing extra information to or from the domain.
> +@nr_bufs is the number of buffers in the @bufs array.
> +
> +It is forbidden for the above struct (xen_dm_op) to contain any guest
> +handles. If they are needed, they should instead be in
> +HYPERVISOR_dm_op->bufs.
> +
> +Validation by privcmd driver
> +----------------------------
> +
> +If the privcmd driver has been restricted to specific domain (using a
> + new ioctl), when it received an op, it will:
> +
> +1. Check hypercall is DMOP.
> +
> +2. Check domid == restricted domid.
> +
> +3. For each @nr_bufs in @bufs: Check @h and @size give a buffer
> +   wholly in the user space part of the virtual address space. (e.g.
> +   Linux will use access_ok()).
> +
> +
> +Xen Implementation
> +------------------
> +
> +Since a DMOP buffers need to be copied from or to the guest, functions for
> +doing this would be written as below.  Note that care is taken to prevent
> +damage from buffer under- or over-run situations.  If the DMOP is called
> +with incorrectly sized buffers, zeros will be read, while extra is ignored.
> +
> +static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
> +                                unsigned int nr_bufs, void *dst,
> +                                unsigned int idx, size_t dst_size)
> +{
> +    size_t size = min_t(size_t, dst_size, bufs[idx].size);
> +
> +    return !copy_from_guest(dst, bufs[idx].h, size);
> +}
> +
> +static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
> +                              unsigned int nr_bufs, unsigned int idx,
> +                              void *src, size_t src_size)
> +{
> +    size_t size = min_t(size_t, bufs[idx].size, src_size);
> +
> +    return !copy_to_guest(bufs[idx].h, src, size);
> +}
> +
> +This leaves do_dm_op easy to implement as below:
> +
> +static int dm_op(domid_t domid,
> +                 unsigned int nr_bufs,
> +                 xen_dm_op_buf_t bufs[])
> +{
> +    struct domain *d;
> +    struct xen_dm_op op;
> +    long rc;
> +
> +    rc = rcu_lock_remote_domain_by_id(domid, &d);
> +    if ( rc )
> +        return rc;
> +
> +    if ( !has_hvm_container_domain(d) )
> +        goto out;
> +
> +    rc = xsm_dm_op(XSM_DM_PRIV, d);
> +    if ( rc )
> +        goto out;
> +
> +    if ( !copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op)) )
> +    {
> +        rc = -EFAULT;
> +        goto out;
> +    }
> +
> +    switch ( op.op )
> +    {
> +    default:
> +        rc = -EOPNOTSUPP;
> +        break;
> +    }
> +
> +    if ( !rc &&
> +         !copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op)) )
> +        rc = -EFAULT;
> +
> + out:
> +    rcu_unlock_domain(d);
> +
> +    return rc;
> +}
> +
> +long do_dm_op(domid_t domid,
> +              unsigned int nr_bufs,
> +              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
> +{
> +    struct xen_dm_op_buf *nat;
> +    int rc;
> +
> +    nat = xzalloc_array(struct xen_dm_op_buf, nr_bufs);
> +    if ( !nat )
> +        return -ENOMEM;
> +
> +    if ( !copy_from_guest_offset(nat, bufs, 0, nr_bufs) )
> +        return -EFAULT;
> +
> +    rc = dm_op(domid, nr_bufs, nat);
> +
> +    xfree(nat);
> +
> +    return rc;
> +}
> diff --git a/tools/flask/policy/modules/xen.if 
> b/tools/flask/policy/modules/xen.if
> index 1aca75d..f9254c2 100644
> --- a/tools/flask/policy/modules/xen.if
> +++ b/tools/flask/policy/modules/xen.if
> @@ -151,7 +151,7 @@ define(`device_model', `
>  
>       allow $1 $2_target:domain { getdomaininfo shutdown };
>       allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack 
> };
> -     allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl 
> irqlevel pciroute pcilevel cacheattr send_irq };
> +     allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl 
> irqlevel pciroute pcilevel cacheattr send_irq dm };
>  ')
>  
>  # make_device_model(priv, dm_dom, hvm_dom)
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 4ab0f57..2ba46d7 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -41,6 +41,7 @@
>  #include <xen/sched.h>
>  #include <xen/memory.h>
>  #include <xen/grant_table.h>
> +#include <xen/hvm/dm_op.h>
>  #include <xen/hvm/params.h>
>  #include <xen/xsm/flask_op.h>
>  #include <xen/tmem.h>
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index d57c39a..8e49635 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
>      return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
>  }
>  
> +int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
> +{
> +    int ret = -1;
> +    struct  {
> +        void *u;
> +        void *h;
> +    } *bounce;
> +    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
> +    va_list args;
> +    unsigned int idx;
> +
> +    bounce = calloc(nr_bufs, sizeof(*bounce));
> +    if ( bounce == NULL )
> +        goto fail1;
> +
> +    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
> +    if ( bufs == NULL )
> +        goto fail2;
> +
> +    va_start(args, nr_bufs);
> +    for (idx = 0; idx < nr_bufs; idx++)

Coding style.

> +
> +int compat_dm_op(domid_t domid,
> +                 unsigned int nr_bufs,
> +                 COMPAT_HANDLE_PARAM(compat_dm_op_buf_t) bufs)
> +{
> +    struct xen_dm_op_buf *nat;
> +    unsigned int i;
> +    int rc = -EFAULT;
> +
> +    nat = xzalloc_array(struct xen_dm_op_buf, nr_bufs);
> +    if ( !nat )
> +        return -ENOMEM;
> +
> +    for ( i = 0; i < nr_bufs; i++ )
> +    {
> +        struct compat_dm_op_buf cmp;
> +
> +        if ( copy_from_compat_offset(&cmp, bufs, i, 1) )
> +            goto out;
> +
> +#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
> +        guest_from_compat_handle((_d_)->h, (_s_)->h)
> +
> +        XLAT_dm_op_buf(&nat[i], &cmp);
> +
> +#undef XLAT_dm_op_buf_HNDL_h
> +    }
> +
> +    rc = dm_op(domid, nr_bufs, nat);
> +

Need to copy back to the original places with copy_to_compat?

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.