
Re: [Xen-devel] [PATCH v4 1/8] public / x86: Introduce __HYPERCALL_dm_op...



>>> On 17.01.17 at 18:29, <paul.durrant@xxxxxxxxxx> wrote:
> +static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
> +                                unsigned int nr_bufs, void *dst,
> +                                unsigned int idx, size_t dst_size)
> +{
> +    size_t size = min_t(size_t, dst_size, bufs[idx].size);
> +
> +    return !copy_from_guest(dst, bufs[idx].h, size);
> +}
> +
> +static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
> +                              unsigned int nr_bufs, unsigned int idx,
> +                              void *src, size_t src_size)
> +{
> +    size_t size = min_t(size_t, bufs[idx].size, src_size);
> +
> +    return !copy_to_guest(bufs[idx].h, src, size);
> +}

Wouldn't it be better to require an exact input size here? A guest
providing a different amount likely indicates some version mismatch,
build issue, or the like.
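
I.e. something along these lines (untested sketch only, keeping the
signature from the patch and the same copy_from_guest() call):

    static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
                                    unsigned int nr_bufs, void *dst,
                                    unsigned int idx, size_t dst_size)
    {
        /* Reject anything but an exact match of the expected size. */
        if ( dst_size != bufs[idx].size )
            return false;

        return !copy_from_guest(dst, bufs[idx].h, dst_size);
    }

(and the mirror image for copy_buf_to_guest()).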

> +#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
> +#define __XEN_PUBLIC_HVM_DM_OP_H__
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#include "../xen.h"
> +
> +#define XEN_DMOP_invalid 0

Do we actually need this, btw?

> +struct xen_dm_op {
> +    uint32_t op;
> +};
> +
> +struct xen_dm_op_buf {
> +    XEN_GUEST_HANDLE(void) h;
> +    unsigned long size;

xen_ulong_t?
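
I.e. presumably (just a sketch of what I'd expect):

    struct xen_dm_op_buf {
        XEN_GUEST_HANDLE(void) h;
        /* xen_ulong_t rather than a bare unsigned long, as elsewhere
         * in the public interface. */
        xen_ulong_t size;
    };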

> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -129,3 +129,4 @@
>  ?    flask_setenforce                xsm/flask_op.h
>  !    flask_sid_context               xsm/flask_op.h
>  ?    flask_transition                xsm/flask_op.h
> +!    dm_op_buf                       hvm/dm_op.h

Please don't break the (mostly) sorted sequence here (sorting is
done by header name first, for - I hope - obvious reasons).
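
I.e. something like this (placement only illustrative, with the lines
in between elided):

    !    dm_op_buf                       hvm/dm_op.h
    ...
    ?    flask_setenforce                xsm/flask_op.h
    !    flask_sid_context               xsm/flask_op.h
    ?    flask_transition                xsm/flask_op.h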

Jan

