
Re: [Xen-devel] [PATCH for-4.11] libs/gnttab: fix FreeBSD gntdev interface



On Thu, Apr 19, 2018 at 09:10:56AM +0100, Wei Liu wrote:
> On Tue, Apr 17, 2018 at 02:03:41PM +0100, Roger Pau Monne wrote:
> > Current interface to the gntdev in FreeBSD is wrong, and mostly worked
> > out of luck before the PTI FreeBSD fixes, when kernel and user-space
> > where sharing the same page tables.
> 
> where -> were?
> 
> > 
> > On FreeBSD ioctls have the size of the passed struct encoded in the ioctl
> > number, because the generic ioctl handler in the OS takes care of
> > copying the data from user-space to kernel space, and then calls the
> > device specific ioctl handler. Thus using ioctl structs with variable
> > sizes is not possible.
> > 
> > The fix is to turn the array of structs at the end of
> > ioctl_gntdev_alloc_gref and ioctl_gntdev_map_grant_ref into pointers,
> > that can be properly accessed from the kernel gntdev driver using the
> > copyin/copyout functions. Note that this is exactly how it's done for
> > the privcmd driver.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Not sure I follow. Doesn't turning the array into a pointer still
> result in a variable length array?

But it won't be a flexible array member, which is what causes the
issue; it will be an independent pointer. Doing something like
(FreeBSD's copyin(uaddr, kaddr, len) copies from user space into
the kernel):

copyin(user_ptr, &kernel_struct, sizeof(kernel_struct));
copyin(kernel_struct.refs, krefs_array, kernel_struct.count * sizeof(*krefs_array));

will work properly.

The problem with the current layout is that the first copyin is
performed automatically by the generic ioctl handler, so the size it
uses is wrong: it copies sizeof the struct as defined in the header,
which declares only one element in the trailing array, and any further
entries appended by user space are lost.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
