
Re: [PATCH for-4.14] mm: fix public declaration of struct xen_mem_acquire_resource



On Fri, Jun 26, 2020 at 04:19:36PM +0200, Jan Beulich wrote:
> On 26.06.2020 15:40, Jan Beulich wrote:
> > On 25.06.2020 18:10, Roger Pau Monné wrote:
> >> On Thu, Jun 25, 2020 at 11:05:52AM +0200, Roger Pau Monné wrote:
> >>> On Wed, Jun 24, 2020 at 04:01:44PM +0200, Jan Beulich wrote:
> >>>> On 24.06.2020 15:41, Julien Grall wrote:
> >>>>> On 24/06/2020 11:12, Jan Beulich wrote:
> >>>>>> On 23.06.2020 19:26, Roger Pau Monné wrote:
> >>>>>>> I'm confused. Couldn't we switch from uint64_aligned_t to plain
> >>>>>>> uint64_t (as it currently is in the Linux headers), and then use
> >>>>>>> the compat layer in Xen to handle the size difference when called
> >>>>>>> from 32-bit environments?
> >>>>>>
> >>>>>> And which size would we use there? The old or the new one (breaking
> >>>>>> future or existing callers respectively)? Meanwhile I think that if
> >>>>>> this indeed needs to not be tools-only (which I still question),
> >>>>>
> >>>>> I think we now agreed in a subthread that the kernel needs to know
> >>>>> the layout of the hypercall.
> >>>>>
> >>>>>> then our only possible route is to add explicit padding for the
> >>>>>> 32-bit case alongside the change you're already making.
> >>>>>
> >>>>> AFAICT Linux 32-bit doesn't have this padding. So wouldn't that make
> >>>>> the two incompatible?
> >>>>
> >>>> In principle yes. But they're putting the structure instance on the
> >>>> stack, so there's no risk from Xen reading 4 bytes too many. I'd
> >>>> prefer keeping the interface as is (i.e. with the previously
> >>>> implicit padding made explicit) to avoid risking breaking other
> >>>> possible callers. But that's just my view on it, anyway ...
> >>>
> >>> Adding the padding is cleaner because we don't need any compat stuff
> >>> in order to access the structure from the caller, and we also keep the
> >>> original layout currently present in the Xen headers.
> >>>
> >>> I can prepare a fix for the Linux kernel, if this approach is fine.
> >>
> >> So I went over this, and I'm not sure I see the point of adding the
> >> padding field at the end of the structure for 32-bit x86.
> >>
> >> The current situation is the following:
> >>
> >>  - Linux will use a struct on 32-bit x86 that doesn't have the 4-byte
> >>    padding at the end.
> >>  - Xen will copy 4 bytes of garbage in that case, since the struct on
> >>    Linux is allocated on the stack.
> >>
> >> So I suggest we take the approach found in this patch, that is, remove
> >> the 8-byte alignment from the frame field, which will in turn remove
> >> 4 bytes of padding from the tail of the structure on 32-bit x86.
> >>
> >> That would leave the following scenario:
> >>
> >>  - The struct layout in the Linux headers would be correct.
> >>  - Xen already handles the struct size difference between 32-bit and
> >>    64-bit x86, as the compat layer currently does the copy in
> >>    compat_memory_op taking into account the size of the compat
> >>    structure.
> > 
> > Hmm, I didn't even notice this until now - it looks to do so
> > indeed, but apparently because of a bug: The original
> > uint64_aligned_t gets translated to mere uint64_t in the
> > compat header, whereas it should have been retained. This
> > means that my concern of ...
> > 
> >>  - Removing the padding will work for all use cases: Linux will
> >>    already be using the correct layout on 32-bit x86, so no change
> >>    will be required there. Any consumers using the tail-padded
> >>    structure will continue to work without issues, as Xen simply won't
> >>    copy the trailing 4 bytes.
> > 
> > ... code using the new definition then potentially not working
> > correctly on 4.13, at least on versions not having this
> > backported, was not actually true.
> > 
> > I'll try to sort out this other bug then ...
> 
> I was wrong, there is no bug: translating uint64_aligned_t to
> uint64_t is fine, as these are seen only by 64-bit code, where
> both are identical anyway. Hence there is still the concern that
> code working fine on a fixed 4.14 might then not work on an
> unfixed 4.13, due to 4.13 copying 4 extra bytes.
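
For reference, here is a sketch of the public declaration under discussion,
with the field names and types taken from the pahole output below (it is not
a verbatim copy of the Xen header):

struct xen_mem_acquire_resource {
    domid_t domid;
    uint16_t type;
    uint32_t id;
    uint32_t nr_frames;
    uint32_t pad;
    /* Declared as uint64_aligned_t today: the forced 8-byte alignment
     * is what adds 4 bytes of tail padding on 32-bit x86 (size 32). */
    uint64_aligned_t frame;
    XEN_GUEST_HANDLE(xen_pfn_t) frame_list;
};

With the alignment dropped, as this patch proposes, frame becomes a plain
uint64_t and the 32-bit structure ends right after frame_list (size 28),
which is the layout Linux currently uses.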

So here are the structures on 64-bit x86 according to pahole against
xen-syms:

struct xen_mem_acquire_resource {
        domid_t                    domid;                /*     0     2 */
        uint16_t                   type;                 /*     2     2 */
        uint32_t                   id;                   /*     4     4 */
        uint32_t                   nr_frames;            /*     8     4 */
        uint32_t                   pad;                  /*    12     4 */
        uint64_t                   frame;                /*    16     8 */
        __guest_handle_xen_pfn_t   frame_list;           /*    24     8 */

        /* size: 32, cachelines: 1, members: 7 */
        /* last cacheline: 32 bytes */
};

struct compat_mem_acquire_resource {
        domid_compat_t             domid;                /*     0     2 */
        uint16_t                   type;                 /*     2     2 */
        uint32_t                   id;                   /*     4     4 */
        uint32_t                   nr_frames;            /*     8     4 */
        uint32_t                   pad;                  /*    12     4 */
        uint64_t                   frame;                /*    16     8 */
        __compat_handle_compat_pfn_t frame_list;         /*    24     4 */

        /* size: 28, cachelines: 1, members: 7 */
        /* last cacheline: 28 bytes */
};

There's no trailing padding on the compat struct ATM, and hence the
current code will behave correctly when used against a compat
structure without the trailing padding (as it's already ignored).

There's a #pragma pack(4) at the top of compat/memory.h which forces
this AFAICT. So I think the suggested approach is fine and will avoid
any breakage.
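
As a self-contained illustration (the uint64_aligned_t stand-in, the
domid_t stand-in and the 32-bit handle type below are assumptions rather
than the real header definitions), the following, built with gcc -m32,
prints 32 and 28, matching the pahole sizes above:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for Xen's uint64_aligned_t: a uint64_t forced to 8-byte
 * alignment. */
typedef uint64_t uint64_aligned_t __attribute__((aligned(8)));

/* Public header layout: the aligned frame field leaves 4 bytes of
 * tail padding on 32-bit x86, so sizeof() is 32. */
struct public_layout {
    uint16_t domid;            /* stand-in for domid_t (2 bytes)   */
    uint16_t type;
    uint32_t id;
    uint32_t nr_frames;
    uint32_t pad;
    uint64_aligned_t frame;
    uint32_t frame_list;       /* a 32-bit guest handle is 4 bytes */
};

/* Compat layout: frame is translated to plain uint64_t and the compat
 * header is compiled under #pragma pack(4), so there is no tail
 * padding and sizeof() is 28. */
#pragma pack(4)
struct compat_layout {
    uint16_t domid;
    uint16_t type;
    uint32_t id;
    uint32_t nr_frames;
    uint32_t pad;
    uint64_t frame;
    uint32_t frame_list;
};
#pragma pack()

int main(void)
{
    printf("%zu %zu\n", sizeof(struct public_layout),
           sizeof(struct compat_layout));
    return 0;
}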

Roger.