
Re: [Xen-devel] [PATCH 2/3] Introduce the Xen 9pfs transport header



On Fri, 17 Mar 2017, Konrad Rzeszutek Wilk wrote:
> On Tue, Mar 14, 2017 at 03:18:35PM -0700, Stefano Stabellini wrote:
> > Define the ring according to the protocol specification, using the new
> > DEFINE_XEN_FLEX_RING_AND_INTF macro.
> > 
> 
> There is a bit of 9pfs code being posted. Is this patch still
> up-to-date with that? I am going to assume yes, in which case
> see below

Yes, it is. The only change so far is that QEMU requested the
introduction of a common struct to replace xen_9pfs_header, since the
same layout is also used by virtio. That makes sense. And I think it
makes sense for the Xen header to keep the definition of struct
xen_9pfs_header (unless __attribute__((packed)) creates too much
trouble).

 
> > Signed-off-by: Stefano Stabellini <stefano@xxxxxxxxxxx>
> > CC: konrad.wilk@xxxxxxxxxx
> > ---
> >  xen/include/public/io/9pfs.h | 42 ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 42 insertions(+)
> >  create mode 100644 xen/include/public/io/9pfs.h
> > 
> > diff --git a/xen/include/public/io/9pfs.h b/xen/include/public/io/9pfs.h
> > new file mode 100644
> > index 0000000..b38ee66
> > --- /dev/null
> > +++ b/xen/include/public/io/9pfs.h
> > @@ -0,0 +1,42 @@
> > +/*
> > + * 9pfs.h -- Xen 9PFS transport
> > + *
> > + * Refer to docs/misc/9pfs.markdown for the specification
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this software and associated documentation files (the "Software"), to
> > + * deal in the Software without restriction, including without limitation the
> > + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> > + * sell copies of the Software, and to permit persons to whom the Software is
> > + * furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> > + * DEALINGS IN THE SOFTWARE.
> > + *
> > + * Copyright (C) 2017 Stefano Stabellini <stefano@xxxxxxxxxxx>
> > + */
> > +
> > +#ifndef __XEN_PUBLIC_IO_9PFS_H__
> > +#define __XEN_PUBLIC_IO_9PFS_H__
> > +
> > +#include "ring.h"
> > +
> > +struct xen_9pfs_header {
> > +   uint32_t size;
> > +   uint8_t id;
> > +   uint16_t tag;
> > +} __attribute__((packed));
> 
> I think the Xen headers are not supposed to have __packed__ on them.
> Perhaps you can make this work by using #pragma?

Something like:

#pragma pack(push)
#pragma pack(1)
  struct xen_9pfs_header {
        uint32_t size;
        uint8_t id;
        uint16_t tag;
  };
#pragma pack(pop)

?
Ugly, but I can do that.


> > +
> > +#define XEN_9PFS_RING_ORDER 6
> 
> 
> Perhaps a bit too much detail? The spec mentions the max and min, but
> this value of 6 is rather arbitrary?

Yes, it is arbitrary (although it was chosen based on benchmarks and
common sense). I introduced it for simplicity, so that we don't have to
handle rings of multiple sizes in the backend. I'll remove it and make
the order dynamic; it's actually not difficult.


> > +#define XEN_9PFS_RING_SIZE  XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
> > +DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
> > +
> > +#endif
> 
> Please add the editor configuration block at the bottom of the patch.

OK

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel