
Re: [Xen-devel] [PATCH 13/25] argo: implement the register op



On Wed, Dec 12, 2018 at 1:48 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> > +static int
> > +argo_find_ring_mfns(struct domain *d, struct argo_ring_info *ring_info,
> > +                    uint32_t npage, XEN_GUEST_HANDLE_PARAM(argo_pfn_t) pfn_hnd,
> > +                    uint32_t len)
> > +{
> > +    int i;
> > +    int ret = 0;
> > +
> > +    if ( (npage << PAGE_SHIFT) < len )
> > +        return -EINVAL;
> > +
> > +    if ( ring_info->mfns )
> > +    {
> > +        /*
> > +         * Ring already existed. Check if it's the same ring,
> > +         * i.e. same number of pages and all translated gpfns still
> > +         * translating to the same mfns
> > +         */
>
> This comment makes me wonder whether the translations are
> permitted to change at other times. If so I'm not sure what
> value verification here has. If not, this probably would want to
> be debugging-only code.

My understanding is that the gfn->mfn translation is not necessarily stable
across entry into and exit from host power state S4 (suspend to disk).

I've added extra explanation in the next version of the patch series, in
the commit message for the suspend and resume functions that trigger
guest use of this logic.
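
To illustrate the kind of check involved, here is a minimal sketch of
re-validating recorded translations against the current p2m. This is my
own illustration, not code from the series: the helper name and the
gfns[] field are hypothetical, and the series performs the equivalent
work inside argo_find_ring_mfns.

    /*
     * Sketch only: re-check that each mfn recorded at registration still
     * matches the current gfn->mfn translation. A mismatch (e.g. after
     * resume from S4) means the ring must be re-registered.
     */
    static bool
    ring_translations_unchanged(struct domain *d,
                                const struct argo_ring_info *ring_info)
    {
        unsigned int i;

        for ( i = 0; i < ring_info->nmfns; i++ )
        {
            p2m_type_t t;
            /* gfns[] is a hypothetical record of the registered gpfns */
            mfn_t mfn = get_gfn(d, ring_info->gfns[i], &t);

            put_gfn(d, ring_info->gfns[i]);
            if ( !mfn_eq(mfn, ring_info->mfns[i]) )
                return false;
        }
        return true;
    }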

> > +static struct argo_ring_info *
> > +argo_ring_find_info(const struct domain *d, const struct argo_ring_id *id)
> > +{
> > +    uint16_t hash;
> > +    struct hlist_node *node;
>
> const?

I couldn't determine exactly what you were pointing at with this one.
I've applied 'const' in a lot more places in the next version; please
let me know if I've missed where you intended it.

> > +    uint64_t dst_domain_cookie = 0;
> > +
> > +    if ( !(guest_handle_is_aligned(ring_hnd, ~PAGE_MASK)) )
> > +        return -EINVAL;
>
> Why? You don't store the handle for later use (and you shouldn't).
> If there really is a need for a full page's worth of memory, it
> would better be passed in as GFN.

I've added this comment explaining the behaviour in v2:

+    /*
+     * Verify the alignment of the ring data structure supplied with the
+     * understanding that the ring handle supplied points to the same memory as
+     * the first entry in the array of pages provided via pg_descr_hnd, where
+     * the head of the ring will reside.
+     * See argo_update_tx_ptr where the location of the tx_ptr is accessed at a
+     * fixed offset from head of the first page in the mfn array.
+     */
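
For context, the access pattern that comment refers to looks roughly like
this. It is a simplified sketch, not the series' code: the real
implementation caches the page mapping rather than mapping on every call,
and the exact field and type names may differ.

    /*
     * The tx_ptr field lives at a fixed offset inside the first ring page,
     * so the ring structure must start at the top of that page for the
     * hypervisor's view (reached via mfns[0]) to line up with the guest's.
     */
    static void
    update_tx_ptr_sketch(struct argo_ring_info *ring_info, uint32_t tx_ptr)
    {
        void *dst = map_domain_page(ring_info->mfns[0]); /* first ring page */
        uint32_t *p = dst + offsetof(argo_ring_t, tx_ptr);

        write_atomic(p, tx_ptr);
        smp_wmb();
        unmap_domain_page(dst);
    }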

> > @@ -253,6 +723,34 @@ do_argo_message_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg1,
> >
> >      switch (cmd)
> >      {
> > +    case ARGO_MESSAGE_OP_register_ring:
> > +    {
> > +        XEN_GUEST_HANDLE_PARAM(argo_ring_t) ring_hnd =
> > +            guest_handle_cast(arg1, argo_ring_t);
> > +        XEN_GUEST_HANDLE_PARAM(argo_pfn_t) pfn_hnd =
> > +            guest_handle_cast(arg2, argo_pfn_t);
> > +        uint32_t npage = arg3;
> > +        bool fail_exist = arg4 & ARGO_REGISTER_FLAG_FAIL_EXIST;
> > +
> > +        if ( unlikely(!guest_handle_okay(ring_hnd, 1)) )
> > +            break;
>
> I don't understand the need for this and ...
>
> > +        if ( unlikely(npage > (ARGO_MAX_RING_SIZE >> PAGE_SHIFT)) )
> > +        {
> > +            rc = -EINVAL;
> > +            break;
> > +        }
> > +        if ( unlikely(!guest_handle_okay(pfn_hnd, npage)) )
> > +            break;
>
> ... perhaps also this, when you use copy_from_guest() upon access.

This is the one piece of feedback on version 1 of this series that I haven't
taken the time to address yet. The code is evidently safe, with a possible
performance decrease being the only concern, so I'd rather study it further
before removing any of the checks than delay posting version two of this
series.
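
For reference, my reading of the redundancy being pointed out, using the
standard guest-access primitives (error handling abbreviated):

    /*
     * guest_handle_okay() only range-checks the guest buffer up front;
     * copy_from_guest() performs equivalent validation on each access and
     * returns the number of bytes it could not copy, so the up-front check
     * adds no safety; at most it changes which error path is taken.
     */
    argo_ring_t ring;

    if ( copy_from_guest(&ring, ring_hnd, 1) )
        return -EFAULT;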

All the other points: ack, and should be ok in v2.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

