Re: [Xen-devel] [PATCH 16/25] argo: implement the notify op
On Thu, Dec 13, 2018 at 6:06 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>
> >>> On 01.12.18 at 02:32, <christopher.w.clark@xxxxxxxxx> wrote:
> > +static uint32_t
> > +argo_ringbuf_payload_space(struct domain *d, struct argo_ring_info *ring_info)
> > +{
> > +    argo_ring_t ring;
> > +    int32_t ret;
> > +
> > +    ASSERT(spin_is_locked(&ring_info->lock));
> > +
> > +    ring.len = ring_info->len;
> > +    if ( !ring.len )
> > +        return 0;
> > +
> > +    ring.tx_ptr = ring_info->tx_ptr;
> > +
> > +    if ( argo_ringbuf_get_rx_ptr(ring_info, &ring.rx_ptr) )
> > +        return 0;
> > +
> > +    argo_dprintk("argo_ringbuf_payload_space: tx_ptr=%d rx_ptr=%d\n",
> > +                 ring.tx_ptr, ring.rx_ptr);
> > +
> > +    if ( ring.rx_ptr == ring.tx_ptr )
> > +        return ring.len - sizeof(struct argo_ring_message_header);
> > +
> > +    ret = ring.rx_ptr - ring.tx_ptr;
> > +    if ( ret < 0 )
> > +        ret += ring.len;
>
> Seeing these two if()-s - how is an empty ring distinguished from
> a completely full one? I'm getting the impression that
> ring.rx_ptr == ring.tx_ptr in both cases.
The subtraction from ring.len above is missing an additional subtraction of
ARGO_ROUNDUP(1), which doesn't help with reasoning about this. (Fixed in v2.)
If rx_ptr == tx_ptr, the ring is empty. The ring insertion functions never
allow the ring to fill completely -- at least one message slot is always
left unoccupied -- so a full ring cannot produce rx_ptr == tx_ptr. I've
added more comments in the v2 code to explain this.
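To make the invariant concrete, a sender-side check of the following shape
keeps the two states distinguishable (a sketch only: the helper name and
the 16-byte slot size are my shorthand, not the patch code):

#include <stdint.h>

#define ARGO_SLOT_SIZE 16   /* assumed ARGO_ROUNDUP granularity */

/*
 * Return nonzero if a message of msg_len bytes (header included, already
 * rounded up to a slot multiple) may be inserted without consuming the
 * final free slot.
 */
static int ring_insert_ok(uint32_t len, uint32_t tx_ptr, uint32_t rx_ptr,
                          uint32_t msg_len)
{
    /* rx_ptr == tx_ptr means empty, so the whole ring is free. */
    uint32_t space = (rx_ptr == tx_ptr) ? len
                                        : (rx_ptr - tx_ptr + len) % len;

    /* Demand one spare slot so tx_ptr can never advance onto rx_ptr. */
    return (msg_len + ARGO_SLOT_SIZE) <= space;
}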
> > +    ret -= sizeof(struct argo_ring_message_header);
> > +    ret -= ARGO_ROUNDUP(1);
>
> Wouldn't you instead better round ret to a suitable multiple of
> whatever granularity you try to arrange for here? Otherwise
> what is this extra subtraction supposed to do?
re: subtraction, have added new comment:
/*
* The maximum payload size for a message that will be accepted is:
* (the available space between the ring indexes)
* minus (space for a message header)
* minus (space for one message slot)
* since argo_ringbuf_insert requires that one message slot be left
* unfilled, to avoid filling the ring to capacity and confusing a full
* ring with an empty one.
*/
re: rounding: possibly. In practice both sides update the indexes in
quantized steps matching the ARGO_ROUNDUP unit, so I'm not sure it needs
to change.
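For illustration, the corrected computation then comes out roughly as
below (a sketch: the 16-byte ARGO_ROUNDUP body and the 16-byte header
size reflect my reading of the series, so treat both as assumptions):

#include <stdint.h>

#define ARGO_ROUNDUP(a)  (((a) + 0xf) & ~0xf)  /* 16-byte slot unit */
#define MSG_HDR_SIZE     16  /* sizeof(struct argo_ring_message_header) */

static int32_t payload_space(uint32_t len, uint32_t tx_ptr, uint32_t rx_ptr)
{
    /* Bytes available between the indexes; an empty ring offers all of it. */
    int32_t ret = (rx_ptr == tx_ptr) ? (int32_t)len
                                     : (int32_t)rx_ptr - (int32_t)tx_ptr;
    if ( ret < 0 )
        ret += (int32_t)len;

    /* Deduct one message header plus the one reserved, unfillable slot. */
    ret -= MSG_HDR_SIZE;
    ret -= ARGO_ROUNDUP(1);

    return (ret < 0) ? 0 : ret;
}

Since both indexes only ever move in ARGO_ROUNDUP-sized steps, their
difference is already slot-aligned, which is why I suspect no further
rounding of ret is needed.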
>
> > @@ -627,6 +679,43 @@ argo_pending_remove_all(struct argo_ring_info *ring_info)
> >      }
> >  }
> >
> > +static void
> > +argo_pending_notify(struct hlist_head *to_notify)
> > +{
> > +    struct hlist_node *node, *next;
> > +    struct argo_pending_ent *pending_ent;
> > +
> > +    ASSERT(rw_is_locked(&argo_lock));
> > +
> > +    hlist_for_each_entry_safe(pending_ent, node, next, to_notify, node)
> > +    {
> > +        hlist_del(&pending_ent->node);
> > +        argo_signal_domid(pending_ent->id);
> > +        xfree(pending_ent);
> > +    }
> > +}
> > +
> > +static void
> > +argo_pending_find(const struct domain *d, struct argo_ring_info *ring_info,
> > +                  uint32_t payload_space, struct hlist_head *to_notify)
> > +{
> > +    struct hlist_node *node, *next;
> > +    struct argo_pending_ent *ent;
> > +
> > +    ASSERT(rw_is_locked(&d->argo->lock));
> > +
> > +    spin_lock(&ring_info->lock);
> > +    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node)
> > +    {
> > +        if ( payload_space >= ent->len )
> > +        {
> > +            hlist_del(&ent->node);
> > +            hlist_add_head(&ent->node, to_notify);
> > +        }
> > +    }
>
> So if there's space available to fit e.g. just the first pending entry,
> you'd continue the loop and also signal all others, provided their
> lengths aren't too big? What good does producing such a burst of
> notifications do, when only one of the interested parties is
> actually going to be able to put something on the ring?
Added new comment:
/*
* TODO: Current policy here is to signal _all_ of the waiting domains
* interested in sending a message of size less than payload_space.
*
* This is likely to be suboptimal: once one of the signalled domains has
* added its message to the ring, there may well be insufficient room
* left for any of the others to transmit, meaning they will have been
* woken in vain, creating extra work just to requeue their waits.
*
* Retain this simple policy for now since it at least avoids starving a
* domain of available space notifications because of a policy that only
* notified other domains instead. Improvement may be possible;
* investigation required.
*/
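For comparison, a first-fit variant of the loop quoted above could deduct
each woken sender's claimed length before considering the next entry,
waking later waiters only if room should remain even after the earlier
ones transmit (illustrative only, not in the series):

    hlist_for_each_entry_safe(ent, node, next, &ring_info->pending, node)
    {
        if ( payload_space < ent->len )
            continue;

        /* Assume the woken sender will consume the space it asked for. */
        payload_space -= ent->len;
        hlist_del(&ent->node);
        hlist_add_head(&ent->node, to_notify);
    }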
> > +typedef struct argo_ring_data
> > +{
> > +    uint64_t magic;
>
> What is this good for?
New comment added:
/*
* Contents of the 'magic' field are inspected to verify that they contain
* an expected value before the hypervisor will perform writes into this
* structure in guest-supplied memory.
*/
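On the hypervisor side that amounts to roughly the following (a sketch:
the constant name and value here are illustrative, not from the patch):

#define ARGO_RING_DATA_MAGIC  0x4152474fUL  /* illustrative value */

    argo_ring_data_t ring_data;

    if ( copy_from_guest(&ring_data, ring_data_hnd, 1) )
        return -EFAULT;

    /* Refuse to write back through the guest handle without the magic. */
    if ( ring_data.magic != ARGO_RING_DATA_MAGIC )
        return -EINVAL;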
>
> > @@ -179,6 +214,33 @@ struct argo_ring_message_header
> > */
> > #define ARGO_MESSAGE_OP_sendv 5
> >
> > +/*
> > + * ARGO_MESSAGE_OP_notify
> > + *
> > + * Asks Xen for information about other rings in the system.
> > + *
> > + * ent->ring is the argo_addr_t of the ring you want information on.
> > + * Uses the same ring matching rules as ARGO_MESSAGE_OP_sendv.
> > + *
> > + * ent->space_required : if this field is not null then Xen will check
> > + * that there is space in the destination ring for this many bytes of
> > + * payload. If sufficient space is available, it will set
> > + * ARGO_RING_DATA_F_SUFFICIENT and CANCEL any pending notification for
> > + * that ent->ring; otherwise it will schedule a notification event and
> > + * the flag will not be set.
> > + *
> > + * These flags are set by Xen when notify replies:
> > + * ARGO_RING_DATA_F_EMPTY        ring is empty
> > + * ARGO_RING_DATA_F_PENDING      notify event is pending - *don't rely on this*
> > + * ARGO_RING_DATA_F_SUFFICIENT   sufficient space for space_required is there
> > + * ARGO_RING_DATA_F_EXISTS       ring exists
> > + *
> > + * arg1: XEN_GUEST_HANDLE(argo_ring_data_t) ring_data (may be NULL)
> > + * arg2: NULL
> > + * arg3: 0 (ZERO)
> > + * arg4: 0 (ZERO)
>
> Another observation I probably should have made earlier: you don't
> check that the arguments specified as NULL/ZERO are indeed so. Just
> like for padding fields, please do.
ack, thanks.
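For v2, the added check will be something along these lines (a sketch;
the argument names are illustrative):

    /*
     * The interface documents arg2 as NULL and arg3/arg4 as zero, so
     * reject the hypercall when the guest passes anything else.
     */
    if ( !guest_handle_is_null(arg2_hnd) || arg3 || arg4 )
        return -EINVAL;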
Christopher
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel