
Re: [Xen-devel] [PATCH v2 2/2] argo: correctly report pending message length



On Tue, Jun 11, 2019 at 2:14 PM Nicholas Tsirakis
<niko.tsirakis@xxxxxxxxx> wrote:
>
> When a message is requeued in Xen's internal queue, the queue
> entry contains the length of the message so that Xen knows to
> send a VIRQ to the respective domain when enough space frees up
> in the ring. Due to a small bug, however, Xen doesn't populate
> the length of the msg if a given write fails, so this length is
> always reported as zero. This causes Xen to spuriously wake up
> a domain even when the ring doesn't have enough space.
>
> This patch makes sure that the msg len is properly reported by
> populating it in the event of a write failure.
>
> Signed-off-by: Nicholas Tsirakis <tsirakisn@xxxxxxxxxxxx>
> ---
>  xen/common/argo.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/xen/common/argo.c b/xen/common/argo.c
> index 2f874a570d..31baf4beed 100644
> --- a/xen/common/argo.c
> +++ b/xen/common/argo.c
> @@ -766,26 +766,20 @@ static int
>  ringbuf_insert(const struct domain *d, struct argo_ring_info *ring_info,
>                 const struct argo_ring_id *src_id, xen_argo_iov_t *iovs,
>                 unsigned int niov, uint32_t message_type,
> -               unsigned long *out_len)
> +               unsigned int len)

The above can be reflowed, with the len argument on the same line as
niov and message_type without exceeding the maximum line length.
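For illustration, the reflowed declaration could look like this (a
sketch only, assuming the surrounding parameters are otherwise
unchanged and the usual 80-column limit applies):

    static int
    ringbuf_insert(const struct domain *d, struct argo_ring_info *ring_info,
                   const struct argo_ring_id *src_id, xen_argo_iov_t *iovs,
                   unsigned int niov, uint32_t message_type, unsigned int len)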

>  {
>      xen_argo_ring_t ring;
>      struct xen_argo_ring_message_header mh = { };
>      int sp, ret;
> -    unsigned int len = 0;
>      xen_argo_iov_t *piov;
>      XEN_GUEST_HANDLE(uint8) NULL_hnd = { };
>
>      ASSERT(LOCKING_L3(d, ring_info));
>
>      /*
> -     * Obtain the total size of data to transmit -- sets the 'len' variable
> -     * -- and sanity check that the iovs conform to size and number limits.
>       * Enforced below: no more than 'len' bytes of guest data
>       * (plus the message header) will be sent in this operation.
>       */
> -    ret = iov_count(iovs, niov, &len);
> -    if ( ret )
> -        return ret;
>
>      /*
>       * Upper bound check the message len against the ring size.
> @@ -983,8 +977,6 @@ ringbuf_insert(const struct domain *d, struct argo_ring_info *ring_info,
>       * versus performance cost could be added to decide that here.
>       */
>
> -    *out_len = len;
> -
>      return ret;
>  }
>
> @@ -1976,7 +1968,7 @@ sendv(struct domain *src_d, xen_argo_addr_t *src_addr,
>      struct argo_ring_id src_id;
>      struct argo_ring_info *ring_info;
>      int ret = 0;
> -    unsigned long len = 0;
> +    unsigned int len = 0;
>
>      argo_dprintk("sendv: (%u:%x)->(%u:%x) niov:%u type:%x\n",
>                   src_addr->domain_id, src_addr->aport, dst_addr->domain_id,
> @@ -2044,8 +2036,16 @@ sendv(struct domain *src_d, xen_argo_addr_t *src_addr,
>      {
>          spin_lock(&ring_info->L3_lock);
>
> +        /*
> +         * Obtain the total size of data to transmit -- sets the 'len' variable
> +         * -- and sanity check that the iovs conform to size and number limits.
> +         */
> +        ret = iov_count(iovs, niov, &len);
> +        if ( ret )
> +            return ret;

Returning at this point would be bad, as the L3_lock taken above
would not be released.

Perhaps it would be better to test for !ret instead, and only perform
the insert logic up to the existing unlock when that succeeds -- e.g.
as sketched below.
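Something along these lines, perhaps (a sketch only, untested; the
"..." stands for the existing -EAGAIN handling, and the unlock is the
one already present in the current code):

        spin_lock(&ring_info->L3_lock);

        /*
         * Obtain the total size of data to transmit -- sets the 'len'
         * variable -- and sanity check that the iovs conform to size
         * and number limits.
         */
        ret = iov_count(iovs, niov, &len);
        if ( !ret )
        {
            ret = ringbuf_insert(dst_d, ring_info, &src_id, iovs, niov,
                                 message_type, len);
            ...
        }

        spin_unlock(&ring_info->L3_lock);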

> +
>          ret = ringbuf_insert(dst_d, ring_info, &src_id, iovs, niov,
> -                             message_type, &len);
> +                             message_type, len);
>          if ( ret == -EAGAIN )
>          {
>              int rc;

Christopher
