
Re: [Xen-devel] [PATCH net-next v7 8/9] xen-netback: Timeout packets in RX path



On Thu, 2014-03-06 at 21:48 +0000, Zoltan Kiss wrote:
> @@ -557,12 +577,25 @@ void xenvif_disconnect(struct xenvif *vif)
>  void xenvif_free(struct xenvif *vif)
>  {
>       int i, unmap_timeout = 0;
> +     /* Here we want to avoid timeout messages if an skb can be legitimatly

"legitimately"

> +      * stucked somewhere else. Realisticly this could be an another vif's

"stuck" and "Realistically"

> +      * internal or QDisc queue. That another vif also has this
> +      * rx_drain_timeout_msecs timeout, but the timer only ditches the
> +      * internal queue. After that, the QDisc queue can put in worst case
> +      * XEN_NETIF_RX_RING_SIZE / MAX_SKB_FRAGS skbs into that another vif's
> +      * internal queue, so we need several rounds of such timeouts until we
> +      * can be sure that no another vif should have skb's from us. We are
> +      * not sending more skb's, so newly stucked packets are not interesting

"stuck" again.

> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 560950e..bb65c7c 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -63,6 +63,13 @@ module_param(separate_tx_rx_irq, bool, 0644);
>  static unsigned int fatal_skb_slots = FATAL_SKB_SLOTS_DEFAULT;
>  module_param(fatal_skb_slots, uint, 0444);
>  
> +/* When guest ring is filled up, qdisc queues the packets for us, but we have
> + * to timeout them, otherwise other guests' packets can get stucked there

"stuck"

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel