
Re: [Xen-devel] [PATCH net 2/3] xen-netback: worse-case estimate in xenvif_rx_action is underestimating



> -----Original Message-----
> From: Ian Campbell
> Sent: 27 March 2014 12:28
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; Wei Liu; Sander
> Eikelenboom
> Subject: Re: [PATCH net 2/3] xen-netback: worse-case estimate in
> xenvif_rx_action is underestimating
> 
> On Thu, 2014-03-27 at 12:23 +0000, Paul Durrant wrote:
> > The worse-case estimate for skb ring slot usage in xenvif_rx_action()
> > fails to take fragment page_offset into account. The page_offset does,
> > however, affect the number of times the fragmentation code calls
> > start_new_rx_buffer() (i.e. consume another slot) and the worse-case
> > should assume that will always return true. This patch adds the page_offset
> > into the DIV_ROUND_UP for each frag.
> 
> At least for the copying mode wasn't the idea that you would copy to the
> start of the page, so the offset wasn't relevant? IOW is the real issue
> that start_new_rx_buffer is/was too aggressive?
> 
> Now that we do mapping though I suspect the offset becomes relevant
> again here and there is a 1:1 mapping from slots to frags again.
> 

We're always in copying mode. This is guest receive side :-)

> (I could have sworn David V got rid of all this precalculating stuff.)
> 

He did modify it. I got rid of it in favour of the best-case and worse-case 
estimations.
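
To make that concrete (purely an illustration, not code from the patch: the
fragment size and offset below are made-up values and PAGE_SIZE is assumed
to be 4096):

#include <stdio.h>

#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* Hypothetical frag: 100 bytes starting 4090 bytes into its page. */
	unsigned int size = 100, offset = 4090;

	/* The old estimate ignores the offset and sees a single slot. */
	printf("ignoring offset: %u slot(s)\n",
	       DIV_ROUND_UP(size, PAGE_SIZE));

	/* Folding the offset in gives two slots, because the data
	 * straddles a page boundary.
	 */
	printf("with offset:     %u slot(s)\n",
	       DIV_ROUND_UP(offset + size, PAGE_SIZE));

	return 0;
}

The old calculation allows one slot for that frag; with the offset folded in,
the estimate correctly allows for two.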

  Paul

> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> > Cc: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
> > ---
> >  drivers/net/xen-netback/netback.c |   12 +++++++++++-
> >  1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index befc413..ac35489 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -492,8 +492,18 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                                             PAGE_SIZE);
> >             for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> >                     unsigned int size;
> > +                   unsigned int offset;
> > +
> >                     size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
> > -                   max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> > +                   offset = skb_shinfo(skb)->frags[i].page_offset;
> > +
> > +                   /* For a worse-case estimate we need to factor in
> > +                    * the fragment page offset as this will affect the
> > +                    * number of times xenvif_gop_frag_copy() will
> > +                    * call start_new_rx_buffer().
> > +                    */
> > +                   max_slots_needed += DIV_ROUND_UP(offset + size,
> > +                                                    PAGE_SIZE);
> >             }
> >             if (skb_is_gso(skb) &&
> >                (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
> 
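
(Purely for reference, the worse case the estimate above has to cover is
roughly the per-page walk below. This is a simplified sketch, not the real
xenvif_gop_frag_copy() code: frag_worst_case_slots() is a made-up name,
PAGE_SIZE is as in the earlier example, and it assumes start_new_rx_buffer()
returns true for every chunk.)

/* Simplified model: count the ring slots a single frag consumes if
 * every page boundary crossed forces a new buffer.
 */
static unsigned int frag_worst_case_slots(unsigned int offset,
					  unsigned int size)
{
	unsigned int slots = 0;

	while (size) {
		/* Bytes left in the current page of the frag. */
		unsigned int chunk = PAGE_SIZE - (offset % PAGE_SIZE);

		if (chunk > size)
			chunk = size;

		slots++;	/* a new buffer is consumed for this chunk */
		offset += chunk;
		size -= chunk;
	}

	return slots;
}

For the 4090 + 100 frag in the earlier example this returns 2, matching
DIV_ROUND_UP(offset + size, PAGE_SIZE).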



 

