
Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: "Palagummi, Siva" <Siva.Palagummi@xxxxxx>
  • Date: Tue, 14 Aug 2012 21:17:38 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 14 Aug 2012 21:18:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac146Cc2rIyVmg4PT4CbrdsCnnwOsQAF0kYAAFVDoZA=
  • Thread-topic: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used


> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Monday, August 13, 2012 1:58 PM
> To: Palagummi, Siva
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH RFC] xen/netback: Count ring slots
> properly when larger MTU sizes are used
> 
> >>> On 13.08.12 at 02:12, "Palagummi, Siva" <Siva.Palagummi@xxxxxx>
> wrote:
> >--- a/drivers/net/xen-netback/netback.c      2012-01-25 19:39:32.000000000
> -0500
> >+++ b/drivers/net/xen-netback/netback.c      2012-08-12 15:50:50.000000000
> -0400
> >@@ -623,6 +623,24 @@ static void xen_netbk_rx_action(struct x
> >
> >             count += nr_frags + 1;
> >
> >+            /*
> >+             * The logic here should be somewhat similar to
> >+             * xen_netbk_count_skb_slots. In case of larger MTU size,
> 
> Is there a reason why you can't simply use that function then?
> Afaict it's being used on the very same skb before it gets put on
> rx_queue already anyway.
> 

I did think about it. But that would mean iterating through a similar piece of 
code twice, with additional function calls; the netbk_gop_skb-->netbk_gop_frag_copy 
sequence already executes similar code. I was also not sure about any other 
implications, so I decided to fix it by adding a few lines of code inline.

> >+             * skb head length may be more than a PAGE_SIZE. We need to
> >+             * consider ring slots consumed by that data. If we do not,
> >+             * then within this loop itself we end up consuming more meta
> >+             * slots turning the BUG_ON below. With this fix we may end up
> >+             * iterating through xen_netbk_rx_action multiple times
> >+             * instead of crashing netback thread.
> >+             */
> >+
> >+
> >+            count += DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> 
> This now over-accounts by one I think (due to the "+ 1" above;
> the calculation here really is to replace that increment).
> 
> Jan
> 
I also wasn't sure about the actual purpose of the "+1" above, whether it is 
meant to account for skb_headlen, the non-zero gso_size case, or something else. 
That's why I left it as is, so that the loop exits on the safe side. If someone 
who knows this area of the code can confirm that we do not need it, I will 
create a new patch. In my environment I did observe that, with my patch, "count" 
is always greater than the actual number of meta slots produced because of this 
extra "+1". When I removed it, "count" was always equal to the actual meta slots 
produced and the loop exited safely, with more meta slots produced under heavy 
traffic.

Thanks
Siva
 
> >+
> >+            if (skb_shinfo(skb)->gso_size)
> >+                    count++;
> >+
> >             __skb_queue_tail(&rxq, skb);
> >
> >             /* Filled the batch queue? */
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
