
Re: [Xen-devel] [PATCH RFC V2] xen/netback: Count ring slots properly when larger MTU sizes are used



On Thu, Dec 20, 2012 at 10:05:29AM +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 19:43 +0000, Matt Wilson wrote:
[...]
> > I see SKBs with:
> >   skb_headlen(skb) == 8157
> >   offset_in_page(skb->data) == 64
> > 
> > when handling long streaming ingress flows from ixgbe with MTU (on the
> > NIC and both sides of the VIF) set to 9000. When all the SKBs making
> > up the flow have the above property, xen-netback uses three pages
> > instead of two. The first buffer gets 4032 bytes copied into it. The next
> > buffer gets 4096 bytes copied into it. The final buffer gets 29 bytes
> > copied into it. See this post in the archives for a more detailed
> > walk through netbk_gop_frag_copy():
> >   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
> 
> Thanks. This certainly seems wrong for the head bit.
> 
> > What's the downside to making start_new_rx_buffer() always try to
> > fill each buffer?
> 
> As we discussed earlier in the thread, it doubles the number of copy
> ops per frag under some circumstances. My gut is that this isn't going
> to hurt, but that's just my gut.
> 
> It seems obviously right that the linear part of the SKB should always
> fill entire buffers though. Perhaps the answer is to differentiate
> between the skb->data and the frags?
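
To make the 4032/4096/29 split described above concrete, here is a small
standalone model (not the netback code itself) of how netbk_gop_frag_copy()
chunks the linear area: each grant copy is capped at the source page
boundary, and a helper modelled on start_new_rx_buffer() decides when to
move to the next ring buffer. It assumes MAX_BUFFER_OFFSET == PAGE_SIZE
(4096).

/* Illustrative model only -- compile with: gcc -o slots slots.c */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE         4096UL
#define MAX_BUFFER_OFFSET PAGE_SIZE

/*
 * Modelled on start_new_rx_buffer(): start a fresh buffer if the current
 * one is full, or if this chunk would overflow it but fits whole in the
 * next one (and this is not the head's first chunk).
 */
static bool start_new_rx_buffer(unsigned long offset, unsigned long size,
                                int head)
{
        if (offset == MAX_BUFFER_OFFSET)
                return true;
        if (offset + size > MAX_BUFFER_OFFSET &&
            size <= MAX_BUFFER_OFFSET && offset && !head)
                return true;
        return false;
}

int main(void)
{
        unsigned long size = 8157;      /* skb_headlen(skb)             */
        unsigned long src_off = 64;     /* offset_in_page(skb->data)    */
        unsigned long dst_off = 0;      /* fill level of current buffer */
        int head = 1, bufs = 1, copies = 0;

        while (size) {
                /* A grant copy may not cross the source page boundary... */
                unsigned long bytes = PAGE_SIZE - src_off;

                if (bytes > size)
                        bytes = size;

                if (start_new_rx_buffer(dst_off, bytes, head)) {
                        bufs++;
                        dst_off = 0;
                }
                /* ...nor the destination buffer boundary. */
                if (dst_off + bytes > MAX_BUFFER_OFFSET)
                        bytes = MAX_BUFFER_OFFSET - dst_off;

                copies++;
                printf("buffer %d: %lu bytes at offset %lu\n",
                       bufs, bytes, dst_off);

                dst_off += bytes;
                src_off = (src_off + bytes) % PAGE_SIZE;
                size -= bytes;
                head = 0;
        }
        printf("total: %d buffers, %d copy ops\n", bufs, copies);
        return 0;
}

This prints 4032, 4096 and 29 bytes landing in three separate buffers,
matching the numbers above.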

We've written a patch that does exactly that, differentiating the linear
area from the frags. It's stable and performs well in our testing so far.
I'll need to forward-port it to the latest Linux tree, test it there, and
post it.
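
For comparison, here is the same model with the policy Ian suggests for
the linear area: keep filling the current ring buffer and only move on
once it is full. This is just an illustrative sketch under the same
assumptions as the model above, not the patch referred to here; copies
are still capped at both the source page and the destination buffer
boundary, which is where the extra copy ops come from.

/* Illustrative sketch only -- not the actual patch. */
#include <stdio.h>

#define PAGE_SIZE         4096UL
#define MAX_BUFFER_OFFSET PAGE_SIZE

int main(void)
{
        unsigned long size = 8157;      /* skb_headlen(skb)             */
        unsigned long src_off = 64;     /* offset_in_page(skb->data)    */
        unsigned long dst_off = 0;      /* fill level of current buffer */
        int bufs = 1, copies = 0;

        while (size) {
                /* Cap at the source page boundary... */
                unsigned long bytes = PAGE_SIZE - src_off;

                if (bytes > size)
                        bytes = size;

                /* Only start a new buffer once the current one is full. */
                if (dst_off == MAX_BUFFER_OFFSET) {
                        bufs++;
                        dst_off = 0;
                }
                /* ...and at the destination buffer boundary. */
                if (dst_off + bytes > MAX_BUFFER_OFFSET)
                        bytes = MAX_BUFFER_OFFSET - dst_off;

                copies++;
                printf("buffer %d: %lu bytes at offset %lu\n",
                       bufs, bytes, dst_off);

                dst_off += bytes;
                src_off = (src_off + bytes) % PAGE_SIZE;
                size -= bytes;
        }
        printf("total: %d buffers, %d copy ops\n", bufs, copies);
        return 0;
}

For the 8157-byte / offset-64 case this packs the head into two buffers
(4096 + 4061 bytes) using four copy ops instead of three, i.e. one ring
slot saved at the cost of one extra grant copy.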

Matt

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel