
Re: [Xen-devel] [PATCH] net: allow configuration of the size of page in __netdev_alloc_frag



On Tue, Oct 30, 2012 at 12:53:09PM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Oct 24, 2012 at 06:43:20PM +0200, Eric Dumazet wrote:
> > On Wed, 2012-10-24 at 17:22 +0100, Ian Campbell wrote:
> > > On Wed, 2012-10-24 at 16:21 +0100, Eric Dumazet wrote:
> > 
> > > > If you really have such problems, why doesn't locally generated TCP
> > > > traffic also have it?
> > > 
> > > I think it does. The reason I noticed the original problem was that ssh
> > > to the machine was virtually (no pun intended) unusable.
> > > 
> > > > Your patch doesn't touch sk_page_frag_refill(), does it?
> > > 
> > > That's right. It doesn't. When is (sk->sk_allocation & __GFP_WAIT) true?
> > > Is it possible I'm just not hitting that case?
> > > 
> > 
> > I hope not. GFP_KERNEL has __GFP_WAIT.
> > 
> > > Is it possible that this only affects certain traffic patterns (I only
> > > really tried ssh/scp and ping)? Or perhaps it's just that the swiotlb is
> > > only broken in one corner case and not the other.
> > 
> > Could you try a netperf -t TCP_STREAM?
> 
> For fun I did a couple of tests - I set up two machines (one r8168, the other
> e1000e) and tried to do netperf/netserver. Both of them are running a
> bare-metal kernel, and one of them has 'iommu=soft swiotlb=force' to simulate
> the worst case. This is using v3.7-rc3.
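
As an aside, on the sk_page_frag_refill() question above: the order selection
there is keyed off the socket's allocation mask, which is why Eric expects
locally generated TCP (GFP_KERNEL, hence __GFP_WAIT) to take the high-order
path as well. Roughly, paraphrased from memory rather than quoting the 3.7
source, and with the wrapper name made up:

#include <net/sock.h>

/*
 * Sketch of the order selection in sk_page_frag_refill(): sockets whose
 * allocation mask allows sleeping try a high-order page first and fall
 * back towards order-0; atomic contexts go straight to a single page.
 */
static bool frag_refill_sketch(struct sock *sk, struct page_frag *pfrag)
{
	/* Only sleepable allocations get to try the big (~32KB) pages. */
	int order = (sk->sk_allocation & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;

	do {
		gfp_t gfp = sk->sk_allocation;

		if (order)
			gfp |= __GFP_COMP | __GFP_NOWARN;
		pfrag->page = alloc_pages(gfp, order);
		if (pfrag->page) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);	/* retry with a smaller order on failure */

	return false;
}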

I also did a test with the patch at the top, with the same setup, and ... it
does look like it fixes some of the issues, but not the underlying one.

The same test with net.core.netdev_frag_page_max_order=0: the e1000e->r8169
direction gets ~124 on the first run, but on subsequent runs it picks up to
~933. If I let the machine sit idle for a while and then do this again, it is
back down to around ~124.
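
Roughly, the knob being set above caps the starting order of the per-CPU frag
cache refill in __netdev_alloc_frag(), so 0 forces plain 4KB pages instead of
larger compound allocations. A sketch of that shape - illustrative only, the
helper name is made up and only the sysctl spelling matches what I actually set:

#include <linux/gfp.h>

/* Order bound exposed as net.core.netdev_frag_page_max_order (default assumed
 * here to be order-3, i.e. 32KB chunks). */
static int netdev_frag_page_max_order = 3;

static struct page *netdev_frag_refill_sketch(gfp_t gfp_mask)
{
	struct page *page;
	int order = netdev_frag_page_max_order;

	for (;;) {
		gfp_t gfp = gfp_mask;

		if (order)
			gfp |= __GFP_COMP | __GFP_NOWARN;
		page = alloc_pages(gfp, order);
		if (page)
			return page;	/* PAGE_SIZE << order bytes to carve frags from */
		if (--order < 0)
			return NULL;	/* even a single 4KB page failed */
	}
}

With the sysctl at 0 the loop only ever hands out single 4KB pages, which is
what the numbers above were measured with.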

Thoughts?
