Re: [Xen-devel] TSQ accounting skb->truesize degrades throughput for large packets
On Mon, 2013-09-09 at 22:41 +0100, Zoltan Kiss wrote:
> On 07/09/13 18:21, Eric Dumazet wrote:
> > On Fri, 2013-09-06 at 10:00 -0700, Eric Dumazet wrote:
> >> On Fri, 2013-09-06 at 17:36 +0100, Zoltan Kiss wrote:
> >>
> >>> So I guess it would be good to revisit the default value of this
> >>> setting.
> >>
> >> If ixgbe requires 3 TSO packets in TX ring to get line rate, you also
> >> can tweak dev->gso_max_size from 65535 to 64000.
> >
> > Another idea would be to no longer use tcp_limit_output_bytes but
> >
> > max(sk_pacing_rate / 1000, 2*MSS)
>
> I've tried this on a freshly updated upstream, and it solved my problem
> on ixgbe:
>
> - if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) {
> + if (atomic_read(&sk->sk_wmem_alloc) >= max(sk->sk_pacing_rate / 1000, 2 * mss_now)) {
>
> Now I can get proper line rate. Btw. I've tried to decrease
> dev->gso_max_size to 60K or 32K, both was ineffective.
Yeah, my own test was more like the following:
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 7c83cb8..07dc77a 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1872,7 +1872,8 @@ static bool tcp_write_xmit(struct sock *sk, unsigned int mss_now, int nonagle,
/* TSQ : sk_wmem_alloc accounts skb truesize,
* including skb overhead. But thats OK.
*/
-		if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) {
+		if (atomic_read(&sk->sk_wmem_alloc) >= max(2 * mss_now,
+						    sk->sk_pacing_rate >> 8)) {
set_bit(TSQ_THROTTLED, &tp->tsq_flags);
break;
}
Note that it also seems to make Hystart happier.
I will send patches when all tests are green.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel