Re: [Xen-devel] [PATCH] xen/netfront: improve truesize tracking
On Mon, 2013-01-07 at 13:41 +0000, Ian Campbell wrote:
> > UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> > 192.168.1.1 (192.168.1.1) port 0 AF_INET : +/-2.500% @ 95% conf. : demo
> > Socket Message Elapsed Messages
> > Size Size Time Okay Errors Throughput
> > bytes bytes secs # # KBytes/sec
> >
> > current 212992 65507 60.00 252586 0 269305.73
> > current 2280 60.00 229371 244553.96
> > patched 212992 65507 60.00 256209 0 273168.32
> > patched 2280 60.00 201670 215019.54
>
> The recv numbers here aren't too pleasing either.
The number of packets that can be queued on a UDP socket depends on
sk->sk_rcvbuf (SO_RCVBUF) and skb truesize.
So what we see here are packet drops (netstat -s would give the
total counters).
To absorb a burst of incoming messages, an application has to set an
appropriately sized receive buffer.
In this case, RCVBUF was set to the bare minimum, basically not
allowing more than one queued packet.
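As a userspace sketch of that interaction (illustrative only; the exact
number of datagrams the kernel queues depends on kernel version, skb
truesize, and the SO_RCVBUF minimum clamp):

```python
import socket

# Receiver with a deliberately tiny receive buffer, to show how
# SO_RCVBUF bounds the number of datagrams the kernel will queue.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 2048)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

# Linux doubles the requested value (to account for bookkeeping
# overhead) and clamps it at a minimum, so read back the real value.
effective = rx.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(100):
    tx.sendto(b"x" * 1400, addr)

# Drain whatever was queued; everything beyond the buffer was dropped
# (the drops show up in netstat -su as receive buffer errors).
rx.setblocking(False)
received = 0
while True:
    try:
        rx.recv(2048)
        received += 1
    except BlockingIOError:
        break

print(f"effective rcvbuf: {effective}, queued: {received} of 100 sent")
```

With a buffer this small, only a handful of the 100 datagrams survive;
the rest are charged against sk_rcvbuf by their truesize and dropped.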
The TCP stack has a 'collapse' mode, which converts the skbs in the
receive queue (or the out-of-order queue) into better-filled ones
(skb->len very close to skb->truesize) when memory limits are about
to be hit.
It's expensive, as it adds one more copy stage, but it happens only
in rare circumstances. Of course, when a driver uses one 4096-byte
page to store a single 1514-byte Ethernet frame, it happens more often.
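The cost of an inflated truesize can be made concrete with rough
arithmetic (the per-skb metadata figure below is an illustrative
assumption, not an exact kernel value):

```python
# Illustrative arithmetic only: exact truesize values vary by kernel,
# driver, and allocator, but the ratio is what matters.

rcvbuf = 212992          # SO_RCVBUF from the netperf run above
frame_len = 1514         # one full-size Ethernet frame

# A driver that backs each frame with a whole 4096-byte page charges
# roughly page size plus sk_buff metadata (assumed ~256 bytes here):
truesize_page = 4096 + 256

# A driver that copies into a tightly sized buffer charges closer to
# the frame rounded up to a 2 KB slab object, plus the same metadata:
truesize_tight = 2048 + 256

pkts_page = rcvbuf // truesize_page    # packets queueable, page-backed
pkts_tight = rcvbuf // truesize_tight  # packets queueable, tight copy

print(pkts_page, pkts_tight)
```

So for the same SO_RCVBUF, the page-per-frame driver lets roughly half
as many packets queue before drops (or collapsing) kick in.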
netstat -s | grep collap
25292 packets collapsed in receive queue due to low socket buffer
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel