hello,
thank you, but that didn't help either. Another strange observation I
made: when I run iperf within the domU itself, I get the following
results:
iperf to localhost: 2.8 Gbit/s
iperf to the domU's real IP: 200 Mbit/s
(see the full output at the end).
ciao,
artur
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
VM_OTRS:~# iperf -c 192.168.70.50
------------------------------------------------------------
Client connecting to 192.168.70.50, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.70.50 port 41309 connected with 192.168.70.50 port 5001
[ 4] local 192.168.70.50 port 5001 connected with 192.168.70.50 port 41309
[ 3] 0.0-10.0 sec 239 MBytes 200 Mbits/sec
[ 4] 0.0-10.0 sec 239 MBytes 200 Mbits/sec
VM_OTRS:~# iperf -c 127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 56978 connected with 127.0.0.1 port 5001
[ 4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 56978
[ 3] 0.0-10.0 sec 3.25 GBytes 2.79 Gbits/sec
[ 4] 0.0-10.0 sec 3.25 GBytes 2.79 Gbits/sec
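
The listener side is not shown above; the 85.3 KByte default window in
the server banner suggests it was started with a plain "iperf -s". The
whole test inside the domU then boils down to:

    # shell 1 on the domU: start the iperf server (listens on TCP port 5001)
    iperf -s

    # shell 2 on the domU: loopback first, then the domU's real IP
    iperf -c 127.0.0.1
    iperf -c 192.168.70.50
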
On Thursday, 09.03.2006, at 09:14 -0500, Himanshu Raj wrote:
> You will have to tune your TCP parameters, like window sizes etc. I don't
> have a ready reference on this, but try the following settings in your domU.
>
> The following sysctl parameters must be tuned in order to get GigE
> bandwidth from domUs. Either set them via sysctl -w or put them in
> /etc/sysctl.conf.
>
> # increase TCP maximum buffer size
> net.core.rmem_max=16777216
> net.core.wmem_max=16777216
>
> # increase Linux autotuning TCP buffer limits
> # min, default, and maximum number of bytes to use
> net.ipv4.tcp_rmem="4096 87380 16777216"
> net.ipv4.tcp_wmem="4096 65536 16777216"
>
> -Himanshu
>
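
For completeness, applying those settings looks like this (same values
as quoted above; sysctl -w changes them at runtime only, so to survive a
reboot the same four lines go into /etc/sysctl.conf):

    # runtime change (lost on reboot)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # after adding them to /etc/sysctl.conf, reload with:
    sysctl -p
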
> On Wed, Mar 08, 2006 at 02:27:29PM +0100, Artur Schiefer wrote:
> > hello philipp,
> >
> > this is my xentop output while running the test. Do you achieve
> > higher rates? (It should only be limited by memory throughput.)
> >
> > ciao,
> > artur
> >
> > xentop - 14:21:58   Xen 3.0.1
> > 2 domains: 1 running, 0 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
> > Mem: 1048104k total, 1039148k used, 8956k free    CPUs: 4 @ 2799MHz
> > NAME       STATE  CPU(sec)  CPU(%)  MEM(k)  MEM(%)  MAXMEM(k)  MAXMEM(%)  VCPUS  NETS  NETTX(k)  NETRX(k)  SSID
> > Domain-0   -----r     1170   109.3  885948    84.5   no limit        n/a      4     8         0         0     0
> > vm_apache  ------      690    72.4  131068    12.5     139264       13.3      2     1   1196352     31262     0
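
(xentop runs in dom0; assuming your xentop build supports batch mode, a
snapshot like the above can also be captured non-interactively for
logging during a test:

    # one batch-mode iteration, redirected to a file
    xentop -b -i 1 > xentop.log
)
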
> > On Wednesday, 08.03.2006, at 12:52 +0100, Philipp Jäggi wrote:
> > >
> > > Did you record the xentop output during your test? How much mem-max
> > > do you have for dom0 and your domU?
> > >
> > > bye Philipp
> > >
> > > Artur Schiefer <aschiefer@xxxxxx>
> > > Sent by: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > 03/08/2006 12:47 PM
> > > Please respond to: aschiefer@xxxxxx
> > > To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
> > > cc:
> > > Subject: [Xen-users] Slow domU network
> > >
> > > hello,
> > >
> > > When I test network performance between dom0 <-> domU (Xen 3.0.1tha3,
> > > Debian patches) with iperf, I get only about 450 Mbit/s of throughput
> > > and lots of dropped packets on the vif interface (bridged). In
> > > contrast, when I run the same test between two dom0s with
> > > bonded/teamed gigabit NICs, I am able to achieve 1.6 Gbit/s of
> > > throughput.
> > > Has someone made this observation as well (what is your throughput)?
> > > Any solutions?
> > >
> > > cheers,
> > > artur
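
As a side note on the dropped packets mentioned above: the drop counters
on the backend interface can be read from dom0, e.g. (assuming the domU
has domain id 1, so its vif is named vif1.0 -- adjust to your setup):

    # per-interface statistics in dom0, including dropped packets
    ifconfig vif1.0

    # or a one-line summary per interface (RX-DRP / TX-DRP columns)
    netstat -i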
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users