Re: [Xen-devel] I/O descriptor ring size bottleneck?

To: Diwaker Gupta <diwakergupta@xxxxxxxxx>
Subject: Re: [Xen-devel] I/O descriptor ring size bottleneck?
From: Nivedita Singhvi <niv@xxxxxxxxxx>
Date: Mon, 21 Mar 2005 15:42:41 -0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 21 Mar 2005 23:44:12 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1b0b4557050320134748a984df@xxxxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <1b0b4557050320134748a984df@xxxxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.5) Gecko/20041217
Diwaker Gupta wrote:

> Hi everyone,
> 
> I'm doing some networking experiments over high BDP topologies. Right
> now the configuration is quite simple -- two Xen boxes connected via a
> dummynet router. The dummynet router is set to limit bandwidth to
> 500 Mbps and simulate an RTT of 80 ms.
> 
> I'm using the following sysctl values:
> net.ipv4.tcp_rmem = 4096        87380   4194304
> net.ipv4.tcp_wmem = 4096        65536   4194304

If you're trying to tune TCP traffic, then you might
want to increase the default TCP socket size (87380) above
as well, as simply raising the net.core maximums won't
help there.
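
As a rough sanity check on the sizes: at 500 Mbit/s and an 80 ms RTT the
aggregate bandwidth-delay product is about 500,000,000 * 0.08 / 8 =
5,000,000 bytes (~5 MB), or roughly 100 KB per flow if the 50 flows share
the link evenly, so the 87380-byte default is on the small side. Purely as
an illustration (the values are placeholders, not a recommendation),
bumping the defaults and the core maximums together might look like:

  net.core.rmem_max = 8388608
  net.core.wmem_max = 8388608
  net.ipv4.tcp_rmem = 4096  1048576  8388608
  net.ipv4.tcp_wmem = 4096  1048576  8388608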

> Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
> inside a VM on one box talking to the netserver on the VM on the
> other box, I get a per-flow throughput of around 2.5 Mbps (which
> sucks, but let's ignore the absolute value for the moment).
> 
> If I run the same test, but this time from inside dom0, I get a
> per-flow throughput of around 6 Mbps.

Could you get any further information on your test/data?
Which netperf test were you running, btw?
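
For example, something along these lines (host name is a placeholder,
and -l 80 just matches your 80-second runs) is what I'd expect for a
bulk-transfer test:

  # one of the 50 concurrent senders, TCP bulk transfer for 80 seconds
  netperf -H <receiver-vm> -t TCP_STREAM -l 80

whereas a request/response test (TCP_RR) would behave quite differently
over an 80 ms RTT.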

> I'm trying to understand the difference in performance. It seems to me
> that the I/O descriptor ring sizes are hard-coded to 256 -- could that
> be a bottleneck here? If not, have people experienced similar problems?

Someone on this list had posted that they would be getting
oprofile working soon - you might want to retry your testing
with that patch.
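
On the ring size question: as a very rough back-of-the-envelope (assuming
on the order of one 4 KB page per slot, which may not match the actual
netfront/netback buffer layout), 256 slots would bound the data in flight
across the ring to roughly

  256 slots * 4096 bytes = 1,048,576 bytes  (~1 MB)

per direction. Over an 80 ms RTT that would cap aggregate throughput near
1 MB / 0.08 s ~= 100 Mbit/s, which is at least in the same ballpark as your
50 * 2.5 Mbit/s domU numbers, and dom0 traffic doesn't cross that ring, so
it's a plausible suspect -- but that's speculation until it's profiled.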

thanks,
Nivedita



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel