WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel


To: "xuehai zhang" <hai@xxxxxxxxxxxxxxx>, <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] MPI benchmark performance gap between native linux anddomU
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 5 Apr 2005 00:30:14 +0100
Delivery-date: Mon, 04 Apr 2005 23:30:13 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcU5bMT6NkR7fjeTQWSppVI/OndRYAAALSYQ
Thread-topic: [Xen-devel] MPI benchmark performance gap between native linux anddomU
> I did the following experiments to explore MPI application
> execution performance both on native Linux machines and inside
> unprivileged Xen user domains. I use 8 machines with identical
> HW configurations (498.756 MHz dual CPU, 512MB memory, on a
> 10MB/sec LAN) and I use the Pallas MPI Benchmarks (PMB).

> The experiment results show that running the same MPI benchmark in
> user domains usually results in worse (sometimes very bad)
> performance compared with native Linux machines. The following are
> the results for the PMB SendRecv benchmark for both experiments
> (table 1 and table 2 report throughput and latency respectively).
> As you may notice, SendRecv can achieve 14.9MB/sec throughput on
> native Linux machines but reaches a maximum of only 7.07MB/sec
> when running inside user domains. There is also a big gap in the
> latency results.

> I would appreciate your help if you have had a similar experience
> and want to share your insights.
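
For reference, the core operation that the PMB SendRecv test times is a
bidirectional exchange between neighbouring ranks. The following is a
minimal, self-contained sketch of that pattern (this is not the PMB
source; the message size, iteration count and reporting format are
illustrative choices):

/*
 * sendrecv_sketch.c - minimal SendRecv-style ring exchange.
 *
 * Each rank simultaneously sends to its right neighbour and receives
 * from its left neighbour, then rank 0 reports the per-iteration
 * latency and the aggregate (send + receive) throughput.
 *
 * Build/run with an MPICH/LAM-style toolchain (assumed):
 *   mpicc -O2 sendrecv_sketch.c -o sendrecv_sketch
 *   mpirun -np 8 ./sendrecv_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1MB message; arbitrary choice */
#define ITERS     100         /* arbitrary choice */

int main(int argc, char **argv)
{
    int rank, size, right, left, i;
    double t0, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc(MSG_BYTES);
    char *recvbuf = malloc(MSG_BYTES);
    right = (rank + 1) % size;
    left  = (rank - 1 + size) % size;

    /* Warm-up exchange so connection setup is not timed. */
    MPI_Sendrecv(sendbuf, MSG_BYTES, MPI_CHAR, right, 0,
                 recvbuf, MSG_BYTES, MPI_CHAR, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        MPI_Sendrecv(sendbuf, MSG_BYTES, MPI_CHAR, right, 0,
                     recvbuf, MSG_BYTES, MPI_CHAR, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* Count both directions, as SendRecv moves 2 * MSG_BYTES per iteration. */
        double mbytes = (double)MSG_BYTES * ITERS * 2 / (1024.0 * 1024.0);
        printf("latency: %.3f ms/iter, throughput: %.2f MB/sec\n",
               elapsed / ITERS * 1e3, mbytes / elapsed);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}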

Xen (or any kind of virtualization) is not particularly well suited to
MPI-type applications, at least unless you're using Infiniband or some
other smart NIC that avoids having to use dom0 to do the IO
virtualization.

However, the results you are seeing are lower than I'd expect.

Are you running dom0 and the domU on the same CPU or on different
CPUs? How does changing this affect the results?

Also, are you sure the MTU is the same in all cases?
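
For what it's worth, the MTU can be confirmed programmatically from
both dom0 and the domU with the SIOCGIFMTU ioctl (the same figure
ifconfig reports). A minimal sketch, assuming the interface is named
eth0:

/*
 * mtu_check.c - print the MTU of a network interface via SIOCGIFMTU.
 *   gcc -O2 mtu_check.c -o mtu_check && ./mtu_check eth0
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(int argc, char **argv)
{
    /* Interface name defaults to "eth0"; this is an assumption. */
    const char *ifname = (argc > 1) ? argv[1] : "eth0";
    struct ifreq ifr;

    /* Any socket will do as a handle for the interface ioctl. */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
        perror("SIOCGIFMTU");
        close(fd);
        return 1;
    }
    printf("%s MTU = %d\n", ifname, ifr.ifr_mtu);
    close(fd);
    return 0;
}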

Further, please can you repeat the experiments with just a dom0
running on each node.

Thanks,
Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel