
[Xen-devel] New MPI benchmark performance results (update)

Hi all,

In a post I sent in early April (http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00091.html), I reported a performance gap when running the PMB SendRecv benchmark on native Linux and on domU. I have now prepared a webpage comparing the performance of 8 PMB benchmarks under 4 scenarios (native Linux, dom0, domU with SMP, and domU without SMP): http://people.cs.uchicago.edu/~hai/vm1/vcluster/PMB/.

In the graphs on the webpage, we take the native Linux results as the reference and normalize the other three scenarios to it. We observe a general pattern: dom0 usually performs better than domU with SMP, which in turn performs better than domU without SMP (where better performance means lower latency and higher throughput). However, we also notice a very large gap between domU (without SMP) and native Linux (or dom0, since dom0 generally performs very similarly to native Linux). Some distinct examples: 8-node SendRecv latency (max domU/Linux ratio ~18), 8-node Allgather latency (max domU/Linux ratio ~17), and 8-node Alltoall latency (max domU/Linux ratio >60). The gap in the last example is huge, and we cannot think of a reasonable explanation for why a 512B message size behaves so differently from the other sizes. We would appreciate any insight you can provide into such a large performance problem in these benchmarks.
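To make the normalization concrete, here is a minimal sketch of how the scores above are computed. The latency numbers below are hypothetical placeholders, not the actual measurements (those are on the webpage); the scenario names are labels chosen for illustration.

```python
# Hypothetical per-scenario latencies (microseconds) for one message size.
# Real values are on the webpage linked above; these are placeholders.
latency_us = {
    "native":     50.0,
    "dom0":       55.0,
    "domU_smp":  120.0,
    "domU_nosmp": 900.0,
}

# Normalize each scenario to native Linux: a score of 1.0 means
# "same latency as native"; higher means proportionally slower.
normalized = {name: t / latency_us["native"] for name, t in latency_us.items()}

for name, score in sorted(normalized.items(), key=lambda kv: kv[1]):
    print(f"{name:11s} {score:5.1f}x native latency")
```

With these placeholder numbers, domU without SMP comes out at 18.0x native, matching the scale of the SendRecv gap reported above.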

BTW, all the benchmarking is based on the original, unmodified Xen code. That is, we did not modify net_rx_action in netback to kick the frontend after every packet, as Ian suggested in the following post: http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00180.html

Please let me know if you have any questions about the configuration of the benchmarking experiments. I look forward to your insights.


