
Re: [Xen-devel] New MPI benchmark performance results (update)



Steven,

Thanks for the response.

Please let me know if you have any questions about the configuration
of the benchmarking experiments. I am looking forward to your
insightful explanations.


Erm, what version of Xen are you using for these? I notice that the dom0 kernel seems to be using 2.4.28, which is not current in any of the trees. Since you're using SMP guests, I'm guessing this is some old version of xen-unstable?

The Xen version is 2.0 for all the experiments. I am not sure whether the SMP I mentioned in my email is the same thing as the "SMP guests" you mention. To clarify: "domU with SMP" means Xen is booted with SMP support (no "nosmp" option), with dom0 pinned to the first CPU and domU pinned to the second CPU; "domU with no SMP" means Xen is booted without SMP support (with the "nosmp" option), so dom0 and domU share the same single CPU.
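For reference, below is a rough sketch of the kind of domU configuration file I use for the pinned case. It is only illustrative: the file name, kernel path, memory size and disk layout are placeholders rather than the exact values from my cluster. Xen 2.0 domain config files are plain Python read by xm, and the "cpu" line is what pins the domain to a physical CPU; in the "nosmp" case Xen only sees one CPU, so that line has no effect and dom0 and domU share CPU 0.

  # /etc/xen/mpi-node1 -- illustrative Xen 2.0-style domain config (Python syntax)
  kernel = "/boot/vmlinuz-2.6-xenU"   # domU kernel image (placeholder path)
  name   = "mpi-node1"
  memory = 360                        # MB for domU; dom0 keeps the rest of the 512MB
  cpu    = 1                          # run this domain on the second physical CPU (dom0 stays on CPU 0)
  disk   = ["phy:sda2,sda2,w"]        # placeholder disk layout
  root   = "/dev/sda2 ro"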

Your results are kinda interesting but I think you'd probably be
better off trying to compare like with like so that we can isolate
the performance issues due to Xen/XenLinux, i.e.

I agree with your suggestion.

- use the same kernel (or ported kernel) in each case;

I will use a 2.6 kernel for both dom0 and domU. For native Linux, the cluster currently runs a 2.4 kernel, and I will have to convince the cluster administrator to upgrade it to 2.6 so that the comparison is fair, as you point out.

- use the same amount of memory in each case.

It is hard to use exactly the same amount of memory, especially for domU, because dom0 occupies part of the 512MB of physical memory. That said, we think memory is unlikely to be a key factor in the performance: the maximum message size is 4MB, we only test up to an 8-node cluster (8 processes), and memory is never overallocated.
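To make that concrete, here is the back-of-the-envelope arithmetic behind the claim. This is a quick sketch of my own, not code from the benchmark, and the assumption that each MPI rank keeps one send and one receive buffer of the maximum message size is mine:

  # buffer_check.py -- rough per-node memory footprint of the benchmark
  max_msg_mb     = 4      # maximum message size used in the runs (MB)
  ranks_per_node = 1      # 8-node cluster, 8 processes -> one MPI process per node
  buffers        = 2      # assumed: one send and one receive buffer per rank

  footprint_mb = max_msg_mb * ranks_per_node * buffers
  domU_mem_mb  = 360      # assumed domU size; one of the figures quoted in this thread

  print("MPI buffer footprint per node: %d MB" % footprint_mb)
  print("headroom left in domU:         %d MB" % (domU_mem_mb - footprint_mb))

Even allowing generously for the MPI library and the guest kernel, the message buffers are small compared with the domain's memory, which is why we do not expect the memory size to dominate the results.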

Otherwise you end up comparing 2.4 to 2.6, or 128MB/360MB/512MB, ...

Also you should probably use the current unstable tree since there
have been a number of performance fixes.

I will grab the current unstable tree and rerun the experiments with the configuration improvements above. I will send an updated set of results when I finish.

Thanks again for the help.

Xuehai

cheers,

S.




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

