To: Keir Fraser <keir@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Xen Benchmarking guidelines
From: "Nick L. Petroni Jr." <npetroni@xxxxxxxxxx>
Date: Wed, 15 Aug 2007 11:20:16 -0400 (EDT)
Cc: Michael Hicks <mwh@xxxxxxxxxx>
In-reply-to: <C2E74840.1429C%keir@xxxxxxxxxxxxx>
References: <C2E74840.1429C%keir@xxxxxxxxxxxxx>

Hi,

Keir Fraser wrote:

> It should be possible, and in fact difficult not to, to get almost-native
> scores on SPECint benchmarks from within an HVM guest. There's no I/O or
> system activity at all -- it's just measuring raw CPU speed.
>
> The most likely culprits are scheduling problems or timekeeping problems in
> the HVM guest.
>
> To discount scheduling issues, it's probably worth pinning your HVM VCPU to
> a single physical CPU (and setting the affinity of dom0 so that it *doesn't*
> run on that physical CPU) and seeing if that helps.

I've run some additional tests, this time with the following settings:
xm vcpu-pin 0 0 0
xm vcpu-pin 1 0 1
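(For reference, xm vcpu-pin takes <domain> <vcpu> <cpu-list>, so the first
line pins dom0's VCPU 0 to physical CPU 0 and the second pins the guest's
(domain 1's) VCPU 0 to physical CPU 1, keeping the two off the same CPU.)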

The overall numbers were better. That is, the best and worst times were, in general, faster than before. However, the variance is still high. Here are my results for my Red Hat 7.3 HVM guest:

http://www.cs.umd.edu/~npetroni/xen_cpu_results/CINT2006.001.html

(NOTE: I forgot to update the configuration description before running, so it says this is Xen 3.0.2-2. It's actually 3.1.0)

I only ran four of the workloads (hence the "Invalid Run" wallpaper) and experienced the same trend as before -- after a few workloads, the numbers get worse. To be clear, the benchmark runs the workloads in column order, not row order. So the test goes: gcc, hmmer, sjeng, libquantum, gcc, hmmer, sjeng, ...

I thought this could be a guest scheduler issue of some sort, so I re-ran with a vanilla Fedora Core 6 (SELinux etc. disabled) HVM domain. Here are those results:

http://www.cs.umd.edu/~npetroni/xen_cpu_results/CINT2006.003.html

The trend, and some of the numbers, are nearly identical. After some time, the system just appears to degrade.

I'm a little stumped at this point, but I'm out of time to keep tracking down the issue, for this week anyway. In the meantime, I'm going to try running each workload separately with reboots in between so I can at least get an idea of peak performance for each.
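(Concretely, I have in mind something along these lines for each workload --
the config name here is a placeholder, and --noreportable because a
single-workload run isn't reportable anyway:

    runspec --config=myconf --noreportable --iterations=3 403.gcc

then reboot the guest and repeat for 456.hmmer, 458.sjeng, and 462.libquantum.)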

Take care and thanks,
nick
