To: "Dirk Westfal" <dwestfal@xxxxxxxxxxxxxx>, <Xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Benchmarking virtual and native servers - procedures, standards?
From: "Kraska, Joe A \(US SSA\)" <joe.kraska@xxxxxxxxxxxxxx>
Date: Mon, 19 Feb 2007 08:48:53 -0800
Delivery-date: Mon, 19 Feb 2007 08:49:23 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <540f44650702161201l7f268c40l9e9c5a4351748ee2@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdUEhJAKmXpaCsFS52q9AE58HHX5QAM5H8A
Thread-topic: [Xen-users] Benchmarking virtual and native servers - procedures, standards?
> is there a recommended benchmark
> - a set of tests for disk and network io
> - cpu performance

http://www.cl.cam.ac.uk/research/srg/netos/xen/performance.html

As for comparing to other systems, I'd run the benchmark on your own
hardware on bare metal first, then contrast that with a virtualized
guest on the same hardware, using the virtualization technology of
your choice.
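
To make that comparison concrete, here's a rough, self-contained sketch
of the idea in Python (not the suite linked above; the iteration count
and file size are arbitrary placeholders): run the same script once on
bare metal and once inside a guest on the same box, then compare the
printed numbers.

#!/usr/bin/env python3
"""Rough micro-benchmark sketch: run once on bare metal, once in a guest
on the same hardware, and compare the printed numbers. The iteration
count and file size below are arbitrary placeholders."""

import hashlib
import os
import tempfile
import time

CPU_ROUNDS = 200000            # SHA-256 rounds for the CPU test
DISK_MB = 256                  # size of the temporary file for the disk test
BLOCK = b"\0" * (1024 * 1024)  # 1 MiB write block


def cpu_test():
    """CPU-bound work: chained SHA-256 hashing. Returns elapsed seconds."""
    start = time.time()
    digest = b"seed"
    for _ in range(CPU_ROUNDS):
        digest = hashlib.sha256(digest).digest()
    return time.time() - start


def disk_test():
    """Sequential write of DISK_MB MiB followed by fsync. Returns MiB/s."""
    fd, path = tempfile.mkstemp(prefix="xenbench-")
    try:
        start = time.time()
        with os.fdopen(fd, "wb") as f:
            for _ in range(DISK_MB):
                f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())
        return DISK_MB / (time.time() - start)
    finally:
        os.unlink(path)


if __name__ == "__main__":
    print("cpu : %.2f s for %d hash rounds" % (cpu_test(), CPU_ROUNDS))
    print("disk: %.1f MiB/s sequential write" % disk_test())

Give both runs the same number of CPUs and repeat them a few times; a
single pass on each side tells you very little.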

Joe.



