This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] domU has better I/O performance than dom0?

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] domU has better I/O performance than dom0?
From: Johann Spies <jspies@xxxxxxxxx>
Date: Wed, 28 Nov 2007 14:27:54 +0200
Delivery-date: Wed, 28 Nov 2007 04:28:45 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <pan.2007.>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <pan.2007.>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.17 (2007-11-01)
On Wed, Nov 28, 2007 at 11:38:31AM +0000, Troels Arvin wrote:
> I've done some I/O benchmarks on an RHEL 5.1-based xen setup. The main 
> (dom0) server is an x86_64 host with a FC-connected IBM SAN. The guest 
> servers are paravirtualized.
> I used bonnie++ to stress-test and to try to analyze I/O performance. 
> Bonnie++ was run with this command, current working directory being the 
> relevant part of the file system:
> bonnie++ -n 4 -s 20g -x 5 -u nobody

I have recently run some tests on three different domU's hosted on
different but very similar physical servers, and the results just did
not make any sense.  The test was identical on each machine, but the
results were so different that I did not even try to interpret them.

Here is what I did:

On all three we used a dedicated 100G partition to run the test on.

The commandline was

/usr/sbin/bonnie -d . -s 0.130 -n 8096 -r 8096

And the results (confusing):

Version  1.03       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
mail1a(ext3)   8096  1822   5 21910   9   657   0  1846   5 14092   6   366   0
mail2a(ext3)   8096  4352   7   292   0   271   0  4242   6   178   0   153   0
mail2a(xfs)    8096   547  83  5833   2  1985   6   553  85   136   0    96   0
mail3a(ext3)   8096   501   0   166   0   131   0   512   0    74   0    49   0

Mail1a was on a quad-core CPU Dell 2950, and mail2a and mail3a on
2x dual-core CPU Dell 2950's.

Maybe the fact that the tests ran on domU's with an underlying LVM
layer has something to do with the strange results.
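One way to narrow this down might be a raw sequential-throughput check run
in both dom0 and each domU, which takes the filesystem's metadata path
(where bonnie's create/delete tests live) out of the picture and shows
whether the LVM/block layer alone accounts for the differences.  A minimal
sketch (the file name and sizes here are illustrative, not from this
thread):

```shell
#!/bin/sh
# Quick sequential I/O sanity check, runnable identically in dom0 and domU.
# Run it from a directory on the partition under test.
TESTFILE=./ddtest.bin

# Write pass: 256 MiB, with fdatasync so the timing includes flushing
# data to the block device rather than just the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1

# Optionally drop caches (needs root) so the read pass hits the disk:
#   echo 3 > /proc/sys/vm/drop_caches

# Read pass.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

If the dd numbers are comparable across the three machines while the
bonnie create/delete numbers are not, the variation is more likely in the
filesystem or journal settings than in the LVM-backed virtual block
devices.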

Johann Spies          Telefoon: 021-808 4036
Informasietegnologie, Universiteit van Stellenbosch

     "The earth is the LORD'S, and the fullness thereof; the
      world, and they that dwell therein."       Psalms 24:1

Xen-users mailing list