xen-users

Re: [Xen-users] Big I/O performance difference between dom0 and domU

To: "Marcin Owsiany" <marcin@xxxxxxxxxx>, "Xen-Users" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Big I/O performance difference between dom0 and domU
From: "Liang Yang" <multisyncfe991@xxxxxxxxxxx>
Date: Tue, 17 Apr 2007 09:34:10 -0700
Delivery-date: Tue, 17 Apr 2007 09:33:00 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <20070417152852.GA6662@kufelek>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

What was the CPU utilization while you ran this I/O measurement?
As far as I can remember, bonnie++ does not support issuing multiple
outstanding I/Os, so your target RAID volume may not have been saturated,
i.e. you may not have seen its peak performance.
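
One way to check both: watch per-domain CPU with xm top (or vmstat inside
each guest) while the benchmark runs, and cross-check the volume with a tool
that can keep several requests in flight. A rough sketch with fio (device
path and flags here are only an illustration, not your setup):

    # per-domain CPU usage while the benchmark runs
    xm top

    # async random reads with 32 requests outstanding against the test volume
    fio --name=seek --filename=/dev/vg0/test --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=32 --runtime=60

If the dom0/domU gap shrinks at higher queue depths, the bonnie++ drop is
probably more about per-request latency than raw bandwidth.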

Liang

----- Original Message -----
From: "Marcin Owsiany" <marcin@xxxxxxxxxx>
To: "Xen-Users" <xen-users@xxxxxxxxxxxxxxxxxxx>
Sent: Tuesday, April 17, 2007 8:28 AM
Subject: [Xen-users] Big I/O performance difference between dom0 and domU


Hi,

I am setting up a dual-CPU PowerEdge 2550 system with a PERC 3Di controller
(aacraid) and three 18 GB disks in RAID5, running Xen 3.0.3-0-2 (the Debian
package in etch) with the credit scheduler and PAE. This is not the best
hardware, but what worries me more is the poor I/O performance in domU
compared to dom0.

With both domains having 500 MB of RAM, testing with bonnie++ on the same
5 GB LVM volume with an xfs filesystem and a 4 GB test data size (a rough
invocation is sketched below the table), I'm getting:

-----+-------------------+--------------------+-----------------------
Test | block read [kB/s] | block write [kB/s] | random seeks [/sec]
-----+-------------------+--------------------+-----------------------
dom0 | 10308             | 64806              | 325.3
domU |  7299             | 53469              | 265.6
-----+-------------------+--------------------+-----------------------
~drop|    30%            |    17%             |  18%
-----+-------------------+--------------------+-----------------------
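
For reference, a bonnie++ run matching the sizes above looks roughly like
this (mount point and user are placeholders, not necessarily my exact flags):

    bonnie++ -d /mnt/test -s 4096 -r 500 -u nobody

where -s is the test data size in MB and -r the amount of RAM bonnie++
assumes.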

The results are basically the same whether I use the default vcpu
arrangement (two vcpus for dom0, one for domU) or set it to one vcpu per
domain, each pinned to a different physical CPU.
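
The pinned arrangement was set up roughly along these lines (xm commands;
the domU name is a placeholder for mine):

    xm vcpu-set Domain-0 1
    xm vcpu-pin Domain-0 0 0
    xm vcpu-pin testvm 0 1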

Any suggestions are welcome...


--
Marcin Owsiany <marcin@xxxxxxxxxx>              http://marcin.owsiany.pl/
GnuPG: 1024D/60F41216  FE67 DA2D 0ACA FC5E 3F75  D6F6 3A0D 8AA0 60F4 1216

"Every program in development at MIT expands until it can read mail."
                                                             -- Unknown

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users