[Xen-users] 100% iowait in domU with no IO tasks.

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] 100% iowait in domU with no IO tasks.
From: "Bogdan B. Rudas" <brudas@xxxxxxxxxxx>
Date: Thu, 8 May 2008 12:02:30 +0300
Delivery-date: Thu, 08 May 2008 02:01:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: Iponweb
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

Hi.

Tonight I logged into one of our domUs and saw the following problem:

# iostat -k 5

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00  100.00    0.00    0.00

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
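
Tasks stuck in uninterruptible "D" state are what drive both %iowait and the load average, so a quick way to see whether anything in the domU is really waiting on IO is something like this (just a sketch, not a capture from that session):

# ps -eo pid,stat,wchan:30,cmd | awk '$2 ~ /D/'
# vmstat 5 3          <- the "b" column counts processes blocked on IO
# cat /proc/loadavg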

I checked the other domUs and dom0 on the same server - they showed high iowait, from 20% 
to 50%, with a few transactions per second and about 0.1-1.5 Mb/sec of storage 
bandwidth usage per domU. This domU has no disk IO at all but shows a load average 
of 15, possibly higher.
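
From dom0 the same thing can be cross-checked per domain; roughly (command names from Xen 3.1 / sysstat, written from memory, not exact output):

# xm top              <- or xentop; the VBD_RD / VBD_WR columns show per-domain block IO
# iostat -kx 5        <- %util and await on the PERC volume as seen by dom0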

We have 6 domUs on an 8-core server with 7 FUJITSU MBB2147RC SAS HDDs on a DELL PERC 
5/i RAID controller.

CentOS 5
Xen 3.1.0
Kernel 2.6.21

We have been suffering from this problem for a while on many of our Xen hosts; it is 
not a one-time issue.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users