xen-users

Re: [Xen-users] Massive iowait with Xen 3.2

To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Subject: Re: [Xen-users] Massive iowait with Xen 3.2
From: Antoine Benkemoun <antoine.benkemoun@xxxxxxxxx>
Date: Tue, 28 Apr 2009 13:51:18 +0200
Cc: Xen List <xen-users@xxxxxxxxxxxxxxxxxxx>
Hello,

Thank you for your answer.

The dom0 is a pretty minimal install of Debian with nothing other than Xen, SSH and fail2ban running. It easily fits in 256MB of RAM; on average it uses about 100MB, and it barely ever touches more than 1MB of swap when idle. RHEL must be heavier, which wouldn't be too surprising.
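
For reference, on a setup like this the dom0 memory is typically pinned at boot with the dom0_mem hypervisor option; the GRUB legacy menu.lst entry looks roughly as follows (kernel and initrd file names are illustrative, not copied from the machine):

# /boot/grub/menu.lst (excerpt)
title    Xen 3.2 / Debian GNU/Linux
root     (hd0,0)
kernel   /boot/xen-3.2-1-i386.gz dom0_mem=256M
module   /boot/vmlinuz-2.6.26-1-xen-686 root=/dev/sda1 ro console=tty0
module   /boot/initrd.img-2.6.26-1-xen-686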

I have reproduced the problem by generating I/O with 3 torrents and 1 file copy.
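
(For a rough equivalent without the torrents, a sustained sequential write plus a plain file copy inside the domU produces a comparable I/O pattern; the paths and sizes below are arbitrary:)

dd if=/dev/zero of=/tmp/iotest bs=1M count=2048 conv=fdatasync   # sustained sequential write, synced at the end
cp /tmp/iotest /tmp/iotest.copy                                  # plain file copy on top of it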

xm top shows:

      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR SSID
    commun ------       2959    3.5     262144   12.7     262144      12.7     1    1  1828352  1567395    3    16267  3411443  1848929   32
  Domain-0 -----r       2617    2.7     262240   12.7   no limit       n/a     1    0        0        0    0        0        0        0   32

Load average for "commun", the domain under load, is above 5. Memory is only about 50% used and only 440k of swap is in use. Along with it come the usual symptoms: really slow web page loads and really sluggish response from just about everything in the domain. Responsiveness in the other domains is fairly normal.
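
If per-device numbers from dom0 would help, I can capture them while the load is running. Assuming the sysstat package is installed, something like:

iostat -x 5    # extended per-device statistics; watch %util and await on the domU's backing device
vmstat 5       # the "wa" column shows the share of CPU time spent waiting on I/O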

Meminfo for dom0:

0rgy:/home/antoine# cat /proc/meminfo
MemTotal:       262340 kB
MemFree:         14284 kB
Buffers:         52060 kB
Cached:          89976 kB
SwapCached:         12 kB
Active:         142984 kB
Inactive:        43772 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       262340 kB
LowFree:         14284 kB
SwapTotal:      522104 kB
SwapFree:       521976 kB
Dirty:             208 kB
Writeback:           0 kB
AnonPages:       44708 kB
Mapped:           7472 kB
Slab:            45184 kB
SReclaimable:    36780 kB
SUnreclaim:       8404 kB
PageTables:          0 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
WritebackTmp:        0 kB
CommitLimit:    653272 kB
Committed_AS:   203720 kB
VmallocTotal:   589816 kB
VmallocUsed:      3860 kB
VmallocChunk:   585852 kB

Meminfo for the loaded domain:

antoine@commun:~$ cat /proc/meminfo
MemTotal:       262340 kB
MemFree:          4356 kB
Buffers:          1548 kB
Cached:         126500 kB
SwapCached:        264 kB
Active:         114924 kB
Inactive:       104188 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       262340 kB
LowFree:          4356 kB
SwapTotal:      262136 kB
SwapFree:       261696 kB
Dirty:           11064 kB
Writeback:          16 kB
AnonPages:       91016 kB
Mapped:          21360 kB
Slab:            14236 kB
SReclaimable:     4876 kB
SUnreclaim:       9360 kB
PageTables:          0 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
WritebackTmp:        0 kB
CommitLimit:    393304 kB
Committed_AS:   567828 kB
VmallocTotal:   589816 kB
VmallocUsed:      1864 kB
VmallocChunk:   587660 kB
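
Just to rule out emulated I/O, a quick check from dom0 along these lines (a PV guest has no hvm section in its configuration and no qemu-dm device model process) comes back empty for every domain:

xm list --long commun | grep -i hvm   # no hvm image section for a PV guest
ps aux | grep [q]emu-dm               # no device model processes running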

Everything is PV. Does anybody have an idea as to what could be going wrong?

Thank you in advance,

Antoine


On Tue, Apr 28, 2009 at 10:54 AM, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
On Tue, Apr 28, 2009 at 2:39 PM, Antoine Benkemoun
<antoine.benkemoun@xxxxxxxxx> wrote:
> Hello,
>
> I have gotten my new Xen server working almost perfectly, yes... almost. The
> server in question is based on Debian Lenny and Xen has been installed from
> the Debian repos. It works perfectly fine with about 8 domains (it's getting
> a little cramped on 2GB of RAM but works fine).

Seriously?
My RHEL 5.3 domU, functioning as firewall only, uses about 200MB of
RAM, and it won't boot if I give it less than 250MB RAM (roughly). I'm
wondering how you can cram all those domUs in.

>
> The last problem is that when a domain starts to generate some I/O, there is
> massive iowait.

Could it be that all domUs are swapping due to high memory use?
What does /proc/meminfo from domU and dom0 look like? What does "xm
top" output on dom0 look like?

Another possibility is that I/O on your system sucks (which might
happen with HVM domUs). What kind of domUs are you using, PV or HVM?

Regards,

Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users