To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] dom0 - oom-killer - memory leak somewhere ?
From: Adrien Urban <adrien.urban@xxxxxxxxxxxxxx>
Date: Sun, 13 Nov 2011 13:27:27 +0100
Delivery-date: Sun, 13 Nov 2011 04:29:02 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <CAG1y0scnGvjK1oSzLEDhx8t4w9Mcbo3XcJczFh_YnzKPeP1-UA@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4EBBC9E3.7000909@xxxxxxxxxxxxxx> <4EBF8DEB.1050804@xxxxxxxxxxxxxx> <CAG1y0scnGvjK1oSzLEDhx8t4w9Mcbo3XcJczFh_YnzKPeP1-UA@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) Gecko/20110626 Iceowl/1.0b2 Icedove/3.1.11
On 11/13/11 12:19, Fajar A. Nugraha wrote:
> On Sun, Nov 13, 2011 at 4:29 PM, Adrien Urban
> <adrien.urban@xxxxxxxxxxxxxx> wrote:
>>
>> Hello,
>>
>> I work in a hosting company, we have tens of Xen dom0 running just fine,
>> but unfortunately we do have a few that get out of control.
>>
>> Reported behaviour:
>> - dom0 uses more and more memory
>> - no process can be found using that memory
>
> Does the dom0 also serve as some kind of file server (e.g. nfs, web)
> with lots of files (e.g. several hundred GBs)?
>
> If yes, you might need to set /proc/sys/vm/vfs_cache_pressure to 1000
> (or any value over 100). More details:
> http://rackerhacker.com/2008/12/03/reducing-inode-and-dentry-caches-to-keep-oom-killer-at-bay/

Hello,

Thanks for pointing it out; unfortunately, it doesn't seem to be the case.

The dom0s don't have anything running but Xen, LVM, and some monitoring
services (munin and nrpe).

I tried what is mentioned on that page, but didn't see any real drop in
memory usage.
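
Concretely, that boils down to something like this (the 1000 value is the
one suggested above; writing 3 to drop_caches only throws away clean
caches, so it should be safe as a one-off test):

# bias reclaim towards dentry/inode caches (default is 100)
echo 1000 > /proc/sys/vm/vfs_cache_pressure

# flush dirty pages, then drop page cache + dentries/inodes once
sync
echo 3 > /proc/sys/vm/drop_caches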


I checked the slabs and compared them between two hosts, one working and
the other not. I couldn't find anything off the charts that would explain
our memory leak.


Checking /proc/slabinfo, the header states:

# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>


For each line I took pagesperslab * active_slabs, then summed over all
lines. That should give the total number of pages used by slabs.
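
Something like this one-liner (assuming the slabinfo 2.1 layout shown
above, where pagesperslab is field 6 and active_slabs is field 14; the
first two lines are the version banner and the header, hence NR > 2):

awk 'NR > 2 { pages += $6 * $14 } END { print pages }' /proc/slabinfo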



Working host:
# free
             total       used       free     shared    buffers     cached
Mem:        520028     478308      41720          0      72792     204828
-/+ buffers/cache:     200688     319340
Swap:      1044472       6380    1038092


Total pages in slabs: 8416


Non-working host:
# free
             total       used       free     shared    buffers     cached
Mem:       2092892    1501968     590924          0       2296      25200
-/+ buffers/cache:    1474472     618420
Swap:      1998840          0    1998840

Total pages in slabs: 5855


I think those are 4k pages, which would mean the slabs are using about
34 MB and 24 MB on those hosts. Nothing like the 1 GB I can't account for.
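
Spelled out, under that 4k-page assumption:

8416 pages * 4096 bytes = 34,471,936 bytes ≈ 34 MB  (working host)
5855 pages * 4096 bytes = 23,982,080 bytes ≈ 24 MB  (non-working host)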


Regards,
Adrien

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users