
Re: [Xen-devel] [PATCH for-4.5 v2] libxc: don't leak buffer containing the uncompressed PV kernel



On 11/21/2014 06:03 AM, Ian Campbell wrote:

So here's what happens now.
1. Starts up tiny
2. reboot: leak
3. reboot: freed (process larger, but the delta is all/mostly shared pages)
4. reboot: leak
5. reboot: freed
etc..
WTF, how very strange!
:-)


--- reboot domu ---

root@xen:~/xen-pkgs# ps aux | grep asterisk_deb80
root     22981  0.6  3.3 131652 20008 ?        SLsl 21:55   0:00 
/usr/lib/xen-4.4/bin/xl cr /etc/xen/auto/asterisk_deb80.cfg
root@xen:~/xen-pkgs# pmap -x 22981
22981:   /usr/lib/xen-4.4/bin/xl cr /etc/xen/auto/asterisk_deb80.cfg
Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000     144     144       0 r-x-- xl
0000000000623000       4       4       4 r---- xl
0000000000624000       8       8       8 rw--- xl
0000000000626000       4       4       4 rw---   [ anon ]
00000000009a6000     288     288     288 rw---   [ anon ]
00000000009ee000   35676   16772   16772 rw---   [ anon ]
This is the (temporarily) leaked mapping, right?
Yeah, that's the one that popped in after the reboot.
About 16 MB.


Tried valgrind; it doesn't look like it was able to see what was going on.
Indeed. The values for total heap usage at exit, "still reachable", etc.
also don't seem to account for the ~3M of mapping on each iteration.

I don't know how glibc's allocator works, but I suppose it isn't
impossible that it is retaining some mappings of free regions and
collecting them to free later somehow, which just happens to only
trigger every other reboot (e.g. perhaps it is based on some threshold
of free memory).
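
One glibc-specific way to poke at that hypothesis (my suggestion, not
something the patch does): have the process dump the allocator's own
statistics, which include freed-but-retained memory that valgrind's leak
summary won't count as lost. A minimal sketch (dump_allocator_state is a
made-up helper name):

#include <stdio.h>
#include <malloc.h>

/* Dump glibc allocator state to stderr: malloc_stats() prints
 * per-arena totals, malloc_info() emits the same data as XML.
 * Memory the allocator is caching for reuse shows up here even
 * though it is not "lost" from valgrind's point of view. */
static void dump_allocator_state(void)
{
    malloc_stats();
    malloc_info(0, stderr);
}

int main(void)
{
    /* Churn: many sub-threshold allocations grow the main heap;
     * after free() glibc may retain some of the space rather than
     * return it to the kernel immediately. */
    void *p[256];
    int i;

    for (i = 0; i < 256; i++)
        p[i] = malloc(64 * 1024);   /* below the 128 KiB mmap cutoff */
    for (i = 0; i < 256; i++)
        free(p[i]);
    dump_allocator_state();
    return 0;
}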

...investigates...

So, http://man7.org/linux/man-pages/man3/malloc.3.html talks about
special behaviour using mmap for allocations above MMAP_THRESHOLD (128K
by default), which we will be hitting here I think. That explains the
anon mapping.
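
To see that behaviour in isolation, here's a standalone test program (my
own sketch, nothing to do with the xl code): run it, and from another
shell watch the anonymous mapping appear in pmap while it sleeps, then
vanish after the free().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    size_t big = 32UL << 20;  /* 32 MiB, well above the 128 KiB default */
    char *p = malloc(big);

    if (!p)
        return 1;
    memset(p, 0xaa, big);     /* touch the pages so RSS reflects them */
    printf("pid %d: allocated %zu bytes, sleeping\n", (int)getpid(), big);
    sleep(30);                /* pmap -x <pid>: anon mapping present */
    free(p);                  /* glibc munmap()s mmap-backed chunks here */
    printf("freed, sleeping again\n");
    sleep(30);                /* pmap again: mapping gone */
    return 0;
}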

http://man7.org/linux/man-pages/man3/mallopt.3.html also talks about
various dynamic thresholds for growing and shrinking the heap. My guess
is that we are bouncing up and down over some threshold with every other
reboot.
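
If that's what's happening, one cheap experiment (again my sketch, not
part of the patch) is to pin the thresholds: per mallopt(3), explicitly
setting these parameters disables the dynamic adjustment, so the
every-other-reboot pattern should disappear if the theory is right.

#include <malloc.h>

int main(void)
{
    /* Any explicit setting of these parameters turns off glibc's
     * dynamic mmap/trim threshold adjustment (see mallopt(3)).
     * The values below are the documented defaults. */
    mallopt(M_MMAP_THRESHOLD, 128 * 1024);  /* static 128 KiB cutoff */
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);  /* trim the heap top eagerly */

    /* ... then exercise the reboot path and compare pmap output ... */
    return 0;
}

The same experiment can be run without recompiling by setting the
MALLOC_MMAP_THRESHOLD_ and MALLOC_TRIM_THRESHOLD_ environment variables,
which mallopt(3) also documents.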

Ian.
OK, this is way over my head. I don't have a full and precise
understanding of all of the above, but let me try to comment nevertheless.
There are two issues here.

The added mappings (as I understand it, file-backed, non-anon, shared) do
happen only on reboot, but they're not a real memory leak issue, because
they are shared with other processes: no matter how many xl processes we
have, it's only another ~2.6 MB added to the total memory usage of the
server, right?

On the other hand, we have this 16 MB anon area that pops in on one reboot
and gets freed on the next. That's the real issue, as I see it! And all of
that only starts happening after the last line of valgrind output; valgrind
only had output up to the first boot of the VM, none later.

For a fresh, non-rebooted domU, the xl process shows up in top as ~588 KB
RES and 0 SHR. How can the latter be, anyway? I don't understand.
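
For what it's worth, one way to quantify the shared-mapping cost would be
to sum the Pss lines of /proc/<pid>/smaps: PSS charges each shared page
1/N to each of the N processes mapping it, so it reflects what one more xl
process really adds, unlike top's RES column. A throwaway helper (my own
sketch, not anything from xl; pss_kb is a made-up name) could look like
this:

#include <stdio.h>

/* Sum the Pss fields of /proc/<pid>/smaps.  Shared pages are charged
 * proportionally, so this approximates the incremental cost of the
 * process to the whole server. */
static long pss_kb(const char *pid)
{
    char path[64], line[256];
    long total = 0, kb;
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%s/smaps", pid);
    f = fopen(path, "r");
    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            total += kb;
    fclose(f);
    return total;
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    printf("Pss: %ld kB\n", pss_kb(argv[1]));
    return 0;
}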


