This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] OOM problems

>>> On 15.11.10 at 10:40, Daniel Stodden <daniel.stodden@xxxxxxxxxx> wrote:
> On Mon, 2010-11-15 at 03:55 -0500, Jan Beulich wrote:
>> >>> On 13.11.10 at 10:13, Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx> wrote:
>> >>   > What do the guests use for storage? (e.g. "blktap2 for VHD files on
>> >> an iscsi mounted ext3 volume")
>> >> 
>> >> Simple sparse .img files on a local ext4 RAID volume, using "file:".
>> > 
>> > Ah, if you're using loop it may be that you're just filling memory with 
>> > dirty pages. Older kernels certainly did this, not sure about newer ones.
>> Shouldn't this lead to the calling process being throttled, instead of
>> the system running into OOM?
> They are throttled, but the single control I'm aware of
> is /proc/sys/vm/dirty_ratio (or dirty_bytes, nowadays), which is only
> per process, not a global limit. Could well be that's part of the
> problem -- outwitting mm with just too many writers on too many cores?
> We had a bit of trouble when switching dom0 to 2.6.32, buffered writes
> made it much easier than with e.g. 2.6.27 to drive everybody else into
> costly reclaims.
> The OOM shown here reports ~650M in dirty pages. The fact alone
> that this counts as an OOM condition doesn't sound quite right in
> itself. Qemu might just have dared to ask at the wrong point in
> time.

Indeed - dirty pages alone shouldn't result in OOM.
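For reference, the writeback knobs discussed above can be inspected from
dom0 like this (a minimal sketch; these are the standard Linux procfs
paths, nothing specific to the thread's setup):

```shell
# Writeback thresholds: percentages of reclaimable memory, with the
# *_bytes files acting as absolute overrides when non-zero.
cat /proc/sys/vm/dirty_ratio             # threshold at which writers throttle
cat /proc/sys/vm/dirty_background_ratio  # threshold for background writeback
cat /proc/sys/vm/dirty_bytes             # absolute override, 0 = unused

# How much of the page cache is currently dirty -- the counter the
# ~650M figure in the OOM report corresponds to:
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Watching the Dirty: line in /proc/meminfo while a guest does heavy
writes through "file:"/loop should show whether dirty data is in fact
piling up.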

> Just to get an idea -- how many guests did this box carry?

From what we know this requires just a single (Windows 7 or some
such) guest, provided the guest has more memory than Dom0.


Xen-devel mailing list