
Re: Linux 5.13-rc6 regression to 5.12.x: kernel OOM and panic during kernel boot in low memory Xen VM's (256MB assigned memory).

On 17/06/2021 20:02, Sander Eikelenboom wrote:
On 17/06/2021 17:37, Rasmus Villemoes wrote:
On 17/06/2021 17.01, Linus Torvalds wrote:
On Thu, Jun 17, 2021 at 2:26 AM Sander Eikelenboom <linux@xxxxxxxxxxxxxx> wrote:

I just tried to upgrade and test the linux kernel going from the 5.12 kernel
series to 5.13-rc6 on my homeserver with Xen, but ran into some trouble.

Some VM's boot fine (with more than 256MB memory assigned), but the smaller 
(memory wise) PVH ones crash during kernel boot due to OOM.
Booting VM's with 5.12(.9) kernel still works fine, also when dom0 is running 
5.13-rc6 (but it has more memory assigned, so that is not unexpected).

Adding Rasmus to the cc, because this looks kind of like the async
rootfs population thing that caused some other oom issues too.

Yes, that looks like the same issue.

Rasmus? Original report here:


I do find it odd that we'd be running out of memory so early..

Indeed. It would be nice to know if these also reproduce with
initramfs_async=0 on the command line.
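For anyone else wanting to test: the parameter can simply be appended to the
guest's kernel command line in the Xen config, e.g. (root device and other
options here are placeholders):

```
cmdline     = 'root=/dev/xvda1 ro console=hvc0 initramfs_async=0'
```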

But what is even more curious is that in the other report
it seemed to trigger with _more_ memory - though I may be misreading
what Oliver was telling me:

please be noted that we use 'vmalloc=512M' for both parent and this
since it's ok on parent but oom on this commit, we want to send this
to show the potential problem of the commit on some cases.

we also tested by changing to use 'vmalloc=128M', it will succeed.

Those tests were done in a VM with 16G memory, and then he also wrote

we also tried to follow exactly above steps to test on
some local machine (8G memory), but cannot reproduce.

Are there some special rules for what memory pools PID1 versus the
kworker threads can dip into?

Side note: I also had a ppc64 report with different symptoms (the
initramfs was corrupted), but that turned out to also reproduce with
e7cb072eb98 reverted, so that is likely unrelated. But just FTR that
thread is here:


I chose to first finish the bisection attempt; not so surprisingly, it ends up with
e7cb072eb988e46295512617c39d004f9e1c26f8 as the first bad commit.

So at least that link is confirmed.

I also checked out booting with "initramfs_async=0" and now the guest boots 
with the 5.13-rc6-ish kernel which fails without that.


CC'ed Juergen.

Juergen, do you know how the direct kernel boot works and if it could interfere
with this commit?

After reading the last part of the commit message of e7cb072eb98, namely:

    Should one of the initcalls done after rootfs_initcall time (i.e., device_
    and late_ initcalls) need something from the initramfs (say, a kernel
    module or a firmware blob), it will simply wait for the initramfs
    unpacking to be done before proceeding, which should in theory make this
    completely safe.
    But if some driver pokes around in the filesystem directly and not via one
    of the official kernel interfaces (i.e.  request_firmware*(),
    call_usermodehelper*) that theory may not hold - also, I certainly might
    have missed a spot when sprinkling wait_for_initramfs().  So there is an
    escape hatch in the form of an initramfs_async= command line parameter.
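To illustrate the "official interfaces" the commit message refers to, here is a
minimal hypothetical kernel-module sketch (not taken from the commit; the blob
name is made up) of a late initcall fetching a file via request_firmware(),
which waits for the async initramfs unpacking before doing the lookup:

```c
/* Hypothetical sketch: a device_/late_ initcall that needs a blob from
 * the initramfs should go through request_firmware(), which waits for
 * the async initramfs unpacking to finish before looking the file up.
 * Opening the path directly (e.g. with filp_open()) at this point may
 * race with the unpacking and fail, per the commit message above. */
#include <linux/firmware.h>
#include <linux/init.h>
#include <linux/module.h>

static int __init demo_late_init(void)
{
	const struct firmware *fw;
	int ret;

	/* Safe: internally waits for the initramfs to be populated. */
	ret = request_firmware(&fw, "demo/blob.bin", NULL);
	if (ret)
		return ret;

	/* ... use fw->data / fw->size ... */
	release_firmware(fw);
	return 0;
}
late_initcall(demo_late_init);

MODULE_LICENSE("GPL");
```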

It dawned on me that I'm using the "direct kernel boot" functionality, which lets you
boot a guest where the kernel and initramfs get copied in from dom0. That works great,
but perhaps it pokes around in the filesystem as the last part of the commit message
warns about?

(I think the feature is called "direct kernel boot"; what I mean is using, for
example:
    kernel      = '/boot/vmlinuz-5.13.0-rc6-20210617-doflr-mac80211debug+'
    ramdisk     = '/boot/initrd.img-5.13.0-rc6-20210617-doflr-mac80211debug+'
    cmdline     = 'root=UUID=2f757320-caca-4215-868d-73a4aacf12aa ro nomodeset 
xen_blkfront.max_ring_page_order=1 console=hvc0 earlyprintk=xen 

options in the xen guest config file to boot the (in this case PVH) guest.
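For completeness, a minimal sketch of such a PVH guest config (name and paths
are made up; the guest is then started with "xl create <cfg>"):

```
type        = 'pvh'
name        = 'smallvm'
memory      = 256
kernel      = '/boot/vmlinuz-5.13.0-rc6'
ramdisk     = '/boot/initrd.img-5.13.0-rc6'
cmdline     = 'root=/dev/xvda1 ro console=hvc0'
```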



