
Re: [Xen-devel] Linux 5.0 regression: BUG: unable to handle kernel paging request at ffff888023e26778

On Sat, Feb 9, 2019 at 12:24 AM Sander Eikelenboom <linux@xxxxxxxxxxxxxx> wrote:
> I haven't got a reproducer, so it might be hard to hit it again.
> The system is AMD, and this is from the host kernel running under
> the Xen hypervisor, in case it matters.

I think this is a Xen bug.

In particular, there are a few poison values in there that look like
Xen. Like this:

   R10: deadbeefdeadf00d

looks like a special poison value that is from Xen itself.
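
For context: a poison value like that is a distinctive, never-valid bit
pattern written into registers or memory that should not be consumed, so
that any accidental use blows up in an immediately recognizable way. A
minimal userspace sketch of the general idea (the names are made up; this
is not Xen's actual code):

#include <stdint.h>

/* The pattern seen in R10 above; the macro name here is invented. */
#define XEN_STYLE_POISON 0xdeadbeefdeadf00dULL

/*
 * Illustrative only: fill scratch state with the poison so that any
 * stray use of it as a pointer faults on a non-canonical address
 * instead of silently reading real data.
 */
static void poison_scratch(uint64_t *scratch, int n)
{
        for (int i = 0; i < n; i++)
                scratch[i] = XEN_STYLE_POISON;
}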

It looks like the oops is around the TLB flushing code; looking at the
code, it's the

        if (force_flush)
                flush_tlb_range(vma, old_end - len, old_end);
        if (new_ptl != old_ptl)

sequence in move_page_tables. The oopsing code sequence is

  28:* 48 89 45 00          mov    %rax,0x0(%rbp) <-- trapping instruction
  2c: 41 f6 46 52 40        testb  $0x40,0x52(%r14)

and that "testb $0x40" instruction that comes after the trapping
instruction is the

                           ((vma)->vm_flags & VM_HUGETLB)               \

from the flush_tlb_range() macro:

#define flush_tlb_range(vma, start, end)                                \
        flush_tlb_mm_range((vma)->vm_mm, start, end,                    \
                           ((vma)->vm_flags & VM_HUGETLB)               \
                                ? huge_page_shift(hstate_vma(vma))      \
                                : PAGE_SHIFT, false)

if I read that oops correctly.
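
One way to sanity-check that reading: VM_HUGETLB is 0x00400000, and if
%r14 holds the vma with vm_flags at offset 0x50 in this config (an
assumption on my part, I haven't checked the reporter's vmlinux), then
the byte at 0x52(%r14) covers bits 16..23 of vm_flags, and the 0x40 test
picks out exactly bit 22. A quick userspace check of that arithmetic:

#include <stdio.h>

#define VM_HUGETLB 0x00400000UL         /* value from include/linux/mm.h */

int main(void)
{
        /* testb $0x40,0x52(%r14): bit 6 of the byte two bytes into vm_flags */
        unsigned long tested_bit = 0x40UL << (8 * 2);

        printf("tested bit = %#lx, VM_HUGETLB = %#lx, match = %d\n",
               tested_bit, VM_HUGETLB, tested_bit == VM_HUGETLB);
        return 0;
}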

I have no idea what that store to 0(%rbp) is for, though - I can't
line that up with anything I see with my own kernel config.

We *do* have changes to 5.0 in the move_page_tables() code (mremap on
a pmd level), so I'm cc'ing some of the people involved there, but
that odd poison value does make me wonder about Xen issues. When I
google for that value, all I see are Xen reports (and your report for
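
For context on that 5.0 change: the idea is that when both the source and
destination are pmd-aligned, mremap() can move a whole page-table page at
once instead of copying 512 individual PTEs. A rough sketch of the idea,
purely illustrative and not the kernel's actual move_normal_pmd():

#include <linux/mm.h>
#include <linux/spinlock.h>

static bool move_whole_pmd_sketch(struct vm_area_struct *vma,
                                  unsigned long old_addr, unsigned long new_addr,
                                  pmd_t *old_pmd, pmd_t *new_pmd)
{
        spinlock_t *old_ptl, *new_ptl;
        pmd_t pmd;

        if (!pmd_none(*new_pmd))        /* destination slot must be empty */
                return false;

        old_ptl = pmd_lock(vma->vm_mm, old_pmd);
        new_ptl = pmd_lockptr(vma->vm_mm, new_pmd);
        if (new_ptl != old_ptl)
                spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

        pmd = *old_pmd;                 /* grab the page-table pointer ...    */
        pmd_clear(old_pmd);             /* ... unhook it from the old slot ... */
        set_pmd_at(vma->vm_mm, new_addr, new_pmd, pmd);  /* ... and re-hook it */

        flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);

        if (new_ptl != old_ptl)
                spin_unlock(new_ptl);
        spin_unlock(old_ptl);
        return true;
}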

