
Re: [Xen-devel] crash tool - problem with new Xen linear virtual mapped sparse p2m list



On 23/11/15 21:18, Daniel Kiper wrote:
> Hi all,
> 
> Some time ago Linux kernel commit 054954eb051f35e74b75a566a96fe756015352c8
> (xen: switch to linear virtual mapped sparse p2m list) introduced a linear
> virtual mapped sparse p2m list. It fixed some issues; however, it also broke
> the crash tool. I tried to fix this, but the problem turned out to be more
> difficult than I expected.
> 
> Let's focus on "crash vmcore vmlinux"; the vmcore file was generated from dom0.
> "crash vmcore xen-syms" works without any issue.
> 
> At first sight the problem looks simple: just add a function which reads the
> p2m list from the vmcore and voila. I did that. Then another issue arose.
> 
> Please take a look at following backtrace:
> 
> #24426 0x000000000048b0f6 in readmem (addr=18446683600570023936, memtype=1, buffer=0x3c0a060, size=4096, type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24427 0x000000000050f746 in __xen_pvops_m2p_vma (machine=5323599872, mfn=1299707) at kernel.c:9050
> #24428 0x000000000050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at kernel.c:8867
> #24429 0x000000000050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
> #24430 0x0000000000528fca in x86_64_kvtop_xen_wpt (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0) at x86_64.c:1997
> #24431 0x0000000000528890 in x86_64_kvtop (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0) at x86_64.c:1887
> #24432 0x000000000048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7c100, verbose=0) at memory.c:2900
> #24433 0x000000000048b0f6 in readmem (addr=18446683600570023936, memtype=1, buffer=0x3c0a060, size=4096, type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24434 0x000000000050f746 in __xen_pvops_m2p_vma (machine=5323599872, mfn=1299707) at kernel.c:9050
> #24435 0x000000000050edb7 in __xen_m2p (machine=5323599872, mfn=1299707) at kernel.c:8867
> #24436 0x000000000050e948 in xen_m2p (machine=5323599872) at kernel.c:8796
> #24437 0x0000000000528fca in x86_64_kvtop_xen_wpt (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0) at x86_64.c:1997
> #24438 0x0000000000528890 in x86_64_kvtop (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0) at x86_64.c:1887
> #24439 0x000000000048d708 in kvtop (tc=0x0, kvaddr=18446683600570023936, paddr=0x7fff51c7ca60, verbose=0) at memory.c:2900
> #24440 0x000000000048b0f6 in readmem (addr=18446683600570023936, memtype=1, buffer=0x3c0a060, size=4096, type=0x900b2f "xen_p2m_addr page", error_handle=2) at memory.c:2157
> #24441 0x000000000050f746 in __xen_pvops_m2p_vma (machine=6364917760, mfn=1553935) at kernel.c:9050
> #24442 0x000000000050edb7 in __xen_m2p (machine=6364917760, mfn=1553935) at kernel.c:8867
> #24443 0x000000000050e948 in xen_m2p (machine=6364917760) at kernel.c:8796
> #24444 0x0000000000528fca in x86_64_kvtop_xen_wpt (tc=0x0, kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0) at x86_64.c:1997
> #24445 0x0000000000528890 in x86_64_kvtop (tc=0x0, kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0) at x86_64.c:1887
> #24446 0x000000000048d708 in kvtop (tc=0x0, kvaddr=18446744072099176512, paddr=0x7fff51c7d3c0, verbose=0) at memory.c:2900
> #24447 0x000000000048b0f6 in readmem (addr=18446744072099176512, memtype=1, buffer=0xfbb500, size=768, type=0x8fd772 "module struct", error_handle=6) at memory.c:2157
> #24448 0x00000000004fb0ab in module_init () at kernel.c:3355
> 
> As you can see, module_init() calls readmem(), which attempts to read a
> virtual address lying outside of the kernel text mapping (0xffffffff80000000 -
> 0xffffffffa0000000). In this case addr=18446744072099176512 == 0xffffffffa003a040,
> which is in the module mapping space. readmem() needs a physical address, so it
> calls kvtop(), and kvtop() calls x86_64_kvtop(). x86_64_kvtop() cannot calculate
> the physical address from the virtual address using simple arithmetic, as it can
> for the kernel text mapping space. Hence it calls x86_64_kvtop_xen_wpt() to
> calculate it by traversing the page tables. x86_64_kvtop_xen_wpt() needs to do
> some m2p translation, so it calls xen_m2p(), which calls __xen_m2p(), which
> finally calls __xen_pvops_m2p_vma() (my function which tries to read the linear
> virtual mapped sparse p2m list). Then __xen_pvops_m2p_vma() calls readmem(),
> which tries to read addr=18446683600570023936 == 0xffffc90000000000, the VMA
> where the p2m list is mapped. Once again the physical address must be calculated
> by traversing the page tables. However, this requires access to the p2m list,
> which leads to another readmem() call. From here on we are in a loop. After
> thousands of repetitions crash dies due to stack overflow. Not nice... :-(((
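
The call cycle described above can be sketched as follows. This is a minimal stand-in, not crash's real code: a depth counter substitutes for the real call stack to show that the cycle cannot terminate on its own.

```c
#include <assert.h>

/* Hypothetical stand-ins for readmem()/kvtop()/m2p lookup, modelling the
 * cycle: reading a VMA needs a v2p translation, the translation walks
 * page tables, the walk needs an m2p lookup, and the m2p lookup reads
 * the p2m list -- which itself lives at a VMA. */

static int depth;

static int readmem_vma(unsigned long addr);

static int m2p_lookup(unsigned long mfn)
{
    (void)mfn;
    /* The p2m list is mapped at a VMA (0xffffc90000000000 in the
     * report above), so the lookup itself is another virtual read. */
    return readmem_vma(0xffffc90000000000UL);
}

static int kvtop(unsigned long kvaddr)
{
    /* No simple arithmetic mapping outside the kernel text region:
     * must walk page tables, which needs an m2p translation. */
    return m2p_lookup(kvaddr >> 12);
}

static int readmem_vma(unsigned long addr)
{
    if (++depth > 1000)  /* stand-in for the real stack overflow */
        return -1;
    return kvtop(addr);
}

/* Returns nonzero only if the cycle ever bottoms out -- it never does. */
int cycle_terminates(void)
{
    depth = 0;
    return readmem_vma(0xffffffffa003a040UL) == 0;
}
```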
> 
> Do we have any viable fix for this issue? I considered a few, but I have not
> found a perfect one.
> 
> 1) In theory we could use the p2m tree to solve the problem, because right now
>    it is available in parallel with the VMA mapped p2m. However, this is a
>    temporary solution and it will be phased out sooner or later. We need a long
>    term solution.
> 
> 2) As far as I can see, the crash tool creates xkd->p2m_mfn_frame_list from
>    Xen's crash_xen_info_t.dom0_pfn_to_mfn_frame_list_list in the dom0 case.
>    Potentially this would solve the problem for dom0 crash dumps, but it does
>    not solve the problem for PV guests in general. We need something more
>    generic.
> 
> 3) The best thing I was able to think of is to put the list of PFNs containing
>    the p2m list somewhere in the ELF notes. Then readmem(), called from
>    __xen_pvops_m2p_vma(), could use physical addresses calculated from these
>    PFNs. However, this is also not perfect, because it requires changes in the
>    kernel, and/or xl, and crash. Additionally, this solution increases the size
>    of the ELF notes: every 1 GiB of memory adds 2 MiB of PFNs to the ELF note.
>    There is probably a chance that we could employ a compression method to
>    reduce the ELF note size but...
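
The overhead estimate in option 3 (2 MiB of PFNs per 1 GiB of memory) follows from one 8-byte PFN entry per 4 KiB page; a quick arithmetic check:

```c
#include <assert.h>

/* Back-of-the-envelope size of a hypothetical p2m-PFN ELF note:
 * one 8-byte PFN entry per 4 KiB page of guest memory. */
static unsigned long pfn_note_bytes(unsigned long mem_bytes)
{
    const unsigned long page_size = 4096; /* 4 KiB pages */
    const unsigned long pfn_size  = 8;    /* 64-bit PFN entry */
    return mem_bytes / page_size * pfn_size;
}
```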

What about:

4) Instead of relying on the kernel maintained p2m list for m2p
   conversion, use the hypervisor maintained m2p list, which should be
   available in the dump as well. This is the way the live kernel
   works, so mimic it during crash dump analysis.
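
This suggestion amounts to replacing the reverse search through the kernel's p2m list with a single flat lookup in the hypervisor-maintained m2p table, which breaks the readmem() recursion. A minimal sketch, using an in-memory stand-in for the table as it would be read out of the dump (names and values are illustrative, not crash's actual API; on real x86_64 Xen the table is machine_to_phys_mapping, mapped at a fixed hypervisor virtual address):

```c
#include <assert.h>
#include <stddef.h>

#define INVALID_M2P_ENTRY (~0UL)

/* Stand-in for the hypervisor's mfn-indexed machine-to-phys table,
 * as it might look after being read from the dump. */
static const unsigned long m2p_table[] = { 7, 3, INVALID_M2P_ENTRY, 0 };

/* One flat, array-indexed lookup: mfn -> pfn. No page-table walk and
 * no kernel p2m access is needed, so no recursion can occur. */
static unsigned long m2p(unsigned long mfn)
{
    if (mfn >= sizeof(m2p_table) / sizeof(m2p_table[0]))
        return INVALID_M2P_ENTRY;
    return m2p_table[mfn];
}
```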


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

