Re: [Xen-devel] FastModels Xen Crash - Guest data abort: Translation fault at level 2
On Mon, 2013-03-18 at 12:26 +0000, Sander Bogaert wrote:
> Hi,
>
>
> Using the latest Xen staging ( git://xenbits.xen.org/xen.git - staging
> ) and the latest kernel for dom0
> ( git://github.com/torvalds/linux.git - master ), Xen crashes while
> starting dom0. I tried 3 approaches:
>
>
> 1. mmc filesystem
> xen,dom0-bootargs = "earlyprintk=xenboot console=ttyAMA0
> mem=2048M root=/dev/mmcblk0 rw ip=dhcp"
> crashlog attached "xen_crash_1_mmc"
> 2. nfs filesystem
> xen,dom0-bootargs = "earlyprintk=xenboot console=ttyAMA1 rw
> root=/dev/nfs nfsroot=157.193.205.141:/srv/nfsrootmin ip=dhcp";
> crashlog attached "xen_crash_2_nfs"
> 3. ramdisk - this works!
Interesting. I'll concentrate on the first one, since I know that
configuration works here.

What version of the models are you running? What is your model command
line?
Where does your DTB come from, and what hypervisor command line does it
include? (It seems your logs only start after the early hypervisor
output; please include everything if possible.)
(XEN) Guest data abort: Translation fault at level 2
(XEN) gva=ef7ff000
(XEN) gpa=00000000af7ff000
(XEN) instruction syndrome invalid
(XEN) eat=0 cm=0 s1ptw=0 dfsc=6
(XEN) dom0 IPA 0x00000000af7ff000
0xaf7ff000 is a DRAM address. I noticed that the guest just logged
"Truncating memory at 0x80000000 to fit in 32-bit physical address
space". Might this be related to how much memory the host and/or dom0
have been given?
(XEN) P2M @ 02ffbfc0 mfn:0xffdfe
(XEN) 1ST[0x2] = 0x00000000ffdfb6ff
(XEN) 2ND[0x17b] = 0x0000000000000000
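For what it's worth, those walk indices are consistent with the IPA:
with an LPAE 4K granule (1GB first-level entries, 2MB second-level
entries) the indices fall straight out of the address bits. A quick
sketch, assuming the standard 3-level layout for a 32-bit IPA:

    # LPAE 4K-granule index extraction (a sketch; assumes the usual
    # 3-level split: L1 covers 1GB, L2 covers 2MB, L3 covers 4KB)
    ipa = 0x00000000af7ff000

    l1 = (ipa >> 30) & 0x3    # first-level index  -> 0x2
    l2 = (ipa >> 21) & 0x1ff  # second-level index -> 0x17b
    l3 = (ipa >> 12) & 0x1ff  # third-level index  -> 0x1ff

    print(hex(l1), hex(l2), hex(l3))

The zero second-level entry just confirms that this IPA has no mapping
in dom0's p2m, which matches the level 2 translation fault above.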
(XEN) ----[ Xen-4.3-unstable arm32 debug=y Not tainted ]----
(XEN) CPU: 0
(XEN) PC: c014c024
Can you translate 0xc014c024 into a line of the kernel? (e.g. with addr2line).
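Something along these lines should do it (assuming a vmlinux built with
debug info; the cross toolchain prefix here is just an example and will
depend on your setup):

    $ arm-linux-gnueabihf-addr2line -f -e vmlinux c014c024

The -f flag prints the enclosing function name as well as the
file:line.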
(XEN) CPSR: 200001d3 MODE:32-bit Guest SVC
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel