
Re: [Xen-devel] Dom0 kernel panic when porting xen to new arm soc





On 25/06/2015 16:09, Peng Fan wrote:
Hi Julien,

Hi,

On 6/23/2015 9:56 PM, Peng Fan wrote:
Hi,

On 6/23/2015 9:36 PM, Julien Grall wrote:
Hi,

On 23/06/15 14:03, Peng Fan wrote:
I did not enable LPAE for the DOM0 kernel; it uses short-descriptor page tables.
The following is the full log from U-Boot to the kernel, with DOM0 given 512M:

Which CONFIG_VMSPLIT_* do you use? Can you try another one? I remember it
has some effect on the offset between physical and virtual addresses.
  CONFIG_VMSPLIT_2G=y

CONFIG_PAGE_OFFSET=0x80000000

OK, I will try the 3G:1G split and reply later with the log (I do not have it at hand).
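For reference, a sketch of what the 3G:1G split should look like in the kernel config (these are the standard values for that option, not verified on this SoC):

  # 3G:1G split: kernel direct map starts at 0xC0000000
  CONFIG_VMSPLIT_3G=y
  CONFIG_PAGE_OFFSET=0xC0000000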
It still panics. Maybe something is wrong with the gnttab configuration on my
side; I use the default gnttab address/size.

Did you check that the gnttab doesn't overlap a device/RAM region of your hardware?

I posted a patch a week ago to automatically find a region in DOM0 memory for the grant table [1]. It will save you from having to go through the datasheet.
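If it helps, one way to sanity-check from a running Dom0 (assuming your kernel exposes the device tree under /proc/device-tree): the grant table region Xen advertises to Dom0 is the reg property of the /hypervisor node, so you can dump it and compare it against your SoC memory map:

  # Grant table base/size Xen passed to Dom0 (per the hypervisor DT node)
  hexdump -C /proc/device-tree/hypervisor/reg
  # Cross-check against the regions Dom0 actually sees
  cat /proc/iomem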

Currently I am hitting a DomU boot issue: if I do not use the blk backend,
DomU can boot with a ramfs as rootfs; if I use an image file as rootfs, DomU
cannot boot.
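For reference, a minimal guest config sketch for the image-file case (the names, paths, and sizes here are illustrative placeholders, not the exact setup):

  # domU.cfg - minimal sketch, adjust kernel and paths for your board
  name   = "domu-test"
  kernel = "/root/zImage"
  memory = 256
  vcpus  = 1
  # file-backed disk, served by blkback in Dom0
  disk   = [ 'file:/root/rootfs.img,xvda,w' ]
  extra  = "console=hvc0 root=/dev/xvda rw"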

I am not sure why this happens:

libxl: error: libxl_create.c:1195:domcreate_launch_dm: unable to add disk devices
libxl: error: libxl_device.c:799:libxl__initiate_device_remove: unable to get my domid
Using gdb, I found that domcreate_launch_dm fails to get the domid and then
reports that it is unable to add disk devices. I am not familiar with
xenstore. Did I miss some configuration?

The "unable to get my domid" looks like an issue with xenstore. Is xenstored running?

Also, did you build your DOM0 kernel with CONFIG_XEN_BLKDEV_BACKEND?
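Something like this should confirm it (assuming CONFIG_IKCONFIG_PROC is enabled in the Dom0 kernel; otherwise grep the .config used for the build):

  # From within Dom0, if /proc/config.gz is available
  zcat /proc/config.gz | grep XEN_BLKDEV_BACKEND
  # Or against the build tree (path is a placeholder)
  grep XEN_BLKDEV_BACKEND /path/to/dom0-kernel/.config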

Regards,

[1] http://lists.xen.org/archives/html/xen-devel/2015-06/msg02831.html

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

