
Re: [Xen-devel] How to dump vcpu regs when a domain is killed during xl create



On 19/08/14 11:51, manish jaggi wrote:
> On 19 August 2014 15:35, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 19/08/14 11:02, manish jaggi wrote:
>>> On 19 August 2014 14:31, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 19/08/14 06:11, manish jaggi wrote:
>>>>> Adding the question on xen-devel
>>>>>
>>>>> On 18 August 2014 14:08, manish jaggi <manishjaggi.oss@xxxxxxxxx> wrote:
>>>>>> I tried to start a domain using xl create, which showed some Xen logs
>>>>>> and then hung for a minute or so before displaying the message
>>>>>> "Killed". Below is the log:
>>>>>>
>>>>>> linux:~ # xl create domU.cfg
>>>>>> Parsing config from domU.cfg
>>>>>> (XEN)  ....
>>>>>> Killed
>>>>>>
>>>>>> Is there a way to know why it was killed, and to dump the core
>>>>>> registers (vCPU regs) at the point it was killed?
>>>> There is no guarantee the domain has successfully started.
>>>>
>>>> Put "loglvl=all guest_loglvl=all" on the Xen command line, reboot, and
>>>> attach the results of
>>>>
>>>> xl -vvvv create domU.cfg
>>>>
>>>> and xl dmesg after the domain has failed in this way.
>>>>
>>>> ~Andrew
>>> Below is the log
>>> linux:~ # xl -vvv create domU.cfg -d -p
>>> Parsing config from domU.cfg
>>> {
>>>     "domid": null,
>>>     "config": {
>>>         "c_info": {
>>>             "type": "pv",
>>>             "name": "guest",
>>>             "uuid": "e5cb14f1-085d-4511-bca5-d6e3a0c35672",
>>>             "run_hotplug_scripts": "True"
>>>         },
>>>         "b_info": {
>>>             "max_vcpus": 1,
>>>             "avail_vcpus": [
>>>                 0
>>>             ],
>>>             "max_memkb": 262144,
>>>             "target_memkb": 262144,
>>>             "shadow_memkb": 3072,
>>>             "sched_params": {
>>>
>>>             },
>>>             "claim_mode": "True",
>>>             "type.pv": {
>>>                 "kernel": "/root/Image",
>>>                 "cmdline": "console=hvc0 root=/dev/xvda ro"
>>>             }
>>>         },
>>>         "disks": [
>>>             {
>>>                 "pdev_path": "/dev/loop0",
>>>                 "vdev": "xvda",
>>>                 "format": "raw",
>>>                 "readwrite": 1
>>>             }
>>>         ],
>>>         "on_reboot": "restart"
>>>     }
>>> }
>>>
>>> libxl: verbose:
>>> libxl_create.c:134:libxl__domain_build_info_setdefault: qemu-xen is
>>> unavailable, use qemu-xen-traditional instead: No such file or
>>> directory
>>> libxl: debug: libxl_create.c:1401:do_domain_create: ao 0xd297c20:
>>> create: how=(nil) callback=(nil) poller=0xd293d90
>>> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
>>> vdev=xvda spec.backend=unknown
>>> libxl: debug: libxl_device.c:280:libxl__device_disk_set_backend: Disk
>>> vdev=xvda, using backend phy
>>> libxl: debug: libxl_create.c:851:initiate_domain_create: running bootloader
>>> libxl: debug: libxl_bootloader.c:329:libxl__bootloader_run: no
>>> bootloader configured, using user supplied kernel
>>> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch
>>> w=0xd294648: deregister unregistered
>>> domainbuilder: detail: xc_dom_allocate: cmdline="console=hvc0
>>> root=/dev/xvda ro", features="(null)"
>>> libxl: debug: libxl_dom.c:410:libxl__build_pv: pv kernel mapped 0 path
>>> /root/Image
>>> domainbuilder: detail: xc_dom_kernel_file: filename="/root/Image"
>>> domainbuilder: detail: xc_dom_malloc_filemap    : 8354 kB
>>> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.5, caps
>>> xen-3.0-aarch64 xen-3.0-armv7l
>>> domainbuilder: detail: xc_dom_rambase_init: RAM starts at 40000
>>> domainbuilder: detail: xc_dom_parse_image: called
>>> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader 
>>> ...
>>> domainbuilder: detail: loader probe failed
>>> domainbuilder: detail: xc_dom_find_loader: trying Linux zImage (ARM64)
>>> loader ...
>>> domainbuilder: detail: loader probe OK
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: called
>>> domainbuilder: detail: xc_dom_parse_zimage64_kernel: xen-3.0-aarch64:
>>> 0x40080000 -> 0x408a8b50
>>> libxl: debug: libxl_arm.c:474:libxl__arch_domain_init_hw_description:
>>> constructing DTB for Xen version 4.5 guest
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@40000000
>>> libxl: debug: libxl_arm.c:291:make_memory_nodes: Creating placeholder
>>> node /memory@200000000
>>> libxl: debug: libxl_arm.c:539:libxl__arch_domain_init_hw_description:
>>> fdt total size 1218
>>> domainbuilder: detail: xc_dom_devicetree_mem: called
>>> domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k 
>>> each
>>> domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
>>> domainbuilder: detail: xc_dom_boot_mem_init: called
>>> domainbuilder: detail: set_mode: guest xen-3.0-aarch64, address size 6
>>> domainbuilder: detail: xc_dom_malloc            : 512 kB
>>> domainbuilder: detail: populate_guest_memory: populating RAM @
>>> 0000000040000000-0000000050000000 (256MB)
>>> domainbuilder: detail: populate_one_size: populated 0x80/0x80 entries
>>> with shift 9
>>> domainbuilder: detail: arch_setup_meminit: placing boot modules at 
>>> 0x48000000
>>> domainbuilder: detail: arch_setup_meminit: devicetree: 0x48000000 -> 
>>> 0x48001000
>>> libxl: debug: libxl_arm.c:570:finalise_one_memory_node: Populating
>>> placeholder node /memory@40000000
>>> libxl: debug: libxl_arm.c:564:finalise_one_memory_node: Nopping out
>>> placeholder node /memory@200000000
>>> domainbuilder: detail: xc_dom_build_image: called
>>> domainbuilder: detail: xc_dom_alloc_segment:   kernel       :
>>> 0x40080000 -> 0x408a9000  (pfn 0x40080 + 0x829 pages)
>>> Killed
>> That is only half of the items I asked for, but this indicates that
>> something in dom0 killed the domain builder while it was constructing
>> the domain.
>>
>> Try consulting dom0's dmesg.
>>
>> ~Andrew
> It appears that xl is killed by the OOM killer. Here is the log. I hope
> it is a common problem. What is the usual suspect in these cases?

It's not just xl which suffers the OOM killer.  The usual suspect here is
dom0 not having sufficient RAM.  Try upping dom0's allocation.
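
For example, something along these lines (illustrative only; exact output
and the way the Xen command line is set depend on your version and
platform):

    # Confirm the OOM kill in dom0's kernel log
    dmesg | grep -i -E 'out of memory|killed process'

    # Check how much memory dom0 and the host currently have
    xl list
    xl info | grep -E 'total_memory|free_memory'

    # Then give dom0 more memory at boot by adding e.g. dom0_mem=2048M
    # (example value only) to the Xen command line in your bootloader
    # configuration, reboot, and retry the xl create.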

~Andrew
