
Re: [Xen-devel] [Spice-devel] [Qemu-devel] Qemu 2.0 regression with xen: qemu crash on any domU OS start



On 07/04/2014 15:19, Fabio Fantoni wrote:
On 07/04/2014 12:20, Christophe Fergeau wrote:
On Mon, Apr 07, 2014 at 11:59:06AM +0200, Fabio Fantoni wrote:
Today I also ran some tests with HVM and Spice, and I found another
segfault, with a different backtrace, that needs solving:
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x0000555555855d30 in interface_client_monitors_config (sin=0x5555563b0260,
    mc=0x0) at ui/spice-display.c:557
557         if (mc->num_of_monitors > 0) {
(gdb) bt full
#0  0x0000555555855d30 in interface_client_monitors_config (
    sin=0x5555563b0260, mc=0x0) at ui/spice-display.c:557
        ssd = 0x5555563b0210
        info = {xoff = 0, yoff = 0, width = 0, height = 0}
        rc = 32767
        __func__ = "interface_client_monitors_config"
#1  0x00007ffff4af5113 in ?? ()
   from /usr/lib/x86_64-linux-gnu/libspice-server.so.1
No symbol table info available.
A backtrace with spice-server debugging symbols installed would be helpful.

Christophe

Sorry, the -dbg package for spice-server is missing from the official Debian packages. I have now built and installed a -dbg package myself, and this is the new backtrace:

(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x0000555555855d30 in interface_client_monitors_config (sin=0x5555563b0260,
    mc=0x0) at ui/spice-display.c:557
557         if (mc->num_of_monitors > 0) {
(gdb) bt full
#0  0x0000555555855d30 in interface_client_monitors_config (
    sin=0x5555563b0260, mc=0x0) at ui/spice-display.c:557
        ssd = 0x5555563b0210
        info = {xoff = 0, yoff = 0, width = 0, height = 0}
        rc = 32767
        __func__ = "interface_client_monitors_config"
#1  0x00007ffff4af5113 in red_dispatcher_use_client_monitors_config ()
    at red_dispatcher.c:318
        now = 0x5555563b0300
#2  0x00007ffff4ad87f5 in agent_msg_filter_process_data (
    filter=filter@entry=0x5555562eb0c4,
    data=data@entry=0x7fffe0280128 "\001", len=328, len@entry=348)
    at agent-msg-filter.c:95
        msg_header = {protocol = <optimized out>, type = <optimized out>,
          opaque = <optimized out>, size = 328,
          data = 0x831fd4 <Address 0x831fd4 out of bounds>}
        __FUNCTION__ = "agent_msg_filter_process_data"
#3  0x00007ffff4b1af76 in reds_on_main_agent_data (mcc=0x555556326e70,
    message=0x7fffe0280128, size=348) at reds.c:1117
        dev_state = 0x5555562eb0a8
        header = <optimized out>
        res = <optimized out>
        __FUNCTION__ = "reds_on_main_agent_data"
#4 0x00007ffff4ae989a in main_channel_handle_parsed (rcc=0x555556326e70,
    size=<optimized out>, type=<optimized out>, message=0x7fffe0280128)
    at main_channel.c:911
        main_chan = 0x5555562ef2b0
        mcc = 0x555556326e70
        __FUNCTION__ = "main_channel_handle_parsed"
#5 0x00007ffff4aee470 in red_peer_handle_incoming (handler=0x55555632af80,
    stream=0x5555565adba0) at red_channel.c:287
        ret_handle = <optimized out>
        bytes_read = <optimized out>
        msg_type = 107
        parsed = <optimized out>
        parsed_free = 0x7ffff4ba8620 <nofree>
        msg_size = 348
#6  red_channel_client_receive (rcc=rcc@entry=0x555556326e70)
    at red_channel.c:309
No locals.
#7  0x00007ffff4af0d8c in red_channel_client_event (fd=<optimized out>,
    event=<optimized out>, data=0x555556326e70) at red_channel.c:1435
        rcc = 0x555556326e70
#8  0x0000555555851f82 in watch_read (opaque=0x55555666e0a0)
    at ui/spice-core.c:101
        watch = 0x55555666e0a0
#9 0x00005555557ce1f8 in qemu_iohandler_poll (pollfds=0x5555562e8e00, ret=1)
    at iohandler.c:143
        revents = 1
        pioh = 0x55555634e080
        ioh = 0x55555632fa30
#10 0x00005555557cf2a4 in main_loop_wait (nonblocking=0) at main-loop.c:485
        ret = 1
        timeout = 4294967295
        timeout_ns = 4237075
#11 0x000055555587acd8 in main_loop () at vl.c:2051
        nonblocking = false
        last_io = 1
#12 0x00005555558826b2 in main (argc=36, argv=0x7fffffffe358,
    envp=0x7fffffffe480) at vl.c:4507
        i = 64
        snapshot = 0
        linux_boot = 0
        icount_option = 0x0
        initrd_filename = 0x0
        kernel_filename = 0x0
        kernel_cmdline = 0x555555a1b5c4 ""
        boot_order = 0x5555562e7ee0 "dc"
        ds = 0x5555563d8fd0
        cyls = 0
        heads = 0
        secs = 0
        translation = 0
        hda_opts = 0x0
        opts = 0x5555562e7e30
        machine_opts = 0x5555562e84b0
        olist = 0x555555e00e00
        optind = 36
        optarg = 0x7fffffffe915 "if=ide,index=1,media=cdrom,cache=writeback,id=ide-832"
        loadvm = 0x0
        machine_class = 0x5555562e02a0
        machine = 0x555555e067e0
        cpu_model = 0x0
        vga_model = 0x0
        qtest_chrdev = 0x0
        qtest_log = 0x0
        pid_file = 0x0
        incoming = 0x0
        show_vnc_port = 0
        defconfig = true
        userconfig = true
        log_mask = 0x0
        log_file = 0x0
        mem_trace = {malloc = 0x55555587e56a <malloc_and_trace>,
          realloc = 0x55555587e5c2 <realloc_and_trace>,
          free = 0x55555587e629 <free_and_trace>, calloc = 0, try_malloc = 0,
          try_realloc = 0}
        trace_events = 0x0
        trace_file = 0x0
        __func__ = "main"
        args = {machine = 0x555555e067e0, ram_size = 2130706432,
          boot_order = 0x5555562e7ee0 "dc", kernel_filename = 0x0,
          kernel_cmdline = 0x555555a1b5c4 "", initrd_filename = 0x0,
          cpu_model = 0x0}
(gdb)
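
Reading the trace: the crash is triggered while the main channel processes a
VD agent message from the client (frames #2-#4); spice-server's
red_dispatcher_use_client_monitors_config() (frame #1) then calls back into
qemu's interface_client_monitors_config() with mc=0x0, which dereferences the
pointer without a NULL check. A rough sketch of the crash site (paraphrased
from ui/spice-display.c in qemu 2.0; the exact surrounding code may differ):

static int interface_client_monitors_config(QXLInstance *sin,
                                            VDAgentMonitorsConfig *mc)
{
    SimpleSpiceDisplay *ssd = container_of(sin, SimpleSpiceDisplay, qxl);
    QemuUIInfo info;

    memset(&info, 0, sizeof(info));
    /* line 557: first use of mc; with mc == 0x0 (see frame #0) this
       dereference is the reported SIGSEGV */
    if (mc->num_of_monitors > 0) {
        info.width  = mc->monitors[0].width;
        info.height = mc->monitors[0].height;
    }
    /* ... info is then handed on to the display core via ssd,
       dpy_set_ui_info() or similar ... */
    return 1;
}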

If you need more information or further tests, tell me and I'll post them.

Thanks in advance for any reply.

I found this patch:
http://git.qemu.org/?p=qemu.git;a=commit;h=dc491cfc14074064ed54a872b62cce6ca1330644

I tested it and the segfault no longer happens.
Thanks to Gerd Hoffmann for the fast fix.
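
For anyone hitting this before the fix reaches a release: my understanding is
that the commit simply guards against a NULL monitors config before the first
dereference, along these lines at the top of interface_client_monitors_config()
(an assumption about the patch; see the commit above for the actual change):

    /* assumed shape of the fix: bail out early on a NULL config,
       before mc->num_of_monitors is touched */
    if (mc == NULL) {
        return 1;
    }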

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
