
Re: [Xen-devel] [qemu-upstream-4.2-testing test] 16779: regressions - FAIL



Alex Bligh writes ("Re: [qemu-upstream-4.2-testing test] 16779: regressions - 
FAIL"):
> In the original patch (below) it called memory_region_set_dirty, which
> doesn't exist in 4.2, with the parameter 'framebuffer', which also doesn't
> exist in 4.2. This only gets called if xc_hvm_track_dirty_vram returns
> an error other than ENODATA. I had presumed the intent was to mark the
> whole framebuffer as dirty if xc_hvm_track_dirty_vram failed this way.

I have managed to get a stack trace out of this crash.  See below.

Looking at the code, we seem to be calling
cpu_physical_memory_set_dirty with an invalid address.  It does no
checking and uses the supplied address directly (shifted down to index
one byte per page) in an array lookup; a sketch of the helper is
below.
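
For reference, this is roughly what the helper in cpu-all.h looks like
in this tree (reconstructed from memory rather than copied, so treat
the details as approximate):

    static inline void cpu_physical_memory_set_dirty(ram_addr_t addr)
    {
        /* One byte of dirty flags per target page; addr is trusted to
         * be a valid ram_addr_t, there is no bounds check. */
        ram_list.phys_dirty[addr >> TARGET_PAGE_BITS] = 0xff;
    }

With addr=4026531840 (0xf0000000, as in the trace below) that indexes
far beyond the end of phys_dirty, given the ~504MB of ram registered
in ram_list.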

cpu_physical_*'s array for this, ram_list.phys_dirty, is allocated in
find_ram_offset and qemu_ram_alloc_from_ptr.
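
If I'm reading exec.c correctly, the bitmap is only ever grown to
cover the registered ram blocks; roughly (paraphrased, not an exact
quote):

    /* in qemu_ram_alloc_from_ptr(), approximately: resize the dirty
     * bitmap to cover all registered ram, then mark the new block's
     * pages dirty; nothing beyond last_ram_offset() is covered */
    ram_list.phys_dirty = g_realloc(ram_list.phys_dirty,
                                    last_ram_offset() >> TARGET_PAGE_BITS);
    memset(ram_list.phys_dirty + (new_block->offset >> TARGET_PAGE_BITS),
           0xff, size >> TARGET_PAGE_BITS);

So any ram_addr_t at or beyond last_ram_offset() runs off the end of
that allocation.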

It seems likely to me that the code in xen_sync_dirty_bitmap which
calls cpu_physical_memory_set_dirty should only be entered, at least
via this path, with a valid ram address, since it is itself called
from cpu_physical_sync_dirty_bitmap.  (The assumptions here aren't
stated, but it seems reasonable to assume that all the ram_addr_t's
supplied to cpu_physical_* are supposed to refer to valid ram.)
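
For reference, the marking loop in xen_sync_dirty_bitmap is roughly
the following (paraphrased; bitmap[] is the per-page dirty bitmap
returned by xc_hvm_track_dirty_vram for the range starting at
start_addr, and width is the number of bits in an unsigned long):

    for (i = 0; i < ARRAY_SIZE(bitmap); i++) {
        unsigned long map = bitmap[i];
        while (map != 0) {
            j = ffsl(map) - 1;
            map &= ~(1ul << j);
            /* start_addr (here the guest physical address 0xf0000000
             * handed down from cpu_physical_sync_dirty_bitmap) is
             * used as a ram_addr_t with no translation or validation */
            cpu_physical_memory_set_dirty(start_addr +
                    (i * width + j) * TARGET_PAGE_SIZE);
        }
    }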

But it seems that this is not the case.  At least, the address here is
not in any of the blocks in ram_list.

I'm afraid I have no idea whether the right fix is for
xen_sync_dirty_bitmap to somehow check whether the address is
relevant, or whether the bug is higher up in the call chain.
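
If the check in xen_sync_dirty_bitmap turns out to be the answer, I
imagine it would be something along the lines of the (entirely
hypothetical, untested) sketch below, i.e. walk ram_list.blocks and
bail out if the address isn't inside any registered block:

    /* Hypothetical guard, not a proposed patch: report whether addr
     * falls within some registered RAM block. */
    static bool addr_is_registered_ram(ram_addr_t addr)
    {
        RAMBlock *block;

        QLIST_FOREACH(block, &ram_list.blocks, next) {
            if (addr >= block->offset &&
                addr < block->offset + block->length) {
                return true;
            }
        }
        return false;
    }

But of course if the real bug is higher up the call chain (i.e.
cpu_physical_sync_dirty_bitmap handing the Xen client a range it
shouldn't), a check like that would just paper over it.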

Ian.


Program received signal SIGSEGV, Segmentation fault.
0x082483a2 in cpu_physical_memory_set_dirty (addr=4026531840)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/cpu-all.h:540
540     /u/iwj/work/1/qemu-upstream-4.2-testing/cpu-all.h: No such file or 
directory.
        in /u/iwj/work/1/qemu-upstream-4.2-testing/cpu-all.h
(gdb) bt
#0  0x082483a2 in cpu_physical_memory_set_dirty (addr=4026531840)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/cpu-all.h:540
#1  0x08249735 in xen_sync_dirty_bitmap (state=0x946e250, 
start_addr=4026531840, size=8388608)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:481
#2  0x08249953 in xen_client_sync_dirty_bitmap (client=0x946e270, 
start_addr=4026531840, end_addr=4034920448)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:527
#3  0x081b43cb in cpu_notify_sync_dirty_bitmap (start=4026531840, 
end=4034920448)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/exec.c:1751
#4  0x081b506d in cpu_physical_sync_dirty_bitmap (start_addr=4026531840, 
end_addr=4034920448)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/exec.c:2139
#5  0x081df005 in as_memory_range_del (as=0x832783c, fr=0x948da30)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:336
#6  0x081e0b84 in address_space_update_topology_pass (as=0x832783c, 
old_view=..., new_view=..., adding=false)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:711
#7  0x081e0ccd in address_space_update_topology (as=0x832783c)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:746
#8  0x081e0d5b in memory_region_update_topology () at 
/u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:761
#9  0x081e2aa5 in memory_region_del_subregion (mr=0x969ed38, 
subregion=0x96b4628)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:1304
#10 0x080e8a33 in pci_update_mappings (d=0x96a3b00) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/hw/pci.c:997
#11 0x080e8d86 in pci_default_write_config (d=0x96a3b00, addr=4, val=0, l=4)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/hw/pci.c:1050
#12 0x080ebbbe in pci_host_config_write_common (pci_dev=0x96a3b00, addr=4, 
limit=256, val=0, len=4)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/hw/pci_host.c:54
#13 0x080ebc80 in pci_data_write (s=0x969fe98, addr=2147487748, val=0, len=4)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/hw/pci_host.c:75
#14 0x080ebda7 in pci_host_data_write (opaque=0x969f018, addr=0, val=0, len=4)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/hw/pci_host.c:125
#15 0x081ded13 in memory_region_write_accessor (opaque=0x969fdf4, addr=0, 
value=0xbfc1a700, size=4, shift=0, 
    mask=4294967295) at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:265
#16 0x081dedde in access_with_adjusted_size (addr=0, value=0xbfc1a700, size=4, 
access_size_min=1, 
    access_size_max=4, access=0x81dec8f <memory_region_write_accessor>, 
opaque=0x969fdf4)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:295
#17 0x081df6ed in memory_region_iorange_write (iorange=0x969fe30, offset=0, 
width=4, data=0)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/memory.c:456
#18 0x081db000 in ioport_writel_thunk (opaque=0x969fe30, addr=3324, data=0)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/ioport.c:225
#19 0x081dabaf in ioport_write (index=2, address=3324, data=0)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/ioport.c:82
#20 0x081db319 in cpu_outl (addr=3324, val=0) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/ioport.c:288
#21 0x08249bd0 in do_outp (addr=3324, size=4, val=0) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:653
#22 0x08249d5d in cpu_ioreq_pio (req=0xb7798000) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:680
#23 0x0824a25c in handle_ioreq (req=0xb7798000) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:748
#24 0x0824a539 in cpu_handle_ioreq (opaque=0x946e250) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/xen-all.c:823
#25 0x080a10f6 in qemu_iohandler_poll (readfds=0xbfc1aa90, writefds=0xbfc1aa10, 
xfds=0xbfc1a990, ret=1)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/iohandler.c:121
#26 0x081177d7 in main_loop_wait (nonblocking=1) at 
/u/iwj/work/1/qemu-upstream-4.2-testing/main-loop.c:464
#27 0x0810e16f in main_loop () at 
/u/iwj/work/1/qemu-upstream-4.2-testing/vl.c:1481
#28 0x08112e28 in main (argc=36, argv=0xbfc1aec4, envp=0xbfc1af58)
    at /u/iwj/work/1/qemu-upstream-4.2-testing/vl.c:3485
(gdb) 

(gdb) print ram_list
$1 = {
  phys_dirty = 0x96b6ee8 
"\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377\377"...,
 blocks = {lh_first = 0x946e748}}
(gdb) print ram_list.blocks
$2 = {lh_first = 0x946e748}
...
(gdb) print *ram_list.blocks.lh_first
$4 = {host = 0x0, offset = 0, length = 528482304, flags = 0, idstr = "xen.ram", 
'\000' <repeats 248 times>, 
  next = {le_next = 0x948c228, le_prev = 0x8745768}, fd = 0}
...
(gdb) print *ram_list.blocks.lh_first->next.le_next
$5 = {host = 0xb4b62000 <Address 0xb4b62000 out of bounds>, offset = 536936448, 
length = 65536, flags = 0, 
  idstr = "0000:00:04.0/rtl8139.rom", '\000' <repeats 231 times>, next = 
{le_next = 0x96a28f8, 
    le_prev = 0x946e860}, fd = 0}
(gdb) print *ram_list.blocks.lh_first->next.le_next->next.le_next
$6 = {host = 0xb5cf9000 <Address 0xb5cf9000 out of bounds>, offset = 536870912, 
length = 65536, flags = 0, 
  idstr = "0000:00:02.0/cirrus_vga.rom", '\000' <repeats 228 times>, next = 
{le_next = 0x96b6dc0, 
    le_prev = 0x948c340}, fd = 0}
(gdb) print *ram_list.blocks.lh_first->next.le_next->next.le_next->next.le_next
$7 = {host = 0xb5e37000 <Address 0xb5e37000 out of bounds>, offset = 528482304, 
length = 8388608, flags = 0, 
  idstr = "vga.vram", '\000' <repeats 247 times>, next = {le_next = 0x0, 
le_prev = 0x96a2a10}, fd = 0}
(gdb)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel