
Re: [Xen-devel] [PATCH][XM-TEST] New hping TCP and UDP tests



Just for the record - these new tests don't cause any failures. The
error I was seeing was from 13_create_multinic_pos.py, which has been in
the tree for quite some time.

Sorry for the confusion.

Dan


On Mon, 2006-03-06 at 15:15 -0800, Daniel Stekloff wrote:
> Hi,
> 
> The included patch adds network tests that use hping to send TCP and UDP packets.
> 
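> Roughly, each new test boots a domU and drives hping against it from
> dom0, then checks that the probes were answered. Below is a minimal
> sketch of the UDP case; the helper name, target address, and the
> exit-code check are illustrative only, not the actual patch:
> 
>     import subprocess
> 
>     def hping_udp_reachable(dst_ip, port=53, count=5):
>         # Send a few UDP probes with hping2: "-2" selects UDP mode,
>         # "-c" the packet count, "-p" the destination port.
>         # Assumption: a zero exit status is treated as "replies
>         # received"; the real test may parse the loss summary instead.
>         cmd = ["hping2", "-2", "-c", str(count), "-p", str(port), dst_ip]
>         return subprocess.call(cmd) == 0
> 
>     if __name__ == "__main__":
>         # Hypothetical domU address; the real tests obtain it from
>         # the xm-test harness.
>         print(hping_udp_reachable("192.168.1.100"))
> 
> The TCP variant is essentially the same invocation without "-2",
> since hping defaults to TCP mode.
> 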
> The new tests uncover some issues on HVM-enabled systems with qemu-dm.
> The 13_network_domU_udp_pos.py test hits an error on both 32- and
> 64-bit SMP systems. I hit a kernel BUG in 2 out of 4 runs on my 64-bit
> system. Unfortunately, I can't reproduce the bug on every run, so I
> need to add more stressful tests.
> 
> We will file a bug for the error we've encountered. 
> 
> Here's the dmesg output from the 64-bit error encountered with
> 13_network_domU_udp_pos.py:
> 
> device vif627.8 left promiscuous mode
> xenbr0: port 8(vif627.8) entering disabled state
> Eeek! page_mapcount(page) went negative! (-1)
>   page->flags = 14
>   page->count = 1
>   page->mapping = 0000000000000000
> ----------- [cut here ] --------- [please bite here ] ---------
> Kernel BUG at mm/rmap.c:555
> invalid opcode: 0000 [1]
> CPU 0
> Modules linked in: video thermal processor fan button battery ac qla2300 qla2xxx scsi_transport_fc
> Pid: 519, comm: qemu-dm Not tainted 2.6.16-rc5-xen0 #2
> RIP: e030:[<ffffffff8015d507>] <ffffffff8015d507>{page_remove_rmap+135}
> RSP: e02b:ffff880014067bc8  EFLAGS: 00010286
> RAX: 00000000ffffffff RBX: ffff8800014b5df8 RCX: 0000000000000000
> RDX: 00000000ffffff01 RSI: 000000000001926c RDI: ffffffff8052f260
> RBP: ffff880014067bd8 R08: 0000000000000000 R09: 0000000000000023
> R10: 000000000000001f R11: 000000000000001e R12: ffff88000d6ff568
> R13: 0000000000000000 R14: ffff8800014b5df8 R15: ffff88000a5acab0
> FS:  00002ae1efafbde0(0000) GS:ffffffff80654000(0000) knlGS:0000000000000000
> CS:  e033 DS: 0000 ES: 0000
> Process qemu-dm (pid: 519, threadinfo ffff880014066000, task ffff8800175f5180)
> Stack: ffff880014067bd8 00002aaaaacad000 ffff880014067cd8 ffffffff80154684
>        0000000000000000 00002aaaaaeabfff 00002aaaaaeabfff 00002aaaaaeabfff
>        00000000a91a2067 ffffffff00000000
> Call Trace: <ffffffff80154684>{unmap_vmas+1764}
>        <ffffffff8015abc2>{exit_mmap+98}
>        <ffffffff801244b3>{mmput+35}
>        <ffffffff80128e1c>{exit_mm+204}
>        <ffffffff80129637>{do_exit+519}
>        <ffffffff80130c22>{recalc_sigpending+18}
>        <ffffffff801312b9>{__dequeue_signal+441}
>        <ffffffff80129d2d>{do_group_exit+205}
>        <ffffffff801330e9>{get_signal_to_deliver+1433}
>        <ffffffff8010a22d>{do_signal+125}
>        <ffffffff801755b7>{pipe_readv+663}
>        <ffffffff8017564e>{pipe_read+30}
>        <ffffffff8010af34>{sysret_signal+39}
>        <ffffffff8010a9bf>{do_notify_resume+47}
>        <ffffffff8010b1d5>{ptregscall_common+61}
> 
> Code: 0f 0b 68 3c f6 4a 80 c2 2b 02 48 c7 c6 ff ff ff ff bf 20 00
> RIP <ffffffff8015d507>{page_remove_rmap+135} RSP <ffff880014067bc8>
>  <1>Fixing recursive fault but reboot is needed!
> Bad page state in process 'net.agent'
> page:ffff8800014b5df8 flags:0x0000000000000014 mapping:0000000000000000 mapcount:-1 count:0
> Trying to fix it up, but a reboot is needed
> Backtrace:
> 
> Call Trace: <ffffffff8014a854>{free_hot_cold_page+228}
>        <ffffffff8014a38d>{bad_page+93}
>        <ffffffff8014a80d>{free_hot_cold_page+157}
>        <ffffffff8014ac9b>{free_hot_page+11}
>        <ffffffff8014b4a9>{__free_pages+41}
>        <ffffffff8011812c>{pte_free+476}
>        <ffffffff80152c1d>{free_pgd_range+1133}
>        <ffffffff8015320b>{free_pgtables+139}
>        <ffffffff8015abde>{exit_mmap+126}
>        <ffffffff801244b3>{mmput+35}
>        <ffffffff80128e1c>{exit_mm+204}
>        <ffffffff80129637>{do_exit+519}
>        <ffffffff80169494>{vfs_read+196}
>        <ffffffff80129d2d>{do_group_exit+205}
>        <ffffffff80129d42>{sys_exit_group+18}
>        <ffffffff8010ae4d>{system_call+117}
>        <ffffffff8010add8>{system_call+0}
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel