
Re: [Xen-devel] [xen-4.0.1-rc5-pre] [pvops 2.6.32.16] Complete freeze within 2 days, no info in serial log

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [xen-4.0.1-rc5-pre] [pvops 2.6.32.16] Complete freeze within 2 days, no info in serial log
From: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
Date: Sun, 8 Aug 2010 18:57:05 +0200
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Sun, 08 Aug 2010 09:58:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100806151743.GB4324@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Eikelenboom IT services
References: <698099271.20100803173057@xxxxxxxxxxxxxx> <20100803154541.GA16122@xxxxxxxxxxxxxxxxxxx> <4C583AFE.7080001@xxxxxxxx> <1048476317.20100805114844@xxxxxxxxxxxxxx> <20100805145214.GC5697@xxxxxxxxxxxxxxxxxxx> <425016991.20100806112111@xxxxxxxxxxxxxx> <20100806151743.GB4324@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi Konrad,

This time the grabbing application hung again in the VM, and it seems 
you are right: available memory is down to 0.
It always used to work with 512MB assigned to the domain.

Most probably a bug in the xhci code, I assume?

Attached: some hopefully relevant data from /proc
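
(A minimal sketch, not part of the original mail: one way the figures behind 
"available memory is down to 0" could be captured from /proc/meminfo inside the 
guest while the hang is reproduced. The field names are the standard 
/proc/meminfo keys; the script itself is purely illustrative.)

#!/usr/bin/env python3
# Illustrative only (not from this thread): periodically dump a few
# /proc/meminfo fields so memory exhaustion can be seen building up.
import time

FIELDS = ("MemTotal", "MemFree", "Buffers", "Cached", "SwapFree")

def read_meminfo():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            if key in FIELDS:
                values[key] = int(rest.strip().split()[0])  # value is in kB
    return values

if __name__ == "__main__":
    while True:
        snap = read_meminfo()
        print(time.strftime("%H:%M:%S"),
              " ".join(f"{k}={snap[k]}kB" for k in FIELDS if k in snap))
        time.sleep(10)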


--
Sander



Aug  8 20:16:17 security kernel: [  721.555787] BUG: soft lockup - CPU#0 stuck 
for 82s! [kmemleak:374]
Aug  8 20:16:17 security kernel: [  721.555790] Modules linked in: fuse saa7115 
em28xx v4l2_common videodev v4l1_compat v4l2_compat_ioctl32 videobuf_vmalloc 
videobuf_core tveeprom evdev i2c_core pcspkr thermal_sys [last unloaded: 
scsi_wait_scan]
Aug  8 20:16:17 security kernel: [  721.555814] CPU 0 
Aug  8 20:16:17 security kernel: [  721.555816] Modules linked in: fuse saa7115 
em28xx v4l2_common videodev v4l1_compat v4l2_compat_ioctl32 videobuf_vmalloc 
videobuf_core tveeprom evdev i2c_core pcspkr thermal_sys [last unloaded: 
scsi_wait_scan]
Aug  8 20:16:17 security kernel: [  721.555838] 
Aug  8 20:16:17 security kernel: [  721.555841] Pid: 374, comm: kmemleak Not 
tainted 2.6.35-rc6+xen-2.6.35-rc6-xen-isoc-20100808-l3-mutex-dma-ed+ #7 /
Aug  8 20:16:17 security kernel: [  721.555847] RIP: e030:[<ffffffff81006318>]  
[<ffffffff81006318>] xen_restore_fl_direct+0x18/0x1b
Aug  8 20:16:17 security kernel: [  721.555858] RSP: e02b:ffff88001d8abe40  
EFLAGS: 00000246
Aug  8 20:16:17 security kernel: [  721.555861] RAX: 0000000000000000 RBX: 
0000000000000000 RCX: ffff88001f7626d0
Aug  8 20:16:17 security kernel: [  721.555865] RDX: 0000000000000000 RSI: 
0000000000000200 RDI: 0000000000000200
Aug  8 20:16:17 security kernel: [  721.555869] RBP: 0000000000000001 R08: 
fffc000000000000 R09: ffff88001d8abdb0
Aug  8 20:16:17 security kernel: [  721.555873] R10: 000000000000000c R11: 
ffffea00002fdef8 R12: 0000000000000200
Aug  8 20:16:17 security kernel: [  721.555877] R13: 0000000000000000 R14: 
ffffea00002fdf01 R15: 0000000000000001
Aug  8 20:16:17 security kernel: [  721.555886] FS:  00007fc794dfc910(0000) 
GS:ffff880002ced000(0000) knlGS:0000000000000000
Aug  8 20:16:17 security kernel: [  721.555891] CS:  e033 DS: 0000 ES: 0000 
CR0: 000000008005003b
Aug  8 20:16:17 security kernel: [  721.555894] CR2: 0000000001840078 CR3: 
000000001e40b000 CR4: 0000000000000660
Aug  8 20:16:17 security kernel: [  721.559780] DR0: 0000000000000000 DR1: 
0000000000000000 DR2: 0000000000000000
Aug  8 20:16:17 security kernel: [  721.559780] DR3: 0000000000000000 DR6: 
00000000ffff0ff0 DR7: 0000000000000400
Aug  8 20:16:17 security kernel: [  721.559780] Process kmemleak (pid: 374, 
threadinfo ffff88001d8aa000, task ffff88001fd918d0)
Aug  8 20:16:17 security kernel: [  721.559780] Stack:
Aug  8 20:16:17 security kernel: [  721.559780]  ffffffff8142d77e 
ffffffff810c638e 0000000000000000 ffff8800126e0260
Aug  8 20:16:17 security kernel: [  721.559780] <0> ffffea00002fdf00 
ffffffff810c6967 ffff8800149b02b0 ffffea00002fded0
Aug  8 20:16:17 security kernel: [  721.559780] <0> 000000000000dad6 
ffffea00002fdf08 0000000000020000 0000000000000000
Aug  8 20:16:17 security kernel: [  721.559780] Call Trace:
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff8142d77e>] ? 
_raw_read_unlock_irqrestore+0xd/0xe
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c638e>] ? 
find_and_get_object+0x4a/0x75
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c6967>] ? 
scan_block+0x4a/0xf7
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c6ce9>] ? 
kmemleak_scan+0x1a2/0x3e9
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c737c>] ? 
kmemleak_scan_thread+0x0/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c737c>] ? 
kmemleak_scan_thread+0x0/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c73d5>] ? 
kmemleak_scan_thread+0x59/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff81054051>] ? 
kthread+0x79/0x81
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810094e4>] ? 
kernel_thread_helper+0x4/0x10
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810088e3>] ? 
int_ret_from_sys_call+0x7/0x1b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff8142dadd>] ? 
retint_restore_args+0x5/0x6
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810094e0>] ? 
kernel_thread_helper+0x0/0x10
Aug  8 20:16:17 security kernel: [  721.559780] Code: 44 00 00 65 f6 04 25 21 
b0 00 00 ff 0f 94 c4 00 e4 c3 90 66 f7 c7 00 02 65 0f 94 04 25 21 b0 00 00 65 
66 83 3c 25 20 b0 00 00 01 <74> 05 e8 01 00 00 00 c3 50 51 52 56 57 41 50 41 51 
41 52 41 53 
Aug  8 20:16:17 security kernel: [  721.559780] Call Trace:
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff8142d77e>] ? 
_raw_read_unlock_irqrestore+0xd/0xe
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c638e>] ? 
find_and_get_object+0x4a/0x75
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c6967>] ? 
scan_block+0x4a/0xf7
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c6ce9>] ? 
kmemleak_scan+0x1a2/0x3e9
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c737c>] ? 
kmemleak_scan_thread+0x0/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c737c>] ? 
kmemleak_scan_thread+0x0/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810c73d5>] ? 
kmemleak_scan_thread+0x59/0x9b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff81054051>] ? 
kthread+0x79/0x81
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810094e4>] ? 
kernel_thread_helper+0x4/0x10
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810088e3>] ? 
int_ret_from_sys_call+0x7/0x1b
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff8142dadd>] ? 
retint_restore_args+0x5/0x6
Aug  8 20:16:17 security kernel: [  721.559780]  [<ffffffff810094e0>] ? 
kernel_thread_helper+0x0/0x10
Aug  8 20:16:19 security kernel: [  724.187104] kmemleak: 5 new suspected 
memory leaks (see /sys/kernel/debug/kmemleak)
Aug  8 20:16:46 security motion: [0] Thread 1 - Watchdog timeout, trying to do 
a graceful restart
Aug  8 20:17:01 security /USR/SBIN/CRON[1865]: (root) CMD (   cd / && run-parts 
--report /etc/cron.hourly)
Aug  8 20:17:46 security motion: [0] Thread 1 - Watchdog timeout, did NOT 
restart graceful,killing it!
Aug  8 20:17:46 security motion: [0] Calling vid_close() from motion_cleanup
Aug  8 20:17:46 security motion: [0] Closing video device /dev/kworld
Aug  8 20:20:17 security kernel: [  961.780121] INFO: task motion:1257 blocked 
for more than 120 seconds.
Aug  8 20:20:17 security kernel: [  961.780155] "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug  8 20:20:17 security kernel: [  961.780177] motion        D 
ffff88001ef60bc0     0  1257      1 0x00000000
Aug  8 20:20:17 security kernel: [  961.780207]  ffff88001fd155d0 
0000000000000282 ffffffff81005cc5 00000000000145c0
Aug  8 20:20:17 security kernel: [  961.780243]  ffff88001e41dfd8 
ffff88001e41dfd8 ffff88001ef60930 00000000000145c0
Aug  8 20:20:17 security kernel: [  961.780278]  00000000000145c0 
00000000000145c0 ffff88001ef60930 0000000000000000
Aug  8 20:20:17 security kernel: [  961.780313] Call Trace:
Aug  8 20:20:17 security kernel: [  961.780337]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:20:17 security kernel: [  961.780365]  [<ffffffffa002eaa6>] ? 
video_ioctl2+0x0/0x32e [videodev]
Aug  8 20:20:17 security kernel: [  961.780388]  [<ffffffff8142c544>] ? 
__mutex_lock_slowpath+0x12f/0x22c
Aug  8 20:20:17 security kernel: [  961.780409]  [<ffffffff8142c64a>] ? 
mutex_lock+0x9/0x1e
Aug  8 20:20:17 security kernel: [  961.780430]  [<ffffffffa0017e58>] ? 
videobuf_streamoff+0x13/0x35 [videobuf_core]
Aug  8 20:20:17 security kernel: [  961.780454]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:20:17 security kernel: [  961.780478]  [<ffffffffa003d573>] ? 
vidioc_streamoff+0x7e/0xb5 [em28xx]
Aug  8 20:20:17 security kernel: [  961.780500]  [<ffffffffa002c5fe>] ? 
__video_do_ioctl+0x181f/0x3cc7 [videodev]
Aug  8 20:20:17 security kernel: [  961.780523]  [<ffffffff8100631f>] ? 
xen_restore_fl_direct_end+0x0/0x1
Aug  8 20:20:17 security kernel: [  961.780544]  [<ffffffff8142d714>] ? 
_raw_spin_unlock_irqrestore+0xc/0xd
Aug  8 20:20:17 security kernel: [  961.780564]  [<ffffffff813941dd>] ? 
sock_def_readable+0x3b/0x5d
Aug  8 20:20:17 security kernel: [  961.780585]  [<ffffffff814043a6>] ? 
unix_dgram_sendmsg+0x428/0x4b2
Aug  8 20:20:17 security kernel: [  961.780606]  [<ffffffff810058fa>] ? 
xen_set_pte_at+0x196/0x1b6
Aug  8 20:20:17 security kernel: [  961.780625]  [<ffffffff810036bd>] ? 
__raw_callee_save_xen_make_pte+0x11/0x1e
Aug  8 20:20:17 security kernel: [  961.780648]  [<ffffffff8139115e>] ? 
sock_sendmsg+0xd1/0xec
Aug  8 20:20:17 security kernel: [  961.780669]  [<ffffffff810b0b00>] ? 
__do_fault+0x40f/0x44a
Aug  8 20:20:17 security kernel: [  961.780689]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:20:17 security kernel: [  961.780709]  [<ffffffff81006332>] ? 
check_events+0x12/0x20
Aug  8 20:20:17 security kernel: [  961.780730]  [<ffffffffa002ed38>] ? 
video_ioctl2+0x292/0x32e [videodev]
Aug  8 20:20:17 security kernel: [  961.780750]  [<ffffffff81002616>] ? 
xen_write_msr_safe+0x5d/0x79
Aug  8 20:20:17 security kernel: [  961.780770]  [<ffffffff81007337>] ? 
__switch_to+0x1b3/0x2a4
Aug  8 20:20:17 security kernel: [  961.780790]  [<ffffffff8100622a>] ? 
xen_sched_clock+0xf/0x8c
Aug  8 20:20:17 security kernel: [  961.780810]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:20:17 security kernel: [  961.780830]  [<ffffffff81006332>] ? 
check_events+0x12/0x20
Aug  8 20:20:17 security kernel: [  961.780850]  [<ffffffffa002a10b>] ? 
v4l2_ioctl+0x38/0x3a [videodev]
Aug  8 20:20:17 security kernel: [  961.780870]  [<ffffffff810d54be>] ? 
vfs_ioctl+0x69/0x92
Aug  8 20:20:17 security kernel: [  961.780889]  [<ffffffff810d596e>] ? 
do_vfs_ioctl+0x411/0x43c
Aug  8 20:20:17 security kernel: [  961.780909]  [<ffffffff810c96b4>] ? 
vfs_write+0x134/0x169
Aug  8 20:20:17 security kernel: [  961.780928]  [<ffffffff810d59ea>] ? 
sys_ioctl+0x51/0x70
Aug  8 20:20:17 security kernel: [  961.780947]  [<ffffffff810086c2>] ? 
system_call_fastpath+0x16/0x1b
Aug  8 20:22:17 security kernel: [ 1081.780140] INFO: task motion:1257 blocked 
for more than 120 seconds.
Aug  8 20:22:17 security kernel: [ 1081.780172] "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug  8 20:22:17 security kernel: [ 1081.780194] motion        D 
ffff88001ef60bc0     0  1257      1 0x00000000
Aug  8 20:22:17 security kernel: [ 1081.780224]  ffff88001fd155d0 
0000000000000282 ffffffff81005cc5 00000000000145c0
Aug  8 20:22:17 security kernel: [ 1081.780261]  ffff88001e41dfd8 
ffff88001e41dfd8 ffff88001ef60930 00000000000145c0
Aug  8 20:22:17 security kernel: [ 1081.780295]  00000000000145c0 
00000000000145c0 ffff88001ef60930 0000000000000000
Aug  8 20:22:17 security kernel: [ 1081.780330] Call Trace:
Aug  8 20:22:17 security kernel: [ 1081.780355]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:22:17 security kernel: [ 1081.780382]  [<ffffffffa002eaa6>] ? 
video_ioctl2+0x0/0x32e [videodev]
Aug  8 20:22:17 security kernel: [ 1081.780405]  [<ffffffff8142c544>] ? 
__mutex_lock_slowpath+0x12f/0x22c
Aug  8 20:22:17 security kernel: [ 1081.780426]  [<ffffffff8142c64a>] ? 
mutex_lock+0x9/0x1e
Aug  8 20:22:17 security kernel: [ 1081.780447]  [<ffffffffa0017e58>] ? 
videobuf_streamoff+0x13/0x35 [videobuf_core]
Aug  8 20:22:17 security kernel: [ 1081.780471]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:22:17 security kernel: [ 1081.780495]  [<ffffffffa003d573>] ? 
vidioc_streamoff+0x7e/0xb5 [em28xx]
Aug  8 20:22:17 security kernel: [ 1081.780517]  [<ffffffffa002c5fe>] ? 
__video_do_ioctl+0x181f/0x3cc7 [videodev]
Aug  8 20:22:17 security kernel: [ 1081.780540]  [<ffffffff8100631f>] ? 
xen_restore_fl_direct_end+0x0/0x1
Aug  8 20:22:17 security kernel: [ 1081.780561]  [<ffffffff8142d714>] ? 
_raw_spin_unlock_irqrestore+0xc/0xd
Aug  8 20:22:17 security kernel: [ 1081.780581]  [<ffffffff813941dd>] ? 
sock_def_readable+0x3b/0x5d
Aug  8 20:22:17 security kernel: [ 1081.780602]  [<ffffffff814043a6>] ? 
unix_dgram_sendmsg+0x428/0x4b2
Aug  8 20:22:17 security kernel: [ 1081.780622]  [<ffffffff810058fa>] ? 
xen_set_pte_at+0x196/0x1b6
Aug  8 20:22:17 security kernel: [ 1081.780642]  [<ffffffff810036bd>] ? 
__raw_callee_save_xen_make_pte+0x11/0x1e
Aug  8 20:22:17 security kernel: [ 1081.780666]  [<ffffffff8139115e>] ? 
sock_sendmsg+0xd1/0xec
Aug  8 20:22:17 security kernel: [ 1081.780686]  [<ffffffff810b0b00>] ? 
__do_fault+0x40f/0x44a
Aug  8 20:22:17 security kernel: [ 1081.780706]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:22:17 security kernel: [ 1081.780726]  [<ffffffff81006332>] ? 
check_events+0x12/0x20
Aug  8 20:22:17 security kernel: [ 1081.780747]  [<ffffffffa002ed38>] ? 
video_ioctl2+0x292/0x32e [videodev]
Aug  8 20:22:17 security kernel: [ 1081.780767]  [<ffffffff81002616>] ? 
xen_write_msr_safe+0x5d/0x79
Aug  8 20:22:17 security kernel: [ 1081.780787]  [<ffffffff81007337>] ? 
__switch_to+0x1b3/0x2a4
Aug  8 20:22:17 security kernel: [ 1081.780806]  [<ffffffff8100622a>] ? 
xen_sched_clock+0xf/0x8c
Aug  8 20:22:17 security kernel: [ 1081.780826]  [<ffffffff81005cc5>] ? 
xen_force_evtchn_callback+0x9/0xa
Aug  8 20:22:17 security kernel: [ 1081.780847]  [<ffffffff81006332>] ? 
check_events+0x12/0x20
Aug  8 20:22:17 security kernel: [ 1081.780866]  [<ffffffffa002a10b>] ? 
v4l2_ioctl+0x38/0x3a [videodev]
Aug  8 20:22:17 security kernel: [ 1081.780886]  [<ffffffff810d54be>] ? 
vfs_ioctl+0x69/0x92
Aug  8 20:22:17 security kernel: [ 1081.780905]  [<ffffffff810d596e>] ? 
do_vfs_ioctl+0x411/0x43c
Aug  8 20:22:17 security kernel: [ 1081.780925]  [<ffffffff810c96b4>] ? 
vfs_write+0x134/0x169
Aug  8 20:22:17 security kernel: [ 1081.780943]  [<ffffffff810d59ea>] ? 
sys_ioctl+0x51/0x70
Aug  8 20:22:17 security kernel: [ 1081.780961]  [<ffffffff810086c2>] ? 
system_call_fastpath+0x16/0x1b






Friday, August 6, 2010, 5:17:43 PM, you wrote:

> On Fri, Aug 06, 2010 at 11:21:11AM +0200, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> Hmm, it seems that the 2.6.33 tree does work for one VM with a 
>> videograbber, but not for the VM which seems to cause the freeze.
>> It does spit out some stack traces after a while of not functioning, but 
>> since it is OOM I expect those are fallout rather than anything 
>> anywhere near the root cause.
>> Although this at least didn't freeze the complete system :-)
>> I will try some more configurations to see if I can find a pattern somehow 
>> ...
>> 
>> --
>> Sander
>> 
>> [ 1269.032133] submit of urb 0 failed (error=-90)
>> [ 1274.153341] motion: page allocation failure. order:6, mode:0xd4

> That is a 256kB request for memory.
>> [ 1274.153375] Pid: 1884, comm: motion Not tainted 2.6.33 #5
>> [ 1274.153391] Call Trace:
>> [ 1274.153416]  [<ffffffff810e4665>] __alloc_pages_nodemask+0x5b2/0x62b
>> [ 1274.153440]  [<ffffffff810338b9>] ? xen_force_evtchn_callback+0xd/0xf
>> [ 1274.153461]  [<ffffffff810e46f5>] __get_free_pages+0x17/0x5f
>> [ 1274.153483]  [<ffffffff8128042e>] xen_swiotlb_alloc_coherent+0x3c/0xe2
>> [ 1274.153507]  [<ffffffff81410931>] hcd_buffer_alloc+0xfa/0x11f
>> [ 1274.153527]  [<ffffffff81403e0c>] usb_buffer_alloc+0x17/0x1d
>> [ 1274.153562]  [<ffffffffa003f39e>] em28xx_init_isoc+0x16a/0x32b [em28xx]
>> [ 1274.153585]  [<ffffffff815ec0b9>] ? __down_read+0x47/0xed
>> [ 1274.153613]  [<ffffffffa003a4ac>] buffer_prepare+0xd7/0x10d [em28xx]
>> [ 1274.153639]  [<ffffffffa0016dac>] videobuf_qbuf+0x308/0x3f4 
>> [videobuf_core]
>> [ 1274.153667]  [<ffffffffa0039cb3>] vidioc_qbuf+0x35/0x3a [em28xx]
>> [ 1274.153697]  [<ffffffffa0028229>] __video_do_ioctl+0x11ab/0x373b 
>> [videodev]
>> [ 1274.153720]  [<ffffffff814b51cd>] ? sock_def_readable+0x54/0x5f
>> [ 1274.153743]  [<ffffffff81541f65>] ? unix_dgram_sendmsg+0x3f1/0x43e
>> [ 1274.153764]  [<ffffffff810313b5>] ? 
>> __raw_callee_save_xen_pud_val+0x11/0x1e
>> [ 1274.153793]  [<ffffffffa0039c7e>] ? vidioc_qbuf+0x0/0x3a [em28xx]
>> [ 1274.153814]  [<ffffffff814b208b>] ? sock_sendmsg+0xa3/0xbc
>> [ 1274.153837]  [<ffffffff8123349b>] ? avc_has_perm+0x4e/0x60
>> [ 1274.153855]  [<ffffffff810338b9>] ? xen_force_evtchn_callback+0xd/0xf
>> [ 1274.153880]  [<ffffffffa002aab1>] video_ioctl2+0x2f8/0x3af [videodev]
>> [ 1274.153901]  [<ffffffff810357df>] ? __switch_to+0x265/0x277
>> [ 1274.153924]  [<ffffffffa0026122>] v4l2_ioctl+0x38/0x3a [videodev]
>> [ 1274.153944]  [<ffffffff8111ec90>] vfs_ioctl+0x72/0x9e
>> [ 1274.153961]  [<ffffffff8111f1d7>] do_vfs_ioctl+0x4a0/0x4e1
>> [ 1274.153980]  [<ffffffff8111f26d>] sys_ioctl+0x55/0x77
>> [ 1274.154000]  [<ffffffff81112e6a>] ? sys_write+0x60/0x70
>> [ 1274.154009]  [<ffffffff81036cc2>] system_call_fastpath+0x16/0x1b
>> [ 1274.154126] Mem-Info:
>> [ 1274.154138] DMA per-cpu:
>> [ 1274.154151] CPU    0: hi:    0, btch:   1 usd:   0
>> [ 1274.154165] CPU    1: hi:    0, btch:   1 usd:   0
>> [ 1274.154180] DMA32 per-cpu:
>> [ 1274.154202] CPU    0: hi:  186, btch:  31 usd:   0
>> [ 1274.154220] CPU    1: hi:  186, btch:  31 usd:  78
>> [ 1274.154241] active_anon:248 inactive_anon:326 isolated_anon:0
>> [ 1274.154244]  active_file:132 inactive_file:105 isolated_file:41
>> [ 1274.154247]  unevictable:0 dirty:0 writeback:19 unstable:0
>> [ 1274.154250]  free:1309 slab_reclaimable:642 slab_unreclaimable:3111
>> [ 1274.154254]  mapped:100846 shmem:4 pagetables:1187 bounce:0
>> [ 1274.154313] DMA free:2036kB min:80kB low:100kB high:120kB active_anon:0kB 
>> inactive_anon:24kB active_file:20kB inactive_file:0kB unevictable:0kB 
>> isolated(anon):0kB isolated(file):0kB present:14752kB mlocked:0kB dirty:0kB 
>> writeback:0kB mapped:12804kB shmem:0kB slab_reclaimable:16kB 
>> slab_unreclaimable:40kB kernel_stack:0kB pagetables:24kB unstable:0kB 
>> bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [ 1274.154375] lowmem_reserve[]: 0 489 489 489
>> [ 1274.154415] DMA32 free:3200kB min:2788kB low:3484kB high:4180kB 
>> active_anon:992kB inactive_anon:1280kB active_file:508kB inactive_file:420kB 
>> unevictable:0kB isolated(anon):0kB isolated(file):164kB present:500960kB 
>> mlocked:0kB dirty:0kB writeback:76kB mapped:390580kB shmem:16kB 
>> slab_reclaimable:2552kB slab_unreclaimable:12404kB kernel_stack:592kB 
>> pagetables:4724kB unstable:0kB bounce:0kB writeback_tmp:0kB 
>> pages_scanned:160 all_unreclaimable? no
>> [ 1274.154481] lowmem_reserve[]: 0 0 0 0
>> [ 1274.154508] DMA: 7*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 
>> 1*1024kB 0*2048kB 0*4096kB = 2036kB
>> [ 1274.154571] DMA32: 409*4kB 33*8kB 2*16kB 0*32kB 0*64kB 0*128kB 1*256kB 
>> 0*512kB 1*1024kB 0*2048kB 0*4096kB = 3212kB
>> [ 1274.154634] 429 total pagecache pages
>> [ 1274.154646] 161 pages in swap cache
>> [ 1274.154658] Swap cache stats: add 344422, delete 344260, find 99167/143153
>> [ 1274.154673] Free swap  = 476756kB
>> [ 1274.154684] Total swap = 524280kB
>> [ 1274.160880] 131072 pages RAM
>> [ 1274.160902] 21934 pages reserved
>> [ 1274.160914] 101195 pages shared
>> [ 1274.160925] 6309 pages non-shared
>> [ 1274.160963] unable to allocate 185088 bytes for transfer buffer 4

> Though here it says it is 185 kbytes. Hmm.. You got 3MB in DMA32 and 2MB
> in DMA so that should be enough.
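
(Side note, not part of the original exchange: the arithmetic linking the two 
numbers above. The page allocation failure in the log is "order:6", i.e. a 
single physically contiguous block of 2^6 pages of 4kB = 256kB, which is the 
smallest buddy-allocator block that can hold the 185088-byte transfer buffer 
the USB/em28xx code asked for. A tiny, purely illustrative calculation:)

#!/usr/bin/env python3
# Illustrative only: map a byte count to the buddy-allocator order it needs,
# matching the "order:6" / 256kB / 185088-byte figures quoted above.
PAGE_SIZE = 4096  # x86-64 base page size

def order_for(nbytes):
    """Smallest order n such that (PAGE_SIZE << n) >= nbytes."""
    order = 0
    while (PAGE_SIZE << order) < nbytes:
        order += 1
    return order

if __name__ == "__main__":
    for nbytes in (185088, 262144):
        n = order_for(nbytes)
        print("%7d bytes -> order %d (%d kB block)"
              % (nbytes, n, (PAGE_SIZE << n) // 1024))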

> I am not that familiar with the VM, so the instinctive thing I can think
> of is to raise the amount of memory your guest has from the 512MB to
> 768MB. Does '/proc/meminfo' when this happens show you an exceedingly
> small amount of MemFree?

>> [ 1287.634682] motion invoked oom-killer: gfp_mask=0x201da, order=0, 
>> oom_adj=0
>> [ 1287.634719] motion cpuset=/ mems_allowed=0
>> 
>> 
>> 
>> 
>> Thursday, August 5, 2010, 4:52:14 PM, you wrote:
>> 
>> > On Thu, Aug 05, 2010 at 11:48:44AM +0200, Sander Eikelenboom wrote:
>> >> Hi Konrad/Jeremy,
>> >> 
>> >> I have tested for the last 2 days with the VMs with passed-through 
>> >> devices shut down, and no freeze so far.
>> >> I'm now running one of the VMs with an old 2.6.33 kernel from an old 
>> >> tree from Konrad, together with some hacked-up patches for 
>> >> xhci/usb3 support.
>> >> That seems to be running fine for some time now (although not a full 2 
>> >> days yet).
>> >> 
>> >> So my other VM seems to cause the freeze.
>> >> 
>> >> - This one uses devel/merge.2.6.35-rc6.t2 as the domU kernel; I think I 
>> >> should perhaps try an older version of pci-front/xen-swiotlb.
>> >> - It has both a USB2 and a USB3 controller passed through, but the xhci 
>> >> module has changed a lot since the hacked-up patches in the kernel of 
>> >> the working domU VM.
>> >> - Most probably the drivers for the videograbbers will have changed too.
>> >> 
>> >> So I suspect:
>> >>    - the newer pci-front / xen-swiotlb
>> >>    - the xhci/usb3 driver
>> >>    - the videograbber drivers
>> >> 
>> >> Most probable would be a rogue DMA transfer that can't be caught by 
>> >> Xen / pciback, I guess, and therefore would be hard to debug?
>> 
>> > The SWIOTLB "brains" by themselves haven't changed since the
>> > uhh...2.6.33. The code internals that just got Ack-ed upstream looks quite
>> > similar to the one that Jeremy carries in xen/stable-2.6.32.x. The
>> > outside plumbing parts are the ones that changed.
>> 
>> > The fixes in the pci-front, well, most of those are "bureaucratic" in 
>> > nature - set the ownership to this, make hotplug work, etc. The big
>> > fixes were the MSI/MSI-X ones but those were big news a couple of months
>> > ago (and I think that was when 2.6.34 came out).
>> 
>> > The videograbber (v4l) stack trace you sent to me some time ago looked
>> > like a mutex was held for a very, very long time... which I wonder if
>> > that is the cmpxchg compiler bug that has hit some folks. Are you using
>> > Debian?
>> 
>> > But we can do something easy. I can rebase my 2.6.33 kernel with the
>> > latest Xen-SWIOTLB/SWIOTLB engine + Xen PCI front, and we can eliminate the
>> > SWIOTLB/PCIfront being at fault here. Let me do that if your 2.6.33
>> > VM guest has been running fine for the last two days.
>> 
>> 
>> 
>> 
>> -- 
>> Best regards,
>>  Sander                            mailto:linux@xxxxxxxxxxxxxx



-- 
Best regards,
 Sander                            mailto:linux@xxxxxxxxxxxxxx

Attachment: interrupts.txt
Description: Text document

Attachment: meminfo.txt
Description: Text document

Attachment: slabinfo.txt
Description: Text document

Attachment: vmallacinfo.txt
Description: Text document

Attachment: vmstat.txt
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel