To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [xen-4.0.1-rc5-pre] [pvops 2.6.32.16] Complete freeze within 2 days, no info in serial log
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Fri, 06 Aug 2010 13:44:30 -0700
Cc: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Fri, 06 Aug 2010 13:45:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100806151743.GB4324@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <698099271.20100803173057@xxxxxxxxxxxxxx> <20100803154541.GA16122@xxxxxxxxxxxxxxxxxxx> <4C583AFE.7080001@xxxxxxxx> <1048476317.20100805114844@xxxxxxxxxxxxxx> <20100805145214.GC5697@xxxxxxxxxxxxxxxxxxx> <425016991.20100806112111@xxxxxxxxxxxxxx> <20100806151743.GB4324@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.7) Gecko/20100720 Fedora/3.1.1-1.fc13 Lightning/1.0b2pre Thunderbird/3.1.1
 On 08/06/2010 08:17 AM, Konrad Rzeszutek Wilk wrote:
On Fri, Aug 06, 2010 at 11:21:11AM +0200, Sander Eikelenboom wrote:
Hi Konrad,

Hmm, it seems that the 2.6.33 tree does work for one VM with a video grabber,
but doesn't for the VM which seems to cause the freeze.
It does spit out some stack traces after a while of not functioning, but since
it is OOM it will be something else caused by the fallout and not anywhere near
the root cause.
Although this at least didn't freeze the complete system :-)
I will try some more configurations to see if I can find a pattern somehow ...

--
Sander

[ 1269.032133] submit of urb 0 failed (error=-90)
[ 1274.153341] motion: page allocation failure. order:6, mode:0xd4
That is a 256kB request for memory (order 6 = 2^6 pages x 4kB = 256kB).
[ 1274.153375] Pid: 1884, comm: motion Not tainted 2.6.33 #5
[ 1274.153391] Call Trace:
[ 1274.153416]  [<ffffffff810e4665>] __alloc_pages_nodemask+0x5b2/0x62b
[ 1274.153440]  [<ffffffff810338b9>] ? xen_force_evtchn_callback+0xd/0xf
[ 1274.153461]  [<ffffffff810e46f5>] __get_free_pages+0x17/0x5f
[ 1274.153483]  [<ffffffff8128042e>] xen_swiotlb_alloc_coherent+0x3c/0xe2
[ 1274.153507]  [<ffffffff81410931>] hcd_buffer_alloc+0xfa/0x11f
[ 1274.153527]  [<ffffffff81403e0c>] usb_buffer_alloc+0x17/0x1d
[ 1274.153562]  [<ffffffffa003f39e>] em28xx_init_isoc+0x16a/0x32b [em28xx]
[ 1274.153585]  [<ffffffff815ec0b9>] ? __down_read+0x47/0xed
[ 1274.153613]  [<ffffffffa003a4ac>] buffer_prepare+0xd7/0x10d [em28xx]
[ 1274.153639]  [<ffffffffa0016dac>] videobuf_qbuf+0x308/0x3f4 [videobuf_core]
[ 1274.153667]  [<ffffffffa0039cb3>] vidioc_qbuf+0x35/0x3a [em28xx]
[ 1274.153697]  [<ffffffffa0028229>] __video_do_ioctl+0x11ab/0x373b [videodev]
[ 1274.153720]  [<ffffffff814b51cd>] ? sock_def_readable+0x54/0x5f
[ 1274.153743]  [<ffffffff81541f65>] ? unix_dgram_sendmsg+0x3f1/0x43e
[ 1274.153764]  [<ffffffff810313b5>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
[ 1274.153793]  [<ffffffffa0039c7e>] ? vidioc_qbuf+0x0/0x3a [em28xx]
[ 1274.153814]  [<ffffffff814b208b>] ? sock_sendmsg+0xa3/0xbc
[ 1274.153837]  [<ffffffff8123349b>] ? avc_has_perm+0x4e/0x60
[ 1274.153855]  [<ffffffff810338b9>] ? xen_force_evtchn_callback+0xd/0xf
[ 1274.153880]  [<ffffffffa002aab1>] video_ioctl2+0x2f8/0x3af [videodev]
[ 1274.153901]  [<ffffffff810357df>] ? __switch_to+0x265/0x277
[ 1274.153924]  [<ffffffffa0026122>] v4l2_ioctl+0x38/0x3a [videodev]
[ 1274.153944]  [<ffffffff8111ec90>] vfs_ioctl+0x72/0x9e
[ 1274.153961]  [<ffffffff8111f1d7>] do_vfs_ioctl+0x4a0/0x4e1
[ 1274.153980]  [<ffffffff8111f26d>] sys_ioctl+0x55/0x77
[ 1274.154000]  [<ffffffff81112e6a>] ? sys_write+0x60/0x70
[ 1274.154009]  [<ffffffff81036cc2>] system_call_fastpath+0x16/0x1b
[ 1274.154126] Mem-Info:
[ 1274.154138] DMA per-cpu:
[ 1274.154151] CPU    0: hi:    0, btch:   1 usd:   0
[ 1274.154165] CPU    1: hi:    0, btch:   1 usd:   0
[ 1274.154180] DMA32 per-cpu:
[ 1274.154202] CPU    0: hi:  186, btch:  31 usd:   0
[ 1274.154220] CPU    1: hi:  186, btch:  31 usd:  78
[ 1274.154241] active_anon:248 inactive_anon:326 isolated_anon:0
[ 1274.154244]  active_file:132 inactive_file:105 isolated_file:41
[ 1274.154247]  unevictable:0 dirty:0 writeback:19 unstable:0
[ 1274.154250]  free:1309 slab_reclaimable:642 slab_unreclaimable:3111
[ 1274.154254]  mapped:100846 shmem:4 pagetables:1187 bounce:0
[ 1274.154313] DMA free:2036kB min:80kB low:100kB high:120kB active_anon:0kB 
inactive_anon:24kB active_file:20kB inactive_file:0kB unevictable:0kB 
isolated(anon):0kB isolated(file):0kB present:14752kB mlocked:0kB dirty:0kB 
writeback:0kB mapped:12804kB shmem:0kB slab_reclaimable:16kB 
slab_unreclaimable:40kB kernel_stack:0kB pagetables:24kB unstable:0kB 
bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[ 1274.154375] lowmem_reserve[]: 0 489 489 489
[ 1274.154415] DMA32 free:3200kB min:2788kB low:3484kB high:4180kB 
active_anon:992kB inactive_anon:1280kB active_file:508kB inactive_file:420kB 
unevictable:0kB isolated(anon):0kB isolated(file):164kB present:500960kB 
mlocked:0kB dirty:0kB writeback:76kB mapped:390580kB shmem:16kB 
slab_reclaimable:2552kB slab_unreclaimable:12404kB kernel_stack:592kB 
pagetables:4724kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:160 
all_unreclaimable? no
[ 1274.154481] lowmem_reserve[]: 0 0 0 0
[ 1274.154508] DMA: 7*4kB 1*8kB 1*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 
1*1024kB 0*2048kB 0*4096kB = 2036kB
[ 1274.154571] DMA32: 409*4kB 33*8kB 2*16kB 0*32kB 0*64kB 0*128kB 1*256kB 
0*512kB 1*1024kB 0*2048kB 0*4096kB = 3212kB
[ 1274.154634] 429 total pagecache pages
[ 1274.154646] 161 pages in swap cache
[ 1274.154658] Swap cache stats: add 344422, delete 344260, find 99167/143153
[ 1274.154673] Free swap  = 476756kB
[ 1274.154684] Total swap = 524280kB
[ 1274.160880] 131072 pages RAM
[ 1274.160902] 21934 pages reserved
[ 1274.160914] 101195 pages shared
[ 1274.160925] 6309 pages non-shared
[ 1274.160963] unable to allocate 185088 bytes for transfer buffer 4
Though here it says it is 185 kbytes. Hmm.. You have about 3MB free in DMA32
and 2MB in DMA, so that should be enough.

I am not that familiar with the VM, so the instinctive thing I can think
of is to raise the amount of memory your guest has from 512MB to
768MB. Does /proc/meminfo show an exceedingly small amount of MemFree
when this happens?
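
For quick checking, here is a minimal userspace sketch (an illustration, not
code posted in this thread) that prints the MemFree line from /proc/meminfo;
running it when the allocation failures start shows whether the guest is
simply out of free memory:

/* meminfo_check.c: print MemFree from /proc/meminfo (illustrative only) */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];

    if (!f) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* MemFree is the field Konrad suggests watching */
        if (strncmp(line, "MemFree:", 8) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}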

Memory allocations are rounded up to the next order, so 185k -> 256k. It's also a contiguous allocation, so it needs to find 64 contiguous pages, which is pretty much impossible in a system which has been running for a while.
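
To make the rounding concrete, here is a small standalone sketch (my
illustration, mirroring the effect of the kernel's get_order() with 4kB pages,
not the actual kernel code):

/* order_demo.c: show how a byte count rounds up to a buddy-allocator order */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static int get_order(unsigned long size)
{
    int order = 0;
    unsigned long pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;

    /* buddy allocator only hands out power-of-two runs of pages */
    while ((1UL << order) < pages)
        order++;
    return order;
}

int main(void)
{
    unsigned long size = 185088;   /* the failing transfer buffer above */
    int order = get_order(size);

    printf("%lu bytes -> order %d = %lu contiguous pages = %lu kB\n",
           size, order, 1UL << order, (1UL << order) * PAGE_SIZE / 1024);
    return 0;
}

With size = 185088 this prints order 6, i.e. 64 contiguous pages (256kB),
which matches the order:6 failure in the log above.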

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
