Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.10 xenwatch: page allocation failure: order:7, mode:0x10c0d0
On 25/04/13 14:32, Sander Eikelenboom wrote:
>
> Thursday, April 25, 2013, 10:43:33 AM, you wrote:
>
>> On 25/04/13 10:35, Roger Pau Monné wrote:
>>> On 24/04/13 20:16, Sander Eikelenboom wrote:
>>>> Friday, April 19, 2013, 4:44:01 PM, you wrote:
>>>>
>>>>> Hey Jens,
>>>>
>>>>> Please in your spare time (if there is such a thing at a conference)
>>>>> pull this branch:
>>>>
>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>>>>> stable/for-jens-3.10
>>>>
>>>>> for your v3.10 branch. Sorry for being so late with this.
>>>>
>>>> <big snip>
>>>>
>>>>> Anyhow, please pull and if possible include the nice overview I typed up in the
>>>>> merge commit.
>>>>
>>>>>  Documentation/ABI/stable/sysfs-bus-xen-backend | 18 +
>>>>>  drivers/block/xen-blkback/blkback.c | 843 ++++++++++++++++---------
>>>>>  drivers/block/xen-blkback/common.h | 145 ++++-
>>>>>  drivers/block/xen-blkback/xenbus.c | 38 ++
>>>>>  drivers/block/xen-blkfront.c | 490 +++++++++++---
>>>>>  include/xen/interface/io/blkif.h | 53 ++
>>>>>  6 files changed, 1188 insertions(+), 399 deletions(-)
>>>>
>>>>> Roger Pau Monne (7):
>>>>>   xen-blkback: print stats about persistent grants
>>>>>   xen-blkback: use balloon pages for all mappings
>>>>>   xen-blkback: implement LRU mechanism for persistent grants
>>>>>   xen-blkback: move pending handles list from blkbk to pending_req
>>>>>   xen-blkback: make the queue of free requests per backend
>>>>>   xen-blkback: expand map/unmap functions
>>>>>   xen-block: implement indirect descriptors
>>>>
>>>>
>>>> Hi Konrad / Roger,
>>>>
>>>> I tried this pull on top of latest Linus latest linux-3.9 tree, but
>>>> although it seems to boot and work fine at first, i seem to get trouble
>>>> after running for about a day.
>>>> Without this pull it runs fine for several days.
>>>>
>>>> Trying to start a new guest I ended up with the splat below. In the output
>>>> of xl-dmesg i seem to see more of these than before:
>>>> (XEN) [2013-04-24 14:37:40] grant_table.c:1250:d1 Expanding dom (1) grant
>>>> table from (9) to (10) frames
>>>
>>> Hello Sander,
>>>
>>> Thanks for the report, it is expected to see more messages regarding
>>> grant table expansion with this patch, since we are using up to 1056
>>> persistent grants for each backend. Could you try lowering down the
>>> maximum number of persistent grants to see if that prevents running out
>>> of memory:
>>>
>>> # echo 384 > /sys/module/xen_blkback/parameters/max_persistent_grants
>
>> And the number of free pages keep in blkback cache:
>
> # echo 256 >> /sys/module/xen_blkback/parameters/max_buffer_pages
>
> With both set .. it still bails out after sometime when trying to start a new
> guest.

OK, will work on a patch to split memory allocation instead of doing it all in a big chunk.
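(A rough sketch of what "splitting" could look like -- illustration only, not the eventual patch, and the demo_* names are made up. The order:7 failure in xen_blkbk_probe() in the splat below is a kmalloc() that needs 128 contiguous pages (512kB) in one go; a series of small order-0 allocations is far easier to satisfy on a fragmented system:)

#include <linux/slab.h>
#include <linux/list.h>
#include <linux/errno.h>

/* Hypothetical per-request bookkeeping; stands in for whatever blkback
 * actually needs to track for each pending request. */
struct demo_pending_req {
	struct list_head free_list;
};

/* Before: one contiguous allocation of the whole array.  With a large
 * number of requests this becomes a high-order allocation (order:7 means
 * 128 contiguous pages) and fails once memory is fragmented. */
static void *demo_alloc_reqs_big_chunk(unsigned int nr_reqs)
{
	return kzalloc(nr_reqs * sizeof(struct demo_pending_req), GFP_KERNEL);
}

/* After: many small allocations collected on a free list, each of them
 * order-0 from the page allocator's point of view. */
static int demo_alloc_reqs_split(struct list_head *free_list,
				 unsigned int nr_reqs)
{
	struct demo_pending_req *req, *tmp;
	unsigned int i;

	INIT_LIST_HEAD(free_list);
	for (i = 0; i < nr_reqs; i++) {
		req = kzalloc(sizeof(*req), GFP_KERNEL);
		if (!req)
			goto fail;
		list_add_tail(&req->free_list, free_list);
	}
	return 0;

fail:
	/* Unwind whatever part of the list was already allocated. */
	list_for_each_entry_safe(req, tmp, free_list, free_list) {
		list_del(&req->free_list);
		kfree(req);
	}
	return -ENOMEM;
}

The cost is walking a list instead of indexing an array, but no single request to the page allocator is ever bigger than one page.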
>
>
> [ 9871.923198] Pid: 54, comm: xenwatch Not tainted 3.9.0-rc8-20130424-jens+ #1
> [ 9871.934278] Call Trace:
> [ 9871.945146] [<ffffffff81100c51>] warn_alloc_failed+0xf1/0x140
> [ 9871.956094] [<ffffffff811021f1>] ? __alloc_pages_direct_compact+0x211/0x230
> [ 9871.967048] [<ffffffff811028af>] __alloc_pages_nodemask+0x69f/0x960
> [ 9871.978092] [<ffffffff8113a161>] alloc_pages_current+0xb1/0x160
> [ 9871.989065] [<ffffffff81100679>] __get_free_pages+0x9/0x40
> [ 9871.999999] [<ffffffff81142af4>] __kmalloc+0x134/0x160
> [ 9872.010845] [<ffffffff815832d0>] xen_blkbk_probe+0x170/0x2f0
> [ 9872.021667] [<ffffffff81474ce7>] xenbus_dev_probe+0x77/0x130
> [ 9872.032542] [<ffffffff8156a390>] ? __driver_attach+0xa0/0xa0
> [ 9872.043453] [<ffffffff8156a151>] driver_probe_device+0x81/0x220
> [ 9872.054115] [<ffffffff8198198c>] ? klist_next+0x8c/0x110
> [ 9872.064454] [<ffffffff8156a390>] ? __driver_attach+0xa0/0xa0
> [ 9872.074610] [<ffffffff8156a3db>] __device_attach+0x4b/0x50
> [ 9872.084541] [<ffffffff815684e8>] bus_for_each_drv+0x68/0x90
> [ 9872.094282] [<ffffffff8156a0c9>] device_attach+0x89/0x90
> [ 9872.103751] [<ffffffff81569258>] bus_probe_device+0xa8/0xd0
> [ 9872.113158] [<ffffffff81567c80>] device_add+0x650/0x720
> [ 9872.122379] [<ffffffff81573103>] ? device_pm_sleep_init+0x43/0x70
> [ 9872.131304] [<ffffffff81567d69>] device_register+0x19/0x20
> [ 9872.139948] [<ffffffff8147495b>] xenbus_probe_node+0x14b/0x160
> [ 9872.148414] [<ffffffff815685b4>] ? bus_for_each_dev+0xa4/0xb0
> [ 9872.156603] [<ffffffff81474b2c>] xenbus_dev_changed+0x1bc/0x1c0
> [ 9872.164631] [<ffffffff810b67f7>] ? lock_release+0x117/0x260
> [ 9872.172551] [<ffffffff81474f66>] backend_changed+0x16/0x20
> [ 9872.180427] [<ffffffff81472f5e>] xenwatch_thread+0x4e/0x150
> [ 9872.188238] [<ffffffff8108abb0>] ? wake_up_bit+0x40/0x40
> [ 9872.196032] [<ffffffff81472f10>] ? xs_watch+0x60/0x60
> [ 9872.203841] [<ffffffff8108a546>] kthread+0xd6/0xe0
> [ 9872.211567] [<ffffffff8108a470>] ? __init_kthread_worker+0x70/0x70
> [ 9872.219075] [<ffffffff819979bc>] ret_from_fork+0x7c/0xb0
> [ 9872.226329] [<ffffffff8108a470>] ? __init_kthread_worker+0x70/0x70
> [ 9872.233416] Mem-Info:
> [ 9872.241071] Node 0 DMA per-cpu:
> [ 9872.248137] CPU 0: hi: 0, btch: 1 usd: 0
> [ 9872.255108] CPU 1: hi: 0, btch: 1 usd: 0
> [ 9872.262090] CPU 2: hi: 0, btch: 1 usd: 0
> [ 9872.269069] CPU 3: hi: 0, btch: 1 usd: 0
> [ 9872.275890] CPU 4: hi: 0, btch: 1 usd: 0
> [ 9872.282629] CPU 5: hi: 0, btch: 1 usd: 0
> [ 9872.289393] Node 0 DMA32 per-cpu:
> [ 9872.296163] CPU 0: hi: 186, btch: 31 usd: 53
> [ 9872.302701] CPU 1: hi: 186, btch: 31 usd: 72
> [ 9872.308924] CPU 2: hi: 186, btch: 31 usd: 66
> [ 9872.314937] CPU 3: hi: 186, btch: 31 usd: 30
> [ 9872.320649] CPU 4: hi: 186, btch: 31 usd: 110
> [ 9872.326032] CPU 5: hi: 186, btch: 31 usd: 163
> [ 9872.331185] active_anon:4510 inactive_anon:10674 isolated_anon:0
> [ 9872.331185]  active_file:21063 inactive_file:161965 isolated_file:0
> [ 9872.331185]  unevictable:519 dirty:127 writeback:0 unstable:0
> [ 9872.331185]  free:3448 slab_reclaimable:8061 slab_unreclaimable:10395
> [ 9872.331185]  mapped:3916 shmem:321 pagetables:1249 bounce:0
> [ 9872.331185]  free_cma:0
> [ 9872.358911] Node 0 DMA free:3836kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:4kB active_file:76kB inactive_file:10748kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1004kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 9872.374281] lowmem_reserve[]: 0 884 884 884
> [ 9872.379566] Node 0 DMA32 free:9916kB min:3772kB low:4712kB high:5656kB active_anon:17968kB inactive_anon:42692kB active_file:84192kB inactive_file:637180kB unevictable:2084kB isolated(anon):0kB isolated(file):0kB present:1032192kB managed:905896kB mlocked:2084kB dirty:524kB writeback:0kB mapped:15800kB shmem:1284kB slab_reclaimable:31240kB slab_unreclaimable:41352kB kernel_stack:2160kB pagetables:5016kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [ 9872.402980] lowmem_reserve[]: 0 0 0 0
> [ 9872.409005] Node 0 DMA: 5*4kB (M) 13*8kB (M) 104*16kB (M) 4*32kB (MR) 2*64kB (R) 0*128kB 1*256kB (R) 1*512kB (R) 1*1024kB (R) 0*2048kB 0*4096kB = 3836kB
> [ 9872.415521] Node 0 DMA32: 1665*4kB (UEMR) 206*8kB (MR) 10*16kB (UMR) 5*32kB (R) 0*64kB 4*128kB (R) 1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 9908kB
> [ 9872.428594] 183858 total pagecache pages
> [ 9872.435128] 0 pages in swap cache
> [ 9872.441781] Swap cache stats: add 7, delete 7, find 3/3
> [ 9872.448409] Free swap = 2097148kB
> [ 9872.455042] Total swap = 2097148kB
> [ 9872.465414] 262143 pages RAM
> [ 9872.471913] 28027 pages reserved
> [ 9872.478443] 295127 pages shared
> [ 9872.484909] 207989 pages non-shared
> [ 9872.491387] vbd vbd-20-768: 12 creating block interface
> [ 9872.499259] vbd vbd-20-768: 12 xenbus_dev_probe on backend/vbd/20/768
> [ 9872.506942] vbd: probe of vbd-20-768 failed with error -12
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel