
RE: [Xen-devel] OOM problems



>  > What do the guests use for storage? (e.g. "blktap2 for VHD files on
>  > an iscsi mounted ext3 volume")
> 
> Simple sparse .img files on a local ext4 RAID volume, using "file:".

Ah, if you're using the loop driver (which is what "file:" gives you), it may be
that you're just filling dom0's memory with dirty pages. Older kernels certainly
did this; I'm not sure about newer ones.
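
A quick way to check that theory (just a generic sketch, nothing Xen-specific
about it) is to keep an eye on the dirty/writeback counters in dom0 while the
guests are doing heavy I/O:

    grep -E '^(Dirty|Writeback):' /proc/meminfo

(or run it under watch). If Dirty keeps climbing towards dom0's memory
allocation before the OOM killer fires, that points at the loop driver.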

I'd be inclined to use blktap2 in raw file mode, with "aio:".
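
The exact disk syntax depends on your toolstack version, so treat this as a
sketch rather than gospel, but with a xend-style domU config the change would
look something like this (paths are made up for the example):

    # current setup: loop-backed raw image
    disk = [ 'file:/var/lib/xen/images/guest1.img,xvda,w' ]

    # blktap2 driving the same raw image via aio
    disk = [ 'tap2:aio:/var/lib/xen/images/guest1.img,xvda,w' ]

blktap2's aio backend opens the image O_DIRECT, so the guest's writes shouldn't
pile up as dirty pages in dom0's page cache the way they do with loop.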

Ian

 
>  > It might be worth looking at /proc/slabinfo to see if there's
>  > anything suspicious.
> 
> I didn't see anything suspicious in there, but I'm not sure what I'm
> looking for.
> 
> Here is the first page of slabtop as it currently stands, if that helps.
> It's a bit easier to read than the raw slabinfo output.
> 
>   Active / Total Objects (% used)    : 274753 / 507903 (54.1%)
>   Active / Total Slabs (% used)      : 27573 / 27582 (100.0%)
>   Active / Total Caches (% used)     : 85 / 160 (53.1%)
>   Active / Total Size (% used)       : 75385.52K / 107127.41K (70.4%)
>   Minimum / Average / Maximum Object : 0.02K / 0.21K / 4096.00K
> 
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>  306397 110621  36%    0.10K   8281       37     33124K buffer_head
>   37324  26606  71%    0.54K   5332        7     21328K radix_tree_node
>   25640  25517  99%    0.19K   1282       20      5128K size-192
>   23472  23155  98%    0.08K    489       48      1956K sysfs_dir_cache
>   19964  19186  96%    0.95K   4991        4     19964K ext4_inode_cache
>   17860  13026  72%    0.19K    893       20      3572K dentry
>   14896  13057  87%    0.03K    133      112       532K size-32
>    8316   6171  74%    0.17K    378       22      1512K vm_area_struct
>    8142   5053  62%    0.06K    138       59       552K size-64
>    4320   3389  78%    0.12K    144       30       576K size-128
>    3760   2226  59%    0.19K    188       20       752K filp
>    3456   1875  54%    0.02K     24      144        96K anon_vma
>    3380   3001  88%    1.00K    845        4      3380K size-1024
>    3380   3365  99%    0.76K    676        5      2704K shmem_inode_cache
>    2736   2484  90%    0.50K    342        8      1368K size-512
>    2597   2507  96%    0.07K     49       53       196K Acpi-Operand
>    2100   1095  52%    0.25K    140       15       560K skbuff_head_cache
>    1920    819  42%    0.12K     64       30       256K cred_jar
>    1361   1356  99%    4.00K   1361        1      5444K size-4096
>    1230    628  51%    0.12K     41       30       164K pid
>    1008    907  89%    0.03K      9      112        36K Acpi-Namespace
>     959    496  51%    0.57K    137        7       548K inode_cache
>     891    554  62%    0.81K     99        9       792K signal_cache
>     888    115  12%    0.10K     24       37        96K ext4_prealloc_space
>     885    122  13%    0.06K     15       59        60K fs_cache
>     850    642  75%    1.45K    170        5      1360K task_struct
>     820    769  93%    0.19K     41       20       164K bio-0
>     666    550  82%    2.06K    222        3      1776K sighand_cache
>     576    211  36%    0.50K     72        8       288K task_xstate
>     529    379  71%    0.16K     23       23        92K cfq_queue
>     518    472  91%    2.00K    259        2      1036K size-2048
>     506    375  74%    0.16K     22       23        88K cfq_io_context
>     495    353  71%    0.33K     45       11       180K blkdev_requests
>     465    422  90%    0.25K     31       15       124K size-256
>     418    123  29%    0.69K     38       11       304K files_cache
>     360    207  57%    0.69K     72        5       288K sock_inode_cache
>     360    251  69%    0.12K     12       30        48K scsi_sense_cache
>     336    115  34%    0.08K      7       48        28K blkdev_ioc
>     285    236  82%    0.25K     19       15        76K scsi_cmd_cache
> 
> 
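
For what it's worth, the usual trick with slabtop is to sort by cache size so
the heavy hitters float to the top, e.g.:

    slabtop -o -s c | head -n 15

Going by the header in your paste, the whole slab arena is only about 107MB
(75MB of it in use), and the big entries (buffer_head, radix_tree_node,
ext4_inode_cache) are what you'd expect from a lot of cached file data, so the
slab caches themselves don't look like what's eating your memory.
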
>  > BTW: 24 vCPUs in dom0 seems excessive, especially if you're using
>  > stubdoms. You may get better performance by dropping that to e.g. 2 or 3.
> 
> I will test that. Do you think it will make a difference in this case?
> 
> -John
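
Re the dom0 vCPU suggestion above: that's capped with a hypervisor boot
parameter rather than anything in the guest configs. Assuming grub legacy, it
goes on the xen.gz line, something like:

    kernel /boot/xen.gz dom0_max_vcpus=2 dom0_vcpus_pin

(dom0_max_vcpus limits how many vCPUs dom0 gets, dom0_vcpus_pin pins them to
physical CPUs; adjust the path and grub file for your distro.)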

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

