
Re: [Xen-devel] hypervisor memory usage



I have actually tracked this down to the Xen version that CentOS ships (which may also be what RHEL uses):

Version xen.gz-2.6.18-53.1.4.el5.centos.plus gives:
BIOS-provided physical RAM map:
Xen: 0000000000000000 - 00000001f00cb000 (usable)
On node 0 totalpages: 2031819
 DMA zone: 2031819 pages, LIFO batch:31

and xen.gz-2.6.18-164.el5 gives:

BIOS-provided physical RAM map:
Xen: 0000000000000000 - 00000001dc9c8000 (usable)
On node 0 totalpages: 1952200
 DMA zone: 1952200 pages, LIFO batch:31

That is a difference of 79619 pages, slightly over 300MB.
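As a quick sanity check on that figure, here is a minimal Python sketch, assuming the 4096-byte xen_pagesize reported by 'xm info' below:

    # Difference in totalpages between the two hypervisor builds
    pages = 2031819 - 1952200
    print(pages)                          # 79619
    # Convert pages to MB at 4096 bytes (4 KB) per page
    print(pages * 4096 / 2.0 ** 20)       # ~311 MB, i.e. slightly over 300MB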

Both give the same info here:
release                : 2.6.18-164.el5xen
version                : #1 SMP Thu Sep 3 04:03:03 EDT 2009
machine                : x86_64
nr_cpus                : 2
nr_nodes               : 1
sockets_per_node       : 1
cores_per_socket       : 2
threads_per_core       : 1
cpu_mhz                : 3013
hw_caps                : 178bfbff:ebd3fbff:00000000:00000010:00002001:00000000:0000001f
total_memory           : 8190
free_memory            : 2
node_to_cpu            : node0:0-1
xen_major              : 3
xen_minor              : 1
xen_extra              : .2-164.el5
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
cc_compile_by          : mockbuild
cc_compile_domain      : centos.org
cc_compile_date        : Thu Sep  3 03:20:50 EDT 2009
xend_config_format     : 2

(except for xen_extra and cc_compiler, which differ between the two builds).

Now I understand that this could be due to RHEL patches and may not apply to official Xen builds, but I'd like to know whether this issue was already known before jumping to Xen 3.4, since that won't be a direct rpm/yum upgrade path.

Kind Regards,
Vladimir


Keir Fraser wrote:
By default Xen leaves 1/16 of memory free, up to a maximum of 128MB, for
things like allocation of DMA buffers and the swiotlb, and for other domains,
during and after dom0 boot. So you can see that this memory is not actually
used, but sits in Xen's free pools, by looking at the output of 'xm info'
after dom0 has booted.
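
(In numbers, for the 8190MB machine from the 'xm info' output above -- a
minimal Python sketch of that rule, where the min() form is my paraphrase
of the stated 1/16-up-to-128MB policy:)

    total_mb = 8190
    # Keep 1/16 of memory free, clamped to a 128MB maximum
    reserved_mb = min(total_mb / 16.0, 128)
    print(reserved_mb)   # 128 -- 1/16 would be ~512MB, so the cap applies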

If you want dom0 to be given all available memory, add something like
'dom0_mem=64G' to Xen's command line. This overrides the default policy, and
a really large value like 64GB will simply be clamped down to "all memory
available".

 -- Keir

