
Re: [Xen-devel] real physical e820 table for dom0?



Konrad Rzeszutek Wilk wrote on 2014-03-07:
> On Thu, Mar 06, 2014 at 08:45:35AM +0000, Zhang, Yang Z wrote:
>> Hi all,
>> 
>> I have 8G of memory in hardware and give only 2G of it to dom0. The
>> information from /proc/meminfo shows the total memory is about 2G,
>> which should be correct. But the memory information from sysrq is
>> wrong (see below): it shows the total memory is 8G (2015427 pages),
>> which I thought should also be 2G. After a little investigation, I
>> found that dom0 currently uses the real physical e820 map, which I
>> remember used to be a pseudo-e820 map provided by Xen.
> 
> I am not sure I see the problem.
>> 
>> Here is my e820 table:
>> 
>> [    0.000000] e820: BIOS-provided physical RAM map:
>> [    0.000000] Xen: [mem 0x0000000000000000-0x000000000009cfff] usable
>> [    0.000000] Xen: [mem 0x000000000009d800-0x00000000000fffff] reserved
>> [    0.000000] Xen: [mem 0x0000000000100000-0x00000000b9775fff] usable
>> [    0.000000] Xen: [mem 0x00000000b9776000-0x00000000b977cfff] ACPI NVS
>> [    0.000000] Xen: [mem 0x00000000b977d000-0x00000000b9bc2fff] usable
>> [    0.000000] Xen: [mem 0x00000000b9bc3000-0x00000000b9fdcfff] reserved
>> [    0.000000] Xen: [mem 0x00000000b9fdd000-0x00000000cc746fff] usable
>> [    0.000000] Xen: [mem 0x00000000cc747000-0x00000000cc94cfff] reserved
>> [    0.000000] Xen: [mem 0x00000000cc94d000-0x00000000cc961fff] ACPI data
>> [    0.000000] Xen: [mem 0x00000000cc962000-0x00000000cce03fff] ACPI NVS
>> [    0.000000] Xen: [mem 0x00000000cce04000-0x00000000cdffefff] reserved
>> [    0.000000] Xen: [mem 0x00000000cdfff000-0x00000000cdffffff] usable
>> [    0.000000] Xen: [mem 0x00000000cf000000-0x00000000df1fffff] reserved
>> [    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
>> [    0.000000] Xen: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
>> [    0.000000] Xen: [mem 0x0000000100000000-0x000000021fdfffff] usable
>> [    0.000000] NX (Execute Disable) protection: active
>> [    0.000000] SMBIOS 2.7 present.
>> [    0.000000] DMI: ASUS All Series/Z87-PRO, BIOS 0801 04/19/2013
>> [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
>> [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
>> [    0.000000] e820: last_pfn = 0x21fe00 max_arch_pfn = 0x400000000
>> [    0.000000] e820: last_pfn = 0xce000 max_arch_pfn = 0x400000000
>> 
>> Obviously, dom0 now sees the whole physical memory map, and the
>> last_pfn is 0x21fe00, which exceeds the 2G bound. I wonder why the
>> real physical memory map is used? At least the sysrq output is now
>> wrong.
>> 
>> Here is the changeset to use the physical E820 table:
>> 
>> commit 9e9a5fcb04e3af077d1be32710298b852210d93f
>> Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
>> Date:   Thu Sep 2 16:16:00 2010 +0100
>> 
>>     xen: use host E820 map for dom0
>>     
>>     When running as initial domain, get the real physical memory map
>>     from xen using the XENMEM_machine_memory_map hypercall and use it
>>     to setup the e820 regions.
>> 
>> memory info from sysrq:
>> [   97.145517] SysRq : Show Memory
>> [   97.146701] Mem-Info:
>> [   97.147502] DMA per-cpu:
>> [   97.148389] CPU    0: hi:    0, btch:   1 usd:   0
>> [   97.150070] CPU    1: hi:    0, btch:   1 usd:   0
>> [   97.151750] CPU    2: hi:    0, btch:   1 usd:   0
>> [   97.153422] CPU    3: hi:    0, btch:   1 usd:   0
>> [   97.155093] DMA32 per-cpu:
>> [   97.156041] CPU    0: hi:  186, btch:  31 usd: 179
>> [   97.157718] CPU    1: hi:  186, btch:  31 usd:  45
>> [   97.159393] CPU    2: hi:  186, btch:  31 usd: 142
>> [   97.160324] CPU    3: hi:  186, btch:  31 usd: 153
>> [   97.160713] Normal per-cpu:
>> [   97.160940] CPU    0: hi:    0, btch:   1 usd:   0
>> [   97.161326] CPU    1: hi:    0, btch:   1 usd:   0
>> [   97.161715] CPU    2: hi:    0, btch:   1 usd:   0
>> [   97.162103] CPU    3: hi:    0, btch:   1 usd:   0
>> [   97.162492] active_anon:54690 inactive_anon:984 isolated_anon:0
>> [   97.162492]  active_file:25003 inactive_file:48300 isolated_file:0
>> [   97.162492]  unevictable:2 dirty:323 writeback:0 unstable:0
>> [   97.162492]  free:569113 slab_reclaimable:7449 slab_unreclaimable:5467
>> [   97.162492]  mapped:15956 shmem:1098 pagetables:10222 bounce:0
>> [   97.162492]  free_cma:0
>> [   97.165045] DMA free:12764kB min:36kB low:44kB high:52kB active_anon:1196kB inactive_anon:12kB active_file:516kB inactive_file:860kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15896kB mlocked:0kB dirty:12kB writeback:0kB mapped:288kB shmem:12kB slab_reclaimable:160kB slab_unreclaimable:72kB kernel_stack:32kB pagetables:220kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [   97.168243] lowmem_reserve[]: 0 2818 2818 2818
>> [   97.168676] DMA32 free:2263688kB min:6772kB low:8464kB high:10156kB active_anon:217564kB inactive_anon:3924kB active_file:99496kB inactive_file:192340kB unevictable:8kB isolated(anon):0kB isolated(file):0kB present:3329180kB managed:2890396kB mlocked:8kB dirty:1280kB writeback:0kB mapped:63536kB shmem:4380kB slab_reclaimable:29636kB slab_unreclaimable:21796kB kernel_stack:2560kB pagetables:40668kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
>> [   97.244293] lowmem_reserve[]: 0 0 0 0
>> [   97.270081] Normal free:0kB min:0kB low:0kB high:0kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4716544kB managed:0kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
>> [   97.355942] lowmem_reserve[]: 0 0 0 0
>> [   97.384106] DMA: 2*4kB (U) 1*8kB (U) 1*16kB (M) 2*32kB (EM) 2*64kB (EM) 2*128kB (UM) 2*256kB (EM) 1*512kB (E) 3*1024kB (UEM) 2*2048kB (ER) 1*4096kB (M) = 12768kB
>> [   97.414040] DMA32: 132*4kB (UM) 121*8kB (UEM) 71*16kB (UEM) 18*32kB (UEM) 2*64kB (UE) 4*128kB (UE) 2*256kB (U) 3*512kB (U) 1*1024kB (U) 2*2048kB (EM) 550*4096kB (MR) = 2263816kB
>> [   97.444916] Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
>> [   97.476175] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
>> [   97.508023] 74406 total pagecache pages
>> [   97.539701] 0 pages in swap cache
>> [   97.571126] Swap cache stats: add 0, delete 0, find 0/0
>> [   97.602918] Free swap  = 10485756kB
>> [   97.634737] Total swap = 10485756kB
>> [   97.666398] 2015427 pages RAM
>> [   97.697824] 0 pages HighMem/MovableOnly
>> [   97.729134] 1179136 pages reserved
> 
> .. output says that 1179136 pages are reserved. Which is true.
> 
> If you add the numbers and subtract the reserved pages you should get
> the exact amount of pages you booted the initial domain with.

Yes, it is correct after subtracting the reserved pages. But the problem is why
dom0 sees them at all. Per my understanding, it is OK to expose the real e820
table to dom0, but the total amount of memory dom0 sees should be identical to
what Xen gave it, just as when we add the 'mem=' parameter to dom0's command
line.
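As a sanity check on where the 2015427 figure comes from: summing the "usable" ranges of the host e820 map quoted above accounts for it almost exactly. A quick script (the ranges are copied from the boot log in this thread; e820 end addresses are inclusive, pages are 4 KiB):

```python
# "usable" ranges from the host e820 map above (inclusive end addresses).
usable = [
    (0x0000000000000000, 0x000000000009cfff),
    (0x0000000000100000, 0x00000000b9775fff),
    (0x00000000b977d000, 0x00000000b9bc2fff),
    (0x00000000b9fdd000, 0x00000000cc746fff),
    (0x00000000cdfff000, 0x00000000cdffffff),
    (0x0000000100000000, 0x000000021fdfffff),
]

# Total usable pages in the host map, 4 KiB per page.
pages = sum((end + 1 - start) // 4096 for start, end in usable)
print(pages)      # 2015428 usable pages in the host map
print(pages - 1)  # 2015427 once page 0 is marked reserved, matching sysrq
```

So the sysrq "pages RAM" total is simply the host map's usable page count (less page 0, which the kernel marks reserved), not the amount Xen actually gave dom0.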

For example, if we add mem=1G to dom0's boot line, the page count is exactly
262044, not the whole 2015427:
[  111.125949] Swap cache stats: add 0, delete 0, find 0/0
[  111.125955] Free swap  = 10485756kB
[  111.125959] Total swap = 10485756kB
[  111.125964] 262044 pages RAM
[  111.125969] 0 pages HighMem/MovableOnly
[  111.125973] 26041 pages reserved
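The same arithmetic shows that the mem=1G figure is consistent: 262044 pages of 4 KiB each come to just under 1 GiB, so with mem= the kernel's notion of total RAM matches what it was given (modulo the small holes below 1M):

```python
# With mem=1G, sysrq reports 262044 pages RAM; check that this is ~1 GiB.
pages_ram = 262044
kib = pages_ram * 4   # 4 KiB per page
print(kib)            # 1048176 KiB, just under the 1048576 KiB in 1 GiB
```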

> 
>> 
>> best regards
>> yang
>> 
>>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

