WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-devel] Problem with nr_nodes on large memory NUMA machine

To: Xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Problem with nr_nodes on large memory NUMA machine
From: beth kon <eak@xxxxxxxxxx>
Date: Fri, 19 Oct 2007 11:02:05 -0400
Delivery-date: Fri, 19 Oct 2007 08:03:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: IBM
Reply-to: eak@xxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.8-1.1.fc4 (X11/20060501)
We've run into an issue with an 8-node x3950 where xm info shows only 6 nodes. I've traced the problem to the clip_to_limit function in arch/x86/e820.c.

#ifdef __x86_64__
  clip_to_limit((uint64_t)(MACH2PHYS_COMPAT_VIRT_END -
                           __HYPERVISOR_COMPAT_VIRT_START) << 10,
                "Only the first %u GB of the physical memory map "
                "can be accessed by 32-on-64 guests.");
#endif

Boot messages....
(XEN) WARNING: Only the first 166 GB of the physical memory map can be accessed by 32-on-64 guests.
(XEN) Truncating memory map to 174063616kB

After the memory is clipped, acpi_scan_nodes runs cutoff_node, which trims the memory range associated with each node to the cutoff values. Then acpi_scan_nodes calls unparse_node to "remove" any node that, after the clipping of the memory range, no longer has the minimum amount of memory.

Can someone explain what this is all about and why it might be necessary?

--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak@xxxxxxxxxx


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
