[Xen-devel] RFC: large system support - 128 CPUs

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RFC: large system support - 128 CPUs
From: Bill Burns <bburns@xxxxxxxxxx>
Date: Tue, 12 Aug 2008 14:41:10 -0400
Delivery-date: Tue, 12 Aug 2008 11:41:32 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080515)
There are a couple of issues with building the hypervisor
with max_phys_cpus=128 for x86_64. (Note that this was
on a 3.1 base, but unstable appears to have
the same issues, at least the first one.)

First is a build assertion: the page_info and
shadow_page_info structures get out of sync in size,
because page_info contains a cpumask_t whose bitmap
grows beyond a single long once NR_CPUS exceeds 64.
(A standalone sketch of the size math follows the
patch below.)

A possible fix is to tack the following onto the
end of the shadow_page_info structure:

--- xen/arch/x86/mm/shadow/private.h.orig       2007-12-06 12:48:38.000000000 -0500
+++ xen/arch/x86/mm/shadow/private.h    2008-08-12 12:52:49.000000000 -0400
@@ -243,6 +243,12 @@ struct shadow_page_info
         /* For non-pinnable shadows, a higher entry that points at us */
         paddr_t up;
     };
+#if NR_CPUS > 64
+    /* Need to add some padding to match struct page_info size,
+     * if cpumask_t is larger than a long.
+     */
+    u8 padding[sizeof(cpumask_t)-sizeof(long)];
+#endif
 };

 /* The structure above *must* be the same size as a struct page_info
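
To make the size math concrete, here is a quick
standalone sketch (ordinary C, not Xen code; NR_CPUS
and cpumask_t are re-declared here along the lines of
the real definitions):

#include <stdio.h>

#define NR_CPUS          128
#define BITS_PER_LONG    (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Like the real cpumask_t: a bitmap with one bit per possible CPU. */
typedef struct { unsigned long bits[BITS_TO_LONGS(NR_CPUS)]; } cpumask_t;

int main(void)
{
    /* On x86_64 with NR_CPUS = 128 this is 16 bytes (two longs);
     * with NR_CPUS <= 64 it is 8, the same as a single long, and
     * the two structures stay the same size without padding. */
    printf("sizeof(cpumask_t) = %zu\n", sizeof(cpumask_t));
    printf("padding needed    = %zu\n",
           sizeof(cpumask_t) - sizeof(long));
    return 0;
}

At 128 cpus the padding comes out to 8 bytes, which is
what the sizeof(cpumask_t)-sizeof(long) expression in
the patch supplies.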



The other issue is a runtime fault when trying to
bring up cpu 126. It seems the GDT space reserved
is not quite enough to hold the per-cpu entries.
A crude fix (awaiting test results, so I am not
sure this is sufficient; a back-of-the-envelope
check of the entry counts follows the patch):

--- xen/include/asm-x86/desc.h.orig     2007-12-06 12:48:39.000000000 -0500
+++ xen/include/asm-x86/desc.h  2008-07-31 13:19:52.000000000 -0400
@@ -5,7 +5,11 @@
  * Xen reserves a memory page of GDT entries.
  * No guest GDT entries exist beyond the Xen reserved area.
  */
+#if MAX_PHYS_CPUS > 64
+#define NR_RESERVED_GDT_PAGES   2
+#else
 #define NR_RESERVED_GDT_PAGES   1
+#endif
 #define NR_RESERVED_GDT_BYTES   (NR_RESERVED_GDT_PAGES * PAGE_SIZE)
 #define NR_RESERVED_GDT_ENTRIES (NR_RESERVED_GDT_BYTES / 8)
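
As a sanity check on the arithmetic, a back-of-the-envelope
sketch (ordinary C, not Xen code). The constants are my
assumptions: on x86_64 a TSS descriptor and an LDT descriptor
are 16 bytes each, i.e. two 8-byte GDT slots per descriptor
and four slots per cpu, and FIXED_SLOTS stands in for however
many fixed Xen/guest segment entries precede the per-cpu area:

#include <stdio.h>

#define PAGE_SIZE        4096
#define GDT_ENTRY_BYTES  8
#define SLOTS_PER_PAGE   (PAGE_SIZE / GDT_ENTRY_BYTES)  /* 512 slots */
#define SLOTS_PER_CPU    4   /* x86_64: 16-byte TSS + 16-byte LDT    */
#define FIXED_SLOTS      8   /* hypothetical count of fixed entries  */

int main(void)
{
    /* CPUs whose TSS/LDT descriptors fit after the fixed entries. */
    printf("CPUs in 1 reserved page:  %d\n",
           (SLOTS_PER_PAGE     - FIXED_SLOTS) / SLOTS_PER_CPU);
    printf("CPUs in 2 reserved pages: %d\n",
           (2 * SLOTS_PER_PAGE - FIXED_SLOTS) / SLOTS_PER_CPU);
    return 0;
}

With those numbers the one-page limit works out to 126 cpus
(cpus 0-125), so cpu 126 would be exactly the first one whose
descriptors no longer fit; two reserved pages leave ample
room for 128.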


Bill



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel