xen-devel

Re: [Xen-devel] RFC: large system support - 128 CPUs

To: "Bill Burns" <bburns@xxxxxxxxxx>
Subject: Re: [Xen-devel] RFC: large system support - 128 CPUs
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Wed, 13 Aug 2008 09:21:14 +0100
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 13 Aug 2008 01:20:46 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <48A1D946.4040608@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48A1D946.4040608@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Both seem to be hacks to get to 128 CPUs, with no consideration of how
to go beyond that, or perhaps even of how to drop the fixed
(compile-time) limit altogether. Since we have to expect to be run on
larger systems not too far into the future, I think we rather need to
explore how to address these issues (and any potential others) in a
fully scalable way.
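
To illustrate the direction I mean: a minimal sketch (plain C,
entirely outside of existing Xen code; the names and the calloc()
stand-in for the hypervisor allocator are made up) of sizing cpumasks
from the CPU count discovered at boot, so that space-critical
structures such as struct page_info carry only a pointer instead of
growing with NR_CPUS:

#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical dynamically sized cpumask: a bare pointer plus one
 * boot-time word count. */
typedef unsigned long *dyn_cpumask_t;

static unsigned int cpumask_words; /* set once from the detected CPU count */

static void dyn_cpumask_init(unsigned int detected_cpus)
{
    cpumask_words = (detected_cpus + BITS_PER_LONG - 1) / BITS_PER_LONG;
}

static dyn_cpumask_t dyn_cpumask_alloc(void)
{
    /* calloc() stands in for the hypervisor's own allocator here. */
    return calloc(cpumask_words, sizeof(unsigned long));
}

static void dyn_cpumask_set(dyn_cpumask_t mask, unsigned int cpu)
{
    mask[cpu / BITS_PER_LONG] |= 1UL << (cpu % BITS_PER_LONG);
}

The only compile-time constant left in such a scheme is BITS_PER_LONG;
everything else follows the size of the system actually booted on.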

Jan

>>> Bill Burns <bburns@xxxxxxxxxx> 12.08.08 20:41 >>>

There are a couple of issues with building the hypervisor
with max_phys_cpus=128 for x86_64. (Note that this was
on a 3.1 base, but unstable appears to have the same
issues, at least the first one.)

The first is a build assertion: the sizes of the
page_info and shadow_page_info structures get out
of sync, because the cpumask_t in the page_info
structure grows beyond a single long once NR_CPUS
exceeds 64.

A possible fix is to tack the following onto the
end of the shadow_page_info structure:

--- xen/arch/x86/mm/shadow/private.h.orig       2007-12-06 12:48:38.000000000 -0500
+++ xen/arch/x86/mm/shadow/private.h    2008-08-12 12:52:49.000000000 -0400
@@ -243,6 +243,12 @@ struct shadow_page_info
         /* For non-pinnable shadows, a higher entry that points at us */
         paddr_t up;
     };
+#if NR_CPUS > 64
+    /* Need to add some padding to match struct page_info size,
+     * if cpumask_t is larger than a long.
+     */
+    u8 padding[sizeof(cpumask_t)-sizeof(long)];
+#endif
 };

 /* The structure above *must* be the same size as a struct page_info
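
For reference, the assertion that trips is the usual compile-time
size check; a standalone sketch of the technique (the structure and
macro names here are illustrative only, not the exact ones from the
shadow code):

#define BUILD_ASSERT(cond) ((void)sizeof(char[1 - 2 * !(cond)]))

/* Two structures that are required to stay the same size. */
struct first  { unsigned long a[4]; };
struct second { unsigned long b[4]; };

static inline void check_struct_sizes(void)
{
    /* Stops compiling as soon as the two drift apart in size, which
     * is what a >64-CPU cpumask_t in only one of them does. */
    BUILD_ASSERT(sizeof(struct first) == sizeof(struct second));
}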



The other issue is a runtime fault when trying to
bring up CPU 126. It seems the GDT space reserved
is not quite enough to hold the per-CPU entries.
A crude fix (awaiting test results, so not sure
that this is sufficient):

--- xen/include/asm-x86/desc.h.orig     2007-12-06 12:48:39.000000000 -0500
+++ xen/include/asm-x86/desc.h  2008-07-31 13:19:52.000000000 -0400
@@ -5,7 +5,11 @@
  * Xen reserves a memory page of GDT entries.
  * No guest GDT entries exist beyond the Xen reserved area.
  */
+#if MAX_PHYS_CPUS > 64
+#define NR_RESERVED_GDT_PAGES   2
+#else
 #define NR_RESERVED_GDT_PAGES   1
+#endif
 #define NR_RESERVED_GDT_BYTES   (NR_RESERVED_GDT_PAGES * PAGE_SIZE)
 #define NR_RESERVED_GDT_ENTRIES (NR_RESERVED_GDT_BYTES / 8)
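
Rough arithmetic behind running out of room around CPU 126 (the
per-CPU slot count and the fixed-entry count below are assumptions
for illustration, not values taken from desc.h; on x86_64 a TSS or
LDT descriptor occupies two 8-byte GDT entries):

#include <stdio.h>

#define PAGE_SIZE        4096
#define GDT_ENTRY_BYTES  8
#define ENTRIES_PER_PAGE (PAGE_SIZE / GDT_ENTRY_BYTES)  /* 512 */
#define ASSUMED_PER_CPU  4   /* TSS + LDT descriptors (assumption) */
#define ASSUMED_FIXED    16  /* Xen's own fixed entries (assumption) */

int main(void)
{
    unsigned int cpus;

    for ( cpus = 64; cpus <= 256; cpus *= 2 )
    {
        unsigned int entries = ASSUMED_FIXED + cpus * ASSUMED_PER_CPU;
        unsigned int pages =
            (entries + ENTRIES_PER_PAGE - 1) / ENTRIES_PER_PAGE;

        printf("%3u CPUs -> %4u entries -> %u reserved GDT page(s)\n",
               cpus, entries, pages);
    }
    return 0;
}

With these assumed numbers, 128 CPUs already needs more than the 512
entries a single page holds, which is consistent with the failure
showing up around CPU 126 and with bumping NR_RESERVED_GDT_PAGES to 2
above.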


Bill



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx 
http://lists.xensource.com/xen-devel

