
RE: [Xen-devel] Xen 3.4.1 NUMA support



> Overcommitting the nodes (letting multiple guests use each node) lowered
> the values to about 80% for two guests and 60% for three guests per
> node, but it never got anywhere close to the numa=off values.
> So these results encourage me again to opt for numa=on as the default
> value.
> Keir, I will check if dropping the node containment in the CPU
> overcommitment case is an option, but what would be the right strategy
> in that case?
> Warn the user?
> Don't contain at all?
> Contain to more than one node?

In the case where a VM is asking for more vCPUs than there are pCPUs in a 
node, we should contain the guest to multiple nodes. (I presume we favour 
nodes according to the number of vCPUs already committed to them?)
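
Something like the following rough sketch, perhaps (plain Python with 
hypothetical names and inputs, not the actual xend interfaces):

    def pick_nodes(nr_vcpus, nodes):
        # nodes is a list of (node_id, nr_pcpus, committed_vcpus) tuples.
        # Favour nodes with the fewest vCPUs already committed, and keep
        # taking nodes until their pCPUs can cover the guest's vCPUs.
        chosen, covered = [], 0
        for node_id, nr_pcpus, committed in sorted(nodes, key=lambda n: n[2]):
            chosen.append(node_id)
            covered += nr_pcpus
            if covered >= nr_vcpus:
                break
        return chosen

    # e.g. a 6-vCPU guest on 4-pCPU nodes spills onto the two least
    # committed nodes:
    # pick_nodes(6, [(0, 4, 3), (1, 4, 1), (2, 4, 5)]) -> [1, 0]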

We should turn off automatic node containment of any kind if the total number 
of pCPUs in the system is <= 8 -- on such systems the statistical multiplexing 
gain of having access to more pCPUs likely outweighs the NUMA placement 
benefit, and memory striping will be a better strategy. I'm inclined to 
believe that may be true for two-node systems with <= 16 pCPUs too, under many 
workloads.
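
As a decision rule it might look something like this (the thresholds are just 
the ones proposed above, not measured values):

    def should_contain(total_pcpus, nr_nodes):
        # Below 8 pCPUs, letting guests float over the whole machine
        # (statistical multiplexing) likely beats NUMA locality, so
        # stripe memory and skip containment.
        if total_pcpus <= 8:
            return False
        # Arguably the same holds for small two-node boxes.
        if nr_nodes == 2 and total_pcpus <= 16:
            return False
        return True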

I'd really like to see us enumerate pCPUs in a sensible order so that it's 
easier to see the topology. It should be nodes.sockets.cores{.threads}, 
leaving gaps for execution units that are missing due to hot plug or 
non-power-of-two packing.
Right now the enumeration order is inconsistent, depending on how the BIOS 
has set things up. It would be great if someone could volunteer to fix this...
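
To illustrate the numbering scheme (a hypothetical sketch; the stride choices 
are mine, not an existing Xen scheme):

    def canonical_cpu_id(node, socket, core, thread,
                         sockets_per_node, cores_per_socket,
                         threads_per_core):
        # Round each stride up to a power of two so that hot-plugged or
        # oddly-packed parts leave gaps instead of renumbering everything;
        # the topology can then be read straight off the ID.
        def pow2(n):
            p = 1
            while p < n:
                p <<= 1
            return p
        s = pow2(sockets_per_node)
        c = pow2(cores_per_socket)
        t = pow2(threads_per_core)
        return ((node * s + socket) * c + core) * t + thread

    # e.g. 2 sockets/node, 3 cores/socket (padded to 4), 2 threads/core:
    # node 1, socket 0, core 2, thread 1 -> canonical_cpu_id(...) == 21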

Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

