Re: [Xen-devel] RFC: automatic NUMA placement

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] RFC: automatic NUMA placement
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Mon, 27 Sep 2010 23:46:59 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <4C921DDF.6020809@xxxxxxxxxxxxxx>
References: <4C921DDF.6020809@xxxxxxxxxxxxxx>
Juergen Gross wrote:
> Hi,
>
> I just stumbled upon the automatic pinning of vcpus on domain creation in
> case of NUMA.
> This behaviour is questionable IMO, as it breaks correct handling of
> scheduling weights on NUMA machines.
> I would suggest switching this feature off by default and making it a
> configuration option of xend. It would make sense, however, to change
> cpu pool processor allocation to be NUMA-aware.
> Switching NUMA off via boot option would remove NUMA-optimized memory
> allocation, which would be sub-optimal :-)

Hi Jürgen,

I stumbled over your mail only just now, so sorry for the delay.
First: Don't turn off automatic NUMA placement ;-)
In my tests it helped a lot to preserve performance on NUMA machines.
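
(For context: what the automatic placement effectively does is equivalent to a manual vcpu pin in the domain config, with the node chosen by xend at creation time. On a hypothetical box whose node 0 covers pcpus 0-5, it would amount to something like

    # assumed topology: node 0 = pcpus 0-5; normally xend picks the
    # node automatically, no per-domain configuration needed
    cpus = "0-5"

with memory then coming from node 0 as a consequence of the affinity.)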

I was just browsing through the ML archive to find your original CPU pools description from April, and it seems to fit the requirements of NUMA machines quite well. I haven't done any experiments with Cpupools nor looked at the code yet, but just a quick idea:
What if we married static NUMA placement and Cpupools?

I'd suggest introducing static NUMA pools, one for each node. The CPUs assigned to each pool would be fixed and could neither be removed nor added (because the NUMA topology is fixed); a rough sketch follows below. Is that possible? Can we assign one physical CPU to multiple pools (to Pool-0 and to NUMA-0)? Or are they exclusive, or hierarchical like Linux' cpusets?
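
To make this concrete (the config keys below are just made up in analogy to your cpupool patches, I haven't checked the actual syntax): each node's pool would look to the admin as if it had been defined by

    # hypothetical built-in pool for NUMA node 0 (pcpus from the SRAT)
    name  = "NUMA-0"
    cpus  = "0-5"     # fixed by the topology, no add/remove allowed
    sched = "credit"

only that these pools would exist from boot and reject any CPU add or remove operation.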

We could introduce magic names for each NUMA pool, so that people just say cpupool="NUMA-2" and get their domain pinned to that pool. Without any explicit assignment the system would pick a NUMA node (like it does today) and would just use the respective Cpupool. I think that is very similar to what it does today, except that the pinning would be more evident to the user (as it would show up in the Cpupool name space). It would also allow users to override the pinning by specifying a different Cpupool explicitly (like Pool-0).
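
In a domain config file this would then read (the cpupool key is just my proposal from above, nothing of this exists yet):

    # pin the domain's vcpus (and thus its memory) to node 2's pool
    cpupool = "NUMA-2"

and leaving the line out would give today's automatic placement, just expressed through a pool name.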

Just tell me what you think about this and whether there is a flaw in my thinking ;-)

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12

