xen-devel

Re: [Xen-devel] [PATCH] numa: select nodes by cpu affinity

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] numa: select nodes by cpu affinity
From: Andrew Jones <drjones@xxxxxxxxxx>
Date: Wed, 04 Aug 2010 18:01:56 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "dulloor@xxxxxxxxx" <dulloor@xxxxxxxxx>
Delivery-date: Wed, 04 Aug 2010 09:03:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C87F39EC.1CBB2%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C87F39EC.1CBB2%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.11) Gecko/20100720 Fedora/3.0.6-1.fc12 Lightning/1.0b2pre Thunderbird/3.0.6
On 08/04/2010 04:38 PM, Keir Fraser wrote:
> Changeset 21913 in http://xenbits.xen.org/staging/xen-unstable.hg
> 
>  -- Keir
> 
> On 04/08/2010 13:38, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:
> 
>> Good idea. I will take this and rework it a bit and check it in.
>>
>>  Thanks,
>>  Keir
>>
>> On 04/08/2010 13:04, "Andrew Jones" <drjones@xxxxxxxxxx> wrote:
>

I also considered managing the nodemask as new domain state, as you do,
since it may come in useful elsewhere, but my principle-of-least-patch
instincts kept me from doing it...
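
For reference, roughly what I had in mind -- an untested sketch only.
The field names (v->cpu_affinity, d->node_affinity) and mask helpers
are my assumptions from the tree, not quoted from cs 21913:

/* Untested sketch: derive a domain's node affinity from the CPU
 * affinities of its vcpus.  Field and helper names are assumed. */
void domain_update_node_affinity(struct domain *d)
{
    nodemask_t nodemask = NODE_MASK_NONE;
    struct vcpu *v;
    unsigned int cpu;

    for_each_vcpu ( d, v )
        for_each_cpu_mask ( cpu, v->cpu_affinity )
            node_set(cpu_to_node(cpu), nodemask);

    d->node_affinity = nodemask;
}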

I'm not sure about keeping track of last_alloc_node and then always
avoiding it (at least when there is more than one node) by checking it
last. I liked the way it worked before, favoring the node of the
currently running processor, but I don't have any perf numbers to say
which approach is better.
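
To make the comparison concrete, the two orders look roughly like
this. Sketch only; pick_node_*() and node_has_memory() are made-up
names, not anything from the changeset:

/* Old behaviour: start from the node of the current processor. */
static unsigned int pick_node_old(const nodemask_t *allowed)
{
    unsigned int node = cpu_to_node(smp_processor_id());

    if ( !node_isset(node, *allowed) )
        node = first_node(*allowed);
    return node;
}

/* New behaviour: try every allowed node, deferring last_alloc_node
 * to the end so consecutive allocations spread across nodes. */
static unsigned int last_alloc_node;

static unsigned int pick_node_new(const nodemask_t *allowed)
{
    unsigned int node, deferred = last_alloc_node;

    for_each_node_mask ( node, *allowed )
        if ( node != deferred && node_has_memory(node) )
            goto out;
    node = deferred;   /* nothing else suitable; reuse the last node */
 out:
    last_alloc_node = node;
    return node;
}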

I've attached a patch with a couple of minor tweaks. It removes the
unnecessary clearing of nodes from an already-empty, freshly
initialized nodemask, and it moves a couple of
domain_update_node_affinity() calls outside of for_each_vcpu loops.
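
The second of those is just hoisting a loop-invariant call;
schematically (hypothetical caller, not a hunk from my patch):

/* Before: node affinity recomputed once per vcpu. */
for_each_vcpu ( d, v )
{
    v->cpu_affinity = new_affinity;   /* hypothetical affinity update */
    domain_update_node_affinity(d);
}

/* After: update every vcpu first, then recompute once per domain. */
for_each_vcpu ( d, v )
    v->cpu_affinity = new_affinity;
domain_update_node_affinity(d);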

Andrew

Attachment: nodemask-tweak.diff
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel