[Xen-devel] RE: [PATCH] x86: fix NUMA handling (c/s 20599)

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: [Xen-devel] RE: [PATCH] x86: fix NUMA handling (c/s 20599)
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Thu, 7 Jan 2010 16:01:02 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 07 Jan 2010 00:03:15 -0800
In-reply-to: <4B459E720200007800028950@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4B44BBF30200007800028783@xxxxxxxxxxxxxxxxxx> <C8EDE645B81E5141A8C6B2F73FD92651138B597688@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B459E720200007800028950@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcqPbOoaiAovkyj0Sziydmr+aJ7TOwAAfjFA
Thread-topic: [PATCH] x86: fix NUMA handling (c/s 20599)

>-----Original Message-----
>From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
>Sent: Thursday, January 07, 2010 3:42 PM
>To: Jiang, Yunhong
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [PATCH] x86: fix NUMA handling (c/s 20599)
>
>>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 07.01.10 04:03 >>>
>>But for this specific SRAT, it seems we can also improve the construction
>>of node_memblk_range[], so that we can merge 0~a0000 with
>>100000-80000000, and 80000000-d0000000 with
>>100000000-130000000. Depending on whether we need to keep the BIOS's
>>memory affinity information, we can either create a new structure to
>>hold the result, or simply compact node_memblk_range[]. The benefit
>>is that we can reduce the size of memnodemap, but I'm not sure
>>whether that is still needed after this patch.
>
>Yes, I had considered this too. But since this is code ported from Linux,
>I'd like to get buy-off on this on the Linux side first. (And yes, I do
>think this would be an improvement - not just because of the memory
>savings, but also because of the [perhaps significantly] reduced
>cache footprint resulting from the array accesses: Only two array
>elements are really needed for the shown memory layout.)
>Why would you, btw., think that BIOS affinity information would get
>lost when merging entries in this case?

After all, for this SRAT table, address 0xb0000 does not belong to node 0,
while it does belong to node 0 after the merge. Yes, I agree it does not affect
software (OS/VMM), but still, it is different.
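
For illustration, below is a minimal, self-contained sketch of the kind of
compaction I mean. The struct and helper names are made up for the example
(not taken from the Xen or Linux tree), and it assumes the memblk array is
sorted by start address and that the two SRAT pairs above belong to nodes 0
and 1 respectively:

/* Sketch: compact a sorted node_memblk_range[]-style array by merging
 * adjacent entries that belong to the same node and are separated only
 * by holes.  Because the array is sorted by start address, a block from
 * another node sitting in the hole would break the run, so the simple
 * "same node as previous" check is enough here. */
#include <stdio.h>
#include <stdint.h>

struct memblk {
    uint64_t start, end;   /* [start, end) physical address range */
    int nid;               /* node this block belongs to */
};

/* Merge runs of blocks with the same node id; returns the new count. */
static int compact_memblks(struct memblk *blk, int n)
{
    int i, j = 0;

    for (i = 1; i < n; i++) {
        if (blk[i].nid == blk[j].nid)
            blk[j].end = blk[i].end;   /* extend across the hole */
        else
            blk[++j] = blk[i];
    }
    return n ? j + 1 : 0;
}

int main(void)
{
    /* The layout from the SRAT discussed in this thread. */
    struct memblk blk[] = {
        { 0x00000000ULL,  0x000a0000ULL, 0 },
        { 0x00100000ULL,  0x80000000ULL, 0 },
        { 0x80000000ULL,  0xd0000000ULL, 1 },
        { 0x100000000ULL, 0x130000000ULL, 1 },
    };
    int i, n = compact_memblks(blk, 4);

    /* Expect two entries: node 0: 0-0x80000000, node 1: 0x80000000-0x130000000 */
    for (i = 0; i < n; i++)
        printf("node %d: %#llx-%#llx\n", blk[i].nid,
               (unsigned long long)blk[i].start,
               (unsigned long long)blk[i].end);
    return 0;
}

For the layout above this collapses the four entries into two, which is where
the memnodemap size and cache-footprint saving you mention would come from.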

>
>Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
