[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] RE: [PATCH] x86: fix NUMA handling (c/s 20599)




>-----Original Message-----
>From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
>Sent: Thursday, January 07, 2010 3:42 PM
>To: Jiang, Yunhong
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [PATCH] x86: fix NUMA handling (c/s 20599)
>
>>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 07.01.10 04:03 >>>
>>But for this specific SRAT, it seems we can also improve the
>>construction of node_memblk_range[], so that we can merge 0~a0000 with
>>100000-80000000, and 80000000-d0000000 with
>>100000000-130000000. Depending on whether we need to keep the BIOS's
>>memory affinity information, we can either create a new structure to
>>hold the result, or simply compact node_memblk_range[]. The
>>benefit is that we can reduce the size of memnodemap, but I'm not sure
>>whether that is still needed after this patch.
>
>Yes, I had considered this too. But since this is code ported from Linux,
>I'd like to get buy-off on this on the Linux side first. (And yes, I do
>think this would be an improvement - not just because of the memory
>savings, but also because of the [perhaps significantly] reduced
>cache footprint resulting from the array accesses: Only two array
>elements are really needed for the shown memory layout.)
>Why would you, btw., think that BIOS affinity information would get
>lost when merging entries in this case?

After all, for this SRAT table, address 0xb0000 does not belong to node 0,
whereas after the merge it does. Yes, I agree this does not affect software
(OS/VMM), but still, it is different.

>
>Jan
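The merging discussed above can be sketched roughly as follows. This is a hedged, self-contained illustration, not the actual Xen patch: the array names node_memblk_range[], memblk_nodeid[], and num_node_memblks are taken from the Linux-derived NUMA code, but struct node_range, NR_NODE_MEMBLKS, and compact_memblks() are simplified stand-ins invented here. It assumes the blocks are already sorted by address, so two consecutive entries with the same node ID can only be separated by a hole (such as the legacy VGA region at 0xa0000-0x100000 or the PCI hole below 4GB), never by another node's memory.

```c
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the Xen/Linux SRAT arrays. */
struct node_range { unsigned long long start, end; };

#define NR_NODE_MEMBLKS 8

static struct node_range node_memblk_range[NR_NODE_MEMBLKS];
static int memblk_nodeid[NR_NODE_MEMBLKS];
static int num_node_memblks;

/*
 * Compact node_memblk_range[] in place by merging each block into its
 * predecessor when both belong to the same node.  Because the array is
 * sorted by address, the gap between two consecutive same-node blocks
 * is guaranteed to hold no other node's memory, so extending the
 * earlier block across the hole cannot misattribute another node's
 * ranges -- it only (as noted in the thread) makes hole addresses such
 * as 0xb0000 appear to belong to the surrounding node.
 */
static void compact_memblks(void)
{
    int i, j = 0;

    for (i = 1; i < num_node_memblks; i++) {
        if (memblk_nodeid[i] == memblk_nodeid[j]) {
            /* Same node: extend the previous block across the hole. */
            node_memblk_range[j].end = node_memblk_range[i].end;
        } else {
            /* Different node: keep this block as a new entry. */
            ++j;
            node_memblk_range[j] = node_memblk_range[i];
            memblk_nodeid[j] = memblk_nodeid[i];
        }
    }
    num_node_memblks = j + 1;
}
```

For the SRAT layout quoted in the thread (node 0: 0-a0000 and 100000-80000000; node 1: 80000000-d0000000 and 100000000-130000000), this collapses four entries into two, which is what shrinks memnodemap and the cache footprint of the array lookups.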


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

