x86/numa: Improve internode cache alignment
Currently, cache alignment among nodes in the kernel is still 128
bytes on x86 NUMA machines - that X86_INTERNODE_CACHE_SHIFT default
of 7 dates back to the old Pentium 4 processors.

Most modern x86 CPUs, however, use the same 64-byte line size at
every level, from L1 down to the last-level L3 cache. So remove the
stale setting and use the L1 cache size directly for SMP cache line
alignment.
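
For context, the Kconfig shift value feeds the kernel's internode
padding macros. A minimal sketch of the mechanism, assuming a
CONFIG_X86_INTERNODE_CACHE_SHIFT value provided by Kconfig and using
user-space attribute syntax rather than verbatim kernel source:

  /* How a Kconfig shift becomes an alignment: shift 6 gives 64-byte
   * padding, the old NUMA default of 7 gave 128 bytes.
   * CONFIG_X86_INTERNODE_CACHE_SHIFT is assumed to come from Kconfig. */
  #define INTERNODE_CACHE_SHIFT  CONFIG_X86_INTERNODE_CACHE_SHIFT
  #define INTERNODE_CACHE_BYTES  (1 << INTERNODE_CACHE_SHIFT)

  /* Data padded to this boundary avoids false sharing between nodes;
   * halving the boundary halves the padding overhead. */
  struct demo {
          int counter;
  } __attribute__((aligned(INTERNODE_CACHE_BYTES)));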

This patch saves memory in the kernel data sections, since padded
objects now start on 64-byte rather than 128-byte boundaries, and it
also improves the cache locality of kernel data.

The System.map output differs with and without this change:

	before patch			after patch
  ...
  000000000000b000 d tlb_vector_|  000000000000b000 d tlb_vector
  000000000000b080 d cpu_loops_p|  000000000000b040 d cpu_loops_
  ...
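
The second symbol moves from offset 0xb080 to 0xb040, a 0x40
(64-byte) saving that matches the halved alignment. A stand-alone
user-space demonstration of the same effect (purely illustrative,
not kernel code):

  #include <stdio.h>

  /* The same payload padded to the old 128-byte and new 64-byte
   * internode boundaries. */
  struct old_pad { int v; } __attribute__((aligned(128)));
  struct new_pad { int v; } __attribute__((aligned(64)));

  int main(void)
  {
          /* Prints 128 and 64 with gcc on x86-64: each padded object
           * shrinks by 64 bytes - the 0xb080 -> 0xb040 delta above. */
          printf("old: %zu bytes\n", sizeof(struct old_pad));
          printf("new: %zu bytes\n", sizeof(struct new_pad));
          return 0;
  }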

Signed-off-by: Alex Shi <alex.shi@intel.com>
Cc: asit.k.mallick@intel.com
Link: http://lkml.kernel.org/r/1330774047-18597-1-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Alex Shi authored and Ingo Molnar committed Mar 5, 2012
1 parent e24b90b commit 901b044
 arch/x86/Kconfig.cpu | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -303,7 +303,6 @@ config X86_GENERIC
 config X86_INTERNODE_CACHE_SHIFT
 	int
 	default "12" if X86_VSMP
-	default "7" if NUMA
 	default X86_L1_CACHE_SHIFT
 
 config X86_CMPXCHG
