[PATCH] cpuset: memory_spread_slab drop useless PF_SPREAD_PAGE check
The hook in the slab cache allocation path to handle cpuset memory
spreading for tasks in cpusets with 'memory_spread_slab' enabled has a
modest performance bug.  The hook calls into the memory spreading handler
alternate_node_alloc() if either of 'memory_spread_slab' or
'memory_spread_page' is enabled, even though the handler does nothing
(albeit harmlessly) for the page case.
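
For context, a minimal sketch of the handler's logic, paraphrased from the
2.6.16-era mm/slab.c (an approximation for illustration, not the verbatim
source):

static void *alternate_node_alloc(struct kmem_cache *cachep, gfp_t flags)
{
	int nid_alloc, nid_here;

	if (in_interrupt())
		return NULL;
	nid_alloc = nid_here = numa_node_id();
	/* PF_SPREAD_SLAB case: spread slab pages across the cpuset */
	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
		nid_alloc = cpuset_mem_spread_node();
	/* PF_MEMPOLICY case: follow the task's NUMA mempolicy */
	else if (current->mempolicy)
		nid_alloc = slab_node(current->mempolicy);
	if (nid_alloc != nid_here)
		return __cache_alloc_node(cachep, flags, nid_alloc);
	/*
	 * A task with only PF_SPREAD_PAGE set matches neither branch and
	 * falls through to NULL -- the call was pure overhead for it.
	 */
	return NULL;
}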

Fix - drop PF_SPREAD_PAGE from the set of flag bits that are used to
trigger a call to alternate_node_alloc().

The page case is handled by separate hooks -- see the calls conditioned on
cpuset_do_page_mem_spread() in mm/filemap.c.
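
For reference, the page-side predicate tests its own flag bit. A minimal
sketch along the lines of the 2.6.16-era include/linux/cpuset.h (details
assumed, for illustration):

static inline int cpuset_do_page_mem_spread(void)
{
	return current->flags & PF_SPREAD_PAGE;
}

So each spreading mode keys off exactly one PF_* bit, and the slab hook has
no reason to test PF_SPREAD_PAGE.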

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Paul Jackson authored and Linus Torvalds committed Mar 24, 2006
1 parent 151a442 commit b245539
 mm/slab.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
@@ -2809,8 +2809,7 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 	struct array_cache *ac;
 
 #ifdef CONFIG_NUMA
-	if (unlikely(current->flags & (PF_SPREAD_PAGE | PF_SPREAD_SLAB |
-					PF_MEMPOLICY))) {
+	if (unlikely(current->flags & (PF_SPREAD_SLAB | PF_MEMPOLICY))) {
 		objp = alternate_node_alloc(cachep, flags);
 		if (objp != NULL)
 			return objp;
@@ -2849,7 +2848,7 @@ static __always_inline void *__cache_alloc(struct kmem_cache *cachep,
 
 #ifdef CONFIG_NUMA
 	/*
-	 * Try allocating on another node if PF_SPREAD_PAGE|PF_SPREAD_SLAB|PF_MEMPOLICY.
+	 * Try allocating on another node if PF_SPREAD_SLAB|PF_MEMPOLICY.
 	 *
 	 * If we are in_interrupt, then process context, including cpusets and
 	 * mempolicy, may not apply and should not be used for allocation policy.
