sched/numa: Avoid selecting oneself as swap target
Because the whole NUMA task selection stuff runs with preemption
enabled (it's long and expensive) we can end up getting migrated around
and selecting ourselves as a swap target. This doesn't really work out
well -- we end up trying to acquire the same lock twice for the swap
migrate -- so avoid this.

Reported-and-Tested-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20141110100328.GF29390@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra authored and Ingo Molnar committed Nov 16, 2014
1 parent c123588 commit 7af6833
7 changes: 7 additions & 0 deletions kernel/sched/fair.c
@@ -1179,6 +1179,13 @@ static void task_numa_compare(struct task_numa_env *env,
                 cur = NULL;
         raw_spin_unlock_irq(&dst_rq->lock);
 
+        /*
+         * Because we have preemption enabled we can get migrated around and
+         * end up selecting ourselves (current == env->p) as a swap candidate.
+         */
+        if (cur == env->p)
+                goto unlock;
+
         /*
          * "imp" is the fault differential for the source task between the
          * source and destination node. Calculate the total differential for
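
For readers outside the kernel tree, the following standalone sketch (plain C with pthreads, not kernel code; the toy_rq/toy_task types and the swap_tasks() helper are hypothetical stand-ins) illustrates the failure mode the added check guards against: a swap that locks the run queue of each participating task ends up taking the same non-recursive lock twice when a task is chosen as its own swap target.

/*
 * Simplified, self-contained illustration (NOT kernel code): a "swap"
 * that locks the run queue of each participating task tries to take
 * the same lock twice when both tasks are the same, which hangs on a
 * non-recursive lock.  The a == b guard mirrors the cur == env->p
 * check added above.  Lock ordering for distinct run queues is
 * ignored in this toy.
 */
#include <pthread.h>
#include <stdio.h>

struct toy_rq {                    /* hypothetical stand-in for struct rq */
        pthread_mutex_t lock;
};

struct toy_task {                  /* hypothetical stand-in for task_struct */
        struct toy_rq *rq;
        const char *name;
};

static void swap_tasks(struct toy_task *a, struct toy_task *b)
{
        if (a == b)                /* the equivalent of the added check */
                return;

        pthread_mutex_lock(&a->rq->lock);
        pthread_mutex_lock(&b->rq->lock);  /* same mutex twice if a == b */

        printf("swapping %s and %s\n", a->name, b->name);

        pthread_mutex_unlock(&b->rq->lock);
        pthread_mutex_unlock(&a->rq->lock);
}

int main(void)
{
        struct toy_rq rq = { .lock = PTHREAD_MUTEX_INITIALIZER };
        struct toy_task p = { .rq = &rq, .name = "p" };

        swap_tasks(&p, &p);        /* without the guard, this would deadlock */
        return 0;
}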
