sched/fair: Spread out tasks evenly when not overloaded
When there is only one CPU per group, using the idle CPUs to evenly spread
tasks doesn't make sense, and nr_running is a better metric.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-8-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Vincent Guittot authored and Ingo Molnar committed Oct 21, 2019
1 parent b0fb1eb commit 2ab4092
1 changed file: kernel/sched/fair.c (28 additions, 12 deletions)
@@ -8591,18 +8591,34 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
-	if (busiest->group_type != group_overloaded &&
-	    (env->idle == CPU_NOT_IDLE ||
-	     local->idle_cpus <= (busiest->idle_cpus + 1)))
-		/*
-		 * If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt. idle CPUs, it is balanced. The imbalance
-		 * becomes significant if the diff is greater than 1 otherwise
-		 * we might end up just moving the imbalance to another
-		 * group.
-		 */
-		goto out_balanced;
+	if (busiest->group_type != group_overloaded) {
+		if (env->idle == CPU_NOT_IDLE)
+			/*
+			 * If the busiest group is not overloaded (and as a
+			 * result the local one too) but this CPU is already
+			 * busy, let another idle CPU try to pull task.
+			 */
+			goto out_balanced;
+
+		if (busiest->group_weight > 1 &&
+		    local->idle_cpus <= (busiest->idle_cpus + 1))
+			/*
+			 * If the busiest group is not overloaded
+			 * and there is no imbalance between this and busiest
+			 * group wrt idle CPUs, it is balanced. The imbalance
+			 * becomes significant if the diff is greater than 1
+			 * otherwise we might end up to just move the imbalance
+			 * on another group. Of course this applies only if
+			 * there is more than 1 CPU per group.
+			 */
+			goto out_balanced;
+
+		if (busiest->sum_h_nr_running == 1)
+			/*
+			 * busiest doesn't have any tasks waiting to run
+			 */
+			goto out_balanced;
+	}
 
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
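The decision order in the new hunk can be illustrated with a minimal user-space sketch. This is not kernel code: struct group_stats and out_balanced() below are hypothetical stand-ins for the kernel's sg_lb_stats and the out_balanced label, with the group_type enum collapsed to an overloaded flag. It shows why the group_weight > 1 guard matters: with one CPU per group, idle_cpus is always 0 or 1, so the idle-CPU comparison would almost always report balance even when one CPU queues several tasks while another sits idle.

/*
 * Standalone sketch (not kernel code) of the check order the patch
 * introduces in find_busiest_group(). The fields are simplified
 * stand-ins for the kernel's sg_lb_stats.
 */
#include <stdbool.h>
#include <stdio.h>

struct group_stats {
	bool overloaded;		/* group_type == group_overloaded */
	unsigned int group_weight;	/* number of CPUs in the group */
	unsigned int idle_cpus;
	unsigned int nr_running;	/* sum_h_nr_running */
};

/* Return true when balancing should be skipped (the out_balanced path). */
static bool out_balanced(const struct group_stats *local,
			 const struct group_stats *busiest,
			 bool this_cpu_idle)
{
	if (busiest->overloaded)
		return false;	/* overloaded: always compute an imbalance */

	/* A busy CPU leaves the pull to an idle CPU. */
	if (!this_cpu_idle)
		return true;

	/*
	 * Compare idle CPUs only when groups have more than one CPU;
	 * with one CPU per group the diff can never exceed 1, so
	 * nr_running has to decide instead.
	 */
	if (busiest->group_weight > 1 &&
	    local->idle_cpus <= busiest->idle_cpus + 1)
		return true;

	/* A lone running task cannot be migrated while it runs. */
	if (busiest->nr_running == 1)
		return true;

	return false;
}

int main(void)
{
	/* One CPU per group: local is idle, busiest queues two tasks. */
	struct group_stats local = {
		.overloaded = false, .group_weight = 1,
		.idle_cpus = 1, .nr_running = 0,
	};
	struct group_stats busiest = {
		.overloaded = false, .group_weight = 1,
		.idle_cpus = 0, .nr_running = 2,
	};

	printf("skip balancing: %s\n",
	       out_balanced(&local, &busiest, true) ? "yes" : "no");
	return 0;
}

In this example the pre-patch condition local->idle_cpus <= busiest->idle_cpus + 1 holds (1 <= 1) and would have skipped balancing; the patched order skips the idle-CPU comparison for single-CPU groups, falls through to the nr_running check, and lets the idle CPU pull one of the two queued tasks.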
