sched/fair: Optimize find_busiest_queue()
Use for_each_cpu_and() and thereby avoid computing the capacity for
CPUs we know we're not interested in.

Reviewed-by: Paul Turner <pjt@google.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-lppceyv6kb3a19g8spmrn20b@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra authored and Ingo Molnar committed Sep 2, 2013
1 parent 3ae11c9 commit 6906a40
1 changed file: kernel/sched/fair.c (1 addition, 4 deletions)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4946,7 +4946,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 	unsigned long busiest_load = 0, busiest_power = 1;
 	int i;
 
-	for_each_cpu(i, sched_group_cpus(group)) {
+	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
 		unsigned long power = power_of(i);
 		unsigned long capacity = DIV_ROUND_CLOSEST(power,
 							   SCHED_POWER_SCALE);
@@ -4955,9 +4955,6 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		if (!capacity)
 			capacity = fix_small_capacity(env->sd, group);
 
-		if (!cpumask_test_cpu(i, env->cpus))
-			continue;
-
 		rq = cpu_rq(i);
 		wl = weighted_cpuload(i);
 
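
For illustration only (not part of the commit), here is a minimal userspace C sketch of the pattern change. Plain unsigned long bitmasks stand in for struct cpumask, and compute_capacity(), group_mask, and allowed_mask are hypothetical stand-ins for the power/capacity computation, the group's CPU mask, and env->cpus. Filtering in the iterator means the per-CPU work never runs for CPUs outside the allowed mask, whereas the old loop computed it first and only then skipped the CPU.

/*
 * Illustrative userspace sketch, not kernel code: bitmasks stand in for
 * struct cpumask, compute_capacity() for the power/capacity computation
 * done per CPU in find_busiest_queue().
 */
#include <stdio.h>

#define NR_CPUS 8

static int compute_capacity(int cpu)
{
	printf("computing capacity for CPU %d\n", cpu);
	return 1024;
}

int main(void)
{
	unsigned long group_mask   = 0xf0;	/* group spans CPUs 4-7 */
	unsigned long allowed_mask = 0x30;	/* "env->cpus" allows only CPUs 4-5 */
	int cpu;

	/* Old pattern: walk the group, do the work, then skip disallowed CPUs. */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(group_mask & (1UL << cpu)))
			continue;
		compute_capacity(cpu);		/* wasted effort for CPUs 6 and 7 */
		if (!(allowed_mask & (1UL << cpu)))
			continue;
		/* ... balance decision for this CPU ... */
	}

	/* New pattern: walk the intersection, so the work runs only for CPUs 4-5. */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!(group_mask & allowed_mask & (1UL << cpu)))
			continue;
		compute_capacity(cpu);
		/* ... balance decision for this CPU ... */
	}

	return 0;
}

In the kernel, for_each_cpu_and() performs that intersection as part of the iteration itself, which is exactly what the hunk above switches to.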
