sched/fair: Defer calculation of 'prev_eff_load' in wake_affine_weight() until needed

On sync wakeups, the previous CPU's effective load may not be used, so delay
the calculation until it's needed.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Giovanni Gherdovich <ggherdovich@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180213133730.24064-3-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Mel Gorman authored and Ingo Molnar committed Feb 21, 2018
1 parent 7ebb66a commit eeb6039
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5724,7 +5724,6 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	unsigned long task_load;
 
 	this_eff_load = target_load(this_cpu, sd->wake_idx);
-	prev_eff_load = source_load(prev_cpu, sd->wake_idx);
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5742,6 +5741,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
+	prev_eff_load = source_load(prev_cpu, sd->wake_idx);
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
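
For context, the sketch below shows the rough shape of wake_affine_weight() after this patch. It is an abridged reconstruction based on the diff hunks and the surrounding kernel/sched/fair.c code of that era, not the verbatim function: the early-return condition inside the sync branch and the final comparison are paraphrased. The point it illustrates is that on a sync wakeup the waker's load can settle the decision before prev_eff_load is ever consulted, so the source_load() lookup is only paid on the path that actually compares the two loads.

	/*
	 * Abridged sketch of wake_affine_weight() after this patch.
	 * Not the verbatim kernel function; some details are simplified.
	 */
	static int
	wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
			   int this_cpu, int prev_cpu, int sync)
	{
		s64 this_eff_load, prev_eff_load;
		unsigned long task_load;

		this_eff_load = target_load(this_cpu, sd->wake_idx);

		if (sync) {
			unsigned long current_load = task_h_load(current);

			/*
			 * The waker's load already dominates: stay on this_cpu.
			 * prev_eff_load is never computed on this path.
			 */
			if (current_load > this_eff_load)
				return this_cpu;

			this_eff_load -= current_load;
		}

		task_load = task_h_load(p);

		this_eff_load += task_load;
		if (sched_feat(WA_BIAS))
			this_eff_load *= 100;
		this_eff_load *= capacity_of(prev_cpu);

		/* Deferred: only read prev_cpu's load once a comparison is needed. */
		prev_eff_load = source_load(prev_cpu, sd->wake_idx);
		prev_eff_load -= task_load;
		if (sched_feat(WA_BIAS))
			prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
		prev_eff_load *= capacity_of(this_cpu);

		return this_eff_load <= prev_eff_load ? this_cpu : nr_cpumask_bits;
	}

Deferring the read keeps the sync fast path to a single target_load() call, which is the point of the change.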
