Commit

---
r: 158377
b: refs/heads/master
c: 71a29aa
h: refs/heads/master
i:
  158375: 10fb5bc
v: v3
Peter Zijlstra authored and Ingo Molnar committed Sep 7, 2009
1 parent 85e0e52 commit a45ad0c
Showing 2 changed files with 12 additions and 2 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: cdd2ab3de4301728b20efd6225681d3ff591a938
+refs/heads/master: 71a29aa7b600595d0ef373ea605ac656876d1f2f
12 changes: 11 additions & 1 deletion trunk/kernel/sched_fair.c
@@ -1262,7 +1262,17 @@ wake_affine(struct sched_domain *this_sd, struct rq *this_rq,
 	tg = task_group(p);
 	weight = p->se.load.weight;
 
-	balanced = 100*(tl + effective_load(tg, this_cpu, weight, weight)) <=
+	/*
+	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
+	 * due to the sync cause above having dropped tl to 0, we'll always have
+	 * an imbalance, but there's really nothing you can do about that, so
+	 * that's good too.
+	 *
+	 * Otherwise check if either cpus are near enough in load to allow this
+	 * task to be woken on this_cpu.
+	 */
+	balanced = !tl ||
+		100*(tl + effective_load(tg, this_cpu, weight, weight)) <=
 			imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
 
 	/*
