sched: kill task_group balancing
The idea was to balance groups until we've reached the global goal;
however, Vatsa rightly pointed out that we might never reach that goal
this way - hence take out this logic.

[ the initial rationale for this 'feature' was to promote max concurrency
  within a group - it does not, however, affect fairness ]

Reported-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Srivatsa Vaddagiri authored and Ingo Molnar committed Jun 27, 2008
1 parent 4d8d595 commit 53fecd8
15 changes: 2 additions & 13 deletions kernel/sched_fair.c
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1422,9 +1422,7 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
 
 	rcu_read_lock();
 	list_for_each_entry(tg, &task_groups, list) {
-		long imbalance;
-		unsigned long this_weight, busiest_weight;
-		long rem_load, max_load, moved_load;
+		long rem_load, moved_load;
 
 		/*
 		 * empty group
@@ -1435,17 +1433,8 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
 		rem_load = rem_load_move * aggregate(tg, this_cpu)->rq_weight;
 		rem_load /= aggregate(tg, this_cpu)->load + 1;
 
-		this_weight = tg->cfs_rq[this_cpu]->task_weight;
-		busiest_weight = tg->cfs_rq[busiest_cpu]->task_weight;
-
-		imbalance = (busiest_weight - this_weight) / 2;
-
-		if (imbalance < 0)
-			imbalance = busiest_weight;
-
-		max_load = max(rem_load, imbalance);
 		moved_load = __load_balance_fair(this_rq, this_cpu, busiest,
-				max_load, sd, idle, all_pinned, this_best_prio,
+				rem_load, sd, idle, all_pinned, this_best_prio,
 				tg->cfs_rq[busiest_cpu]);
 
 		if (!moved_load)
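For context, here is a small standalone C sketch of the per-group move
target computed in load_balance_fair()'s task_group loop before and after
this patch. It is not kernel code: the helper names old_target()/new_target()
and the weight/load numbers are made up for illustration. The old path raised
the target to whichever was larger of rem_load and imbalance (half the
task-weight gap between the busiest CPU and this CPU within the group); the
new path passes the group's share of the remaining load straight through.

/*
 * Standalone illustration (not kernel code) of the per-group move
 * target computed in load_balance_fair()'s task_group loop, before
 * and after this patch.  Helper names and numbers are made up.
 */
#include <stdio.h>

/* Old behaviour: the target could be raised above the group's share of
 * the remaining load, to promote concurrency within the group. */
static long old_target(long rem_load, unsigned long this_weight,
		       unsigned long busiest_weight)
{
	long imbalance = ((long)busiest_weight - (long)this_weight) / 2;

	if (imbalance < 0)
		imbalance = busiest_weight;

	return rem_load > imbalance ? rem_load : imbalance; /* max() */
}

/* New behaviour: just use the group's share of the remaining load. */
static long new_target(long rem_load)
{
	return rem_load;
}

int main(void)
{
	long rem_load = 128;              /* hypothetical per-group share */
	unsigned long this_weight = 1024; /* hypothetical task weights    */
	unsigned long busiest_weight = 3072;

	printf("old per-group target: %ld\n",
	       old_target(rem_load, this_weight, busiest_weight)); /* 1024 */
	printf("new per-group target: %ld\n",
	       new_target(rem_load));                              /* 128  */
	return 0;
}

With these made-up numbers the old target comes out at 1024 against a
remaining per-group load of only 128, showing how a single group's target
could exceed its share of the remaining imbalance - the behaviour the
changelog removes.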
