sched: fix MC/HT scheduler optimization, without breaking the FUZZ logic.

First fix the check
	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)
with this
	if (*imbalance < busiest_load_per_task)

The current check is always false for nice-0 tasks, because
SCHED_LOAD_SCALE_FUZZ is the same as busiest_load_per_task for a
nice-0 task.
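
To make the arithmetic concrete, here is a minimal standalone sketch,
not kernel code: it assumes SCHED_LOAD_SCALE is 1024 and
SCHED_LOAD_SCALE_FUZZ is defined as SCHED_LOAD_SCALE, matching the
statement above that the fuzz equals the nice-0 per-task load.

/* Minimal sketch (userspace, not kernel code) of why the old check
 * never fires for nice-0 tasks.  Assumes SCHED_LOAD_SCALE == 1024 and
 * SCHED_LOAD_SCALE_FUZZ == SCHED_LOAD_SCALE, per the reasoning above. */
#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE

int main(void)
{
	/* A nice-0 task contributes SCHED_LOAD_SCALE worth of load. */
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long imbalance;

	for (imbalance = 0; imbalance <= SCHED_LOAD_SCALE; imbalance += 256) {
		/* Old check: needs imbalance + 1024 < 1024 -- never true. */
		int old_check = imbalance + SCHED_LOAD_SCALE_FUZZ
				< busiest_load_per_task;
		/* New check: true while imbalance is below one task's load. */
		int new_check = imbalance < busiest_load_per_task;

		printf("imbalance=%4lu old=%d new=%d\n",
		       imbalance, old_check, new_check);
	}
	return 0;
}

Running this prints old=0 for every value of imbalance, while new=1
until the imbalance reaches a full task's load.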

With the above change, *imbalance was getting reset to 0 in the corner
case (the goto out_balanced path zeroes it), making the FUZZ logic
fail. Fix this by not clobbering *imbalance: modify it only when the
code finds that the HT/MC optimization is needed.
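
For illustration, here is a standalone sketch of the control-flow
change; old_tail() and new_tail() are hypothetical stand-ins for the
tail of find_busiest_group(), where the out_balanced label zeroes
*imbalance and reports no busiest group.

/* Standalone sketch (hypothetical helpers, not the kernel functions)
 * of the corner case: the old goto out_balanced path zeroed *imbalance,
 * while the fixed code leaves the computed imbalance intact. */
#include <stdio.h>

/* Old behaviour: no throughput gain takes the out_balanced path. */
static int old_tail(unsigned long pwr_move, unsigned long pwr_now,
		    unsigned long load_per_task, unsigned long *imbalance)
{
	if (pwr_move <= pwr_now) {
		*imbalance = 0;		/* out_balanced: clobbers imbalance */
		return 0;		/* reports "no busiest group" */
	}
	*imbalance = load_per_task;
	return 1;
}

/* New behaviour: only write *imbalance when the move gains throughput. */
static int new_tail(unsigned long pwr_move, unsigned long pwr_now,
		    unsigned long load_per_task, unsigned long *imbalance)
{
	if (pwr_move > pwr_now)
		*imbalance = load_per_task;
	return 1;			/* busiest group is still reported */
}

int main(void)
{
	unsigned long imb_old = 512, imb_new = 512;	/* computed earlier */

	/* No throughput gain: pwr_move == pwr_now. */
	old_tail(100, 100, 1024, &imb_old);
	new_tail(100, 100, 1024, &imb_new);
	printf("old: imbalance=%lu, new: imbalance=%lu\n", imb_old, imb_new);
	return 0;
}

With the old tail the caller sees an imbalance of 0; with the new tail
it keeps the value it already computed.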

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Suresh Siddha authored and Ingo Molnar committed Sep 5, 2007
1 parent b21010e commit 7fd0d2d
 kernel/sched.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
@@ -2512,7 +2512,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
@@ -2564,10 +2564,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;
