[PATCH] sched: less aggressive idle balancing
Remove the special casing for idle CPU balancing.  Special cases like this
hurt on SMT, for example, where a single sibling being idle doesn't really
warrant an aggressive pull across the NUMA domain.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nick Piggin authored and Linus Torvalds committed Jun 25, 2005
1 parent db935db commit 99b61cc
Showing 1 changed file with 0 additions and 6 deletions.
kernel/sched.c (0 additions, 6 deletions)

--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1877,15 +1877,9 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 
 	/* Get rid of the scaling factor, rounding down as we divide */
 	*imbalance = *imbalance / SCHED_LOAD_SCALE;
 
 	return busiest;
 
 out_balanced:
-	if (busiest && (idle == NEWLY_IDLE ||
-			(idle == SCHED_IDLE && max_load > SCHED_LOAD_SCALE)) ) {
-		*imbalance = 1;
-		return busiest;
-	}
-
 	*imbalance = 0;
 	return NULL;
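For readers following the logic, below is a minimal, self-contained C sketch of
what the out_balanced tail of find_busiest_group() decided before and after this
patch.  It is an illustration only, not the kernel code: the standalone program,
the forced_pull_when_balanced() helper, and the enum values are made up for the
example; only the names NEWLY_IDLE, SCHED_IDLE, and SCHED_LOAD_SCALE correspond
to identifiers that appear in the diff above.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative stand-ins; the real definitions live in the 2.6-era
 * kernel sources. */
#define SCHED_LOAD_SCALE 128UL
enum idle_type { NOT_IDLE, NEWLY_IDLE, SCHED_IDLE };

/* Hypothetical helper modelling only the out_balanced tail of
 * find_busiest_group(): returns true if a pull is still forced even
 * though the groups look balanced. */
static bool forced_pull_when_balanced(enum idle_type idle,
				      bool have_busiest,
				      unsigned long max_load,
				      bool patched)
{
	if (patched)
		return false;	/* after the patch: out_balanced never pulls */

	/* Before the patch, an idle CPU forced *imbalance = 1 and pulled
	 * from the busiest group anyway. */
	return have_busiest &&
	       (idle == NEWLY_IDLE ||
		(idle == SCHED_IDLE && max_load > SCHED_LOAD_SCALE));
}

int main(void)
{
	/* One newly idle SMT sibling, groups otherwise balanced: the old
	 * code would still pull, possibly across a NUMA domain. */
	printf("old: pull=%d\n",
	       forced_pull_when_balanced(NEWLY_IDLE, true, SCHED_LOAD_SCALE, false));
	printf("new: pull=%d\n",
	       forced_pull_when_balanced(NEWLY_IDLE, true, SCHED_LOAD_SCALE, true));
	return 0;
}

With the patch applied, an otherwise-balanced system stays put: idle CPUs no
longer manufacture an imbalance of 1 just because they are idle.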
