[PATCH] sched: fix newly idle load balance in case of SMT
In the presence of SMT, newly idle balance was never happening for the
multi-core and SMP domains (even when both logical siblings were idle).

If thread 0 is already idle and thread 1 is about to go idle, the newly
idle load balance always thinks that one of the threads is not idle and
skips doing the newly idle load balance for the multi-core and SMP
domains.

This is because of the idle_cpu() check, which tests whether the task
currently running on a cpu is that cpu's idle task. That is not true
for the thread doing load_balance_newidle(): it is still the current
task on its cpu, even though its runqueue has nothing left to run.

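To make the failure mode concrete, here is a small stand-alone sketch
(a toy model only, not kernel code; the struct and names are invented
for illustration) of how an idle_cpu()-style check and an
rq->nr_running check disagree for the cpu that is entering schedule():

/*
 * Toy model of the newly idle case: the cpu entering schedule() still
 * has the blocking task as ->curr, yet its runqueue is already empty.
 */
#include <stdio.h>

struct task { const char *comm; };

struct rq {
	struct task *curr;		/* task currently on the cpu */
	struct task *idle;		/* this cpu's idle task */
	unsigned long nr_running;	/* runnable tasks queued here */
};

/* what an idle_cpu()-style check effectively asks: is ->curr the idle task? */
static int idle_cpu_like(struct rq *rq)
{
	return rq->curr == rq->idle;
}

int main(void)
{
	struct task idle_task = { "swapper" }, t = { "worker" };
	/* thread 1 is in schedule(): its task has been dequeued
	 * (nr_running == 0) but the switch to the idle task has not
	 * happened yet. */
	struct rq thread1 = { .curr = &t, .idle = &idle_task, .nr_running = 0 };

	printf("idle_cpu-like check: %d\n", idle_cpu_like(&thread1));	/* 0: looks busy */
	printf("nr_running check:    %d\n", thread1.nr_running == 0);	/* 1: really idle */
	return 0;
}
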
Fix this by using the runqueue's nr_running field instead of
idle_cpu(). Also skip the 'only one idle cpu in the group will be doing
load balancing' logic in the newly idle case.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Suresh Siddha authored and Ingo Molnar committed Jul 19, 2007
1 parent c41917d commit 9439aab
Showing 1 changed file with 5 additions and 3 deletions.
 kernel/sched.c | 8 +++++---

--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2235,7 +2235,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 
 			rq = cpu_rq(i);
 
-			if (*sd_idle && !idle_cpu(i))
+			if (*sd_idle && rq->nr_running)
 				*sd_idle = 0;
 
 			/* Bias balancing toward cpus of our domain */
@@ -2257,9 +2257,11 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		/*
 		 * First idle cpu or the first cpu(busiest) in this sched group
 		 * is eligible for doing load balancing at this and above
-		 * domains.
+		 * domains. In the newly idle case, we will allow all the cpu's
+		 * to do the newly idle load balance.
 		 */
-		if (local_group && balance_cpu != this_cpu && balance) {
+		if (idle != CPU_NEWLY_IDLE && local_group &&
+					balance_cpu != this_cpu && balance) {
 			*balance = 0;
 			goto ret;
 		}
