sched: Clean up domain traversal in select_idle_sibling()
Instead of going through the scheduler domain hierarchy multiple times
(to give priority to an idle core over an idle SMT sibling in a busy
core), start with the highest scheduler domain that has the
SD_SHARE_PKG_RESOURCES flag set and traverse the domain hierarchy
downwards until we find an idle group.

This cleanup also addresses an issue reported by Mike where the recent
changes returned a busy thread even in the presence of an idle SMT
sibling on single-socket platforms.
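
In outline, the new lookup in select_idle_sibling() reduces to the
following (a condensed sketch of the control flow, not a verbatim copy
of the diff below):

	sd = highest_flag_domain(target, SD_SHARE_PKG_RESOURCES);
	for_each_lower_domain(sd) {
		/*
		 * Scan sd->groups for a group whose CPUs are all idle; at the
		 * higher levels each group is a whole core, so an idle core is
		 * found before an idle SMT sibling inside a busy core.
		 */
	}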

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1321556904.15339.25.camel@sbsiddha-desk.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Suresh Siddha authored and Ingo Molnar committed Dec 6, 2011
1 parent b781a60 commit 77e8136
Showing 2 changed files with 27 additions and 13 deletions.
kernel/sched/fair.c: 38 changes (25 additions, 13 deletions)
@@ -2644,6 +2644,28 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	return idlest;
 }
 
+/**
+ * highest_flag_domain - Return highest sched_domain containing flag.
+ * @cpu:	The cpu whose highest level of sched domain is to
+ *		be returned.
+ * @flag:	The flag to check for the highest sched_domain
+ *		for the given cpu.
+ *
+ * Returns the highest sched_domain of a cpu which contains the given flag.
+ */
+static inline struct sched_domain *highest_flag_domain(int cpu, int flag)
+{
+	struct sched_domain *sd, *hsd = NULL;
+
+	for_each_domain(cpu, sd) {
+		if (!(sd->flags & flag))
+			break;
+		hsd = sd;
+	}
+
+	return hsd;
+}
+
 /*
  * Try and locate an idle CPU in the sched_domain.
  */
@@ -2653,7 +2675,7 @@ static int select_idle_sibling(struct task_struct *p, int target)
 	int prev_cpu = task_cpu(p);
 	struct sched_domain *sd;
 	struct sched_group *sg;
-	int i, smt = 0;
+	int i;
 
 	/*
 	 * If the task is going to be woken-up on this cpu and if it is
@@ -2673,19 +2695,9 @@ static int select_idle_sibling(struct task_struct *p, int target)
 	 * Otherwise, iterate the domains and find an elegible idle cpu.
 	 */
 	rcu_read_lock();
-again:
-	for_each_domain(target, sd) {
-		if (!smt && (sd->flags & SD_SHARE_CPUPOWER))
-			continue;
-
-		if (!(sd->flags & SD_SHARE_PKG_RESOURCES)) {
-			if (!smt) {
-				smt = 1;
-				goto again;
-			}
-			break;
-		}
 
+	sd = highest_flag_domain(target, SD_SHARE_PKG_RESOURCES);
+	for_each_lower_domain(sd) {
 		sg = sd->groups;
 		do {
 			if (!cpumask_intersects(sched_group_cpus(sg),
kernel/sched/sched.h: 2 changes (2 additions, 0 deletions)
@@ -501,6 +501,8 @@ DECLARE_PER_CPU(struct rq, runqueues);
 #define for_each_domain(cpu, __sd) \
 	for (__sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd); __sd; __sd = __sd->parent)
 
+#define for_each_lower_domain(sd) for (; sd; sd = sd->child)
+
 #define cpu_rq(cpu)		(&per_cpu(runqueues, (cpu)))
 #define this_rq()		(&__get_cpu_var(runqueues))
 #define task_rq(p)		cpu_rq(task_cpu(p))
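
The two traversal macros walk the sched_domain hierarchy in opposite
directions: for_each_domain() follows ->parent upward from the cpu's base
domain, while the new for_each_lower_domain() follows ->child downward from
whatever domain it is handed; in this patch that is the topmost
SD_SHARE_PKG_RESOURCES domain returned by highest_flag_domain(). A minimal,
hypothetical illustration of the pairing (not part of the patch, and it would
need to run under rcu_read_lock() like the code above):

	struct sched_domain *sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);

	/* Visit the package-resources domain and every level below it. */
	for_each_lower_domain(sd)
		pr_debug("domain level %d spans %u cpus\n",
			 sd->level, cpumask_weight(sched_domain_span(sd)));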
