sched: Clean up some f_b_g() comments
The existing comment tends to grow state (as it already has); split it
up and place it near the actual tests.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Nikhil Rao <ncrao@google.com>
Cc: Venkatesh Pallipadi <venki@google.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra authored and Ingo Molnar committed Feb 23, 2011
1 parent c186faf commit cc57aa8
Showing 1 changed file with 13 additions and 15 deletions.

kernel/sched_fair.c
@@ -3113,19 +3113,9 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	 */
 	update_sd_lb_stats(sd, this_cpu, idle, cpus, balance, &sds);
 
-	/* Cases where imbalance does not exist from POV of this_cpu */
-	/* 1) this_cpu is not the appropriate cpu to perform load balancing
-	 *    at this level.
-	 * 2) There is no busy sibling group to pull from.
-	 * 3) This group is the busiest group.
-	 * 4) This group is more busy than the avg busieness at this
-	 *    sched_domain.
-	 * 5) The imbalance is within the specified limit.
-	 *
-	 * Note: when doing newidle balance, if the local group has excess
-	 * capacity (i.e. nr_running < group_capacity) and the busiest group
-	 * does not have any capacity, we force a load balance to pull tasks
-	 * to the local group. In this case, we skip past checks 3, 4 and 5.
+	/*
+	 * this_cpu is not the appropriate cpu to perform load balancing at
+	 * this level.
 	 */
 	if (!(*balance))
 		goto ret;
@@ -3134,19 +3124,27 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	    check_asym_packing(sd, &sds, this_cpu, imbalance))
 		return sds.busiest;
 
+	/* There is no busy sibling group to pull tasks from */
 	if (!sds.busiest || sds.busiest_nr_running == 0)
 		goto out_balanced;
 
-	/*  SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
+	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
 	if (idle == CPU_NEWLY_IDLE && sds.this_has_capacity &&
 	    !sds.busiest_has_capacity)
 		goto force_balance;
 
+	/*
+	 * If the local group is more busy than the selected busiest group
+	 * don't try and pull any tasks.
+	 */
 	if (sds.this_load >= sds.max_load)
 		goto out_balanced;
 
+	/*
+	 * Don't pull any tasks if this group is already above the domain
+	 * average load.
+	 */
 	sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
 
 	if (sds.this_load >= sds.avg_load)
 		goto out_balanced;
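For readers outside the kernel tree, the effect of the change is easier to see in isolation. Below is a minimal, self-contained C sketch (not the kernel implementation) of the post-commit decision ladder in find_busiest_group(): each early-out test now carries its own comment. The field names, comments, and check order mirror the diff above; the standalone struct, the should_balance() wrapper, the have_busiest flag, and the toy values in main() are illustrative assumptions.

/*
 * Toy model of the find_busiest_group() early-out ladder after this
 * commit. Field names follow the diff; everything else is simplified.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL	/* the kernel's 2^10 load scale */

struct sd_lb_stats {
	unsigned long total_load;	/* sum of load across the domain */
	unsigned long total_pwr;	/* sum of cpu_power across the domain */
	unsigned long this_load;	/* load of the local group */
	unsigned long max_load;		/* load of the busiest group */
	unsigned long avg_load;		/* computed domain average */
	unsigned int busiest_nr_running;
	int this_has_capacity;
	int busiest_has_capacity;
	int have_busiest;		/* stand-in for sds.busiest != NULL */
};

/* Returns 1 when the toy ladder decides to balance, 0 when "balanced". */
static int should_balance(struct sd_lb_stats *sds, int newly_idle)
{
	/* There is no busy sibling group to pull tasks from */
	if (!sds->have_busiest || sds->busiest_nr_running == 0)
		return 0;

	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
	if (newly_idle && sds->this_has_capacity &&
	    !sds->busiest_has_capacity)
		return 1;

	/*
	 * If the local group is more busy than the selected busiest group
	 * don't try and pull any tasks.
	 */
	if (sds->this_load >= sds->max_load)
		return 0;

	/*
	 * Don't pull any tasks if this group is already above the domain
	 * average load.
	 */
	sds->avg_load = (SCHED_LOAD_SCALE * sds->total_load) / sds->total_pwr;
	if (sds->this_load >= sds->avg_load)
		return 0;

	return 1;	/* an imbalance worth acting on */
}

int main(void)
{
	struct sd_lb_stats sds = {
		.total_load = 3072, .total_pwr = 2048,
		.this_load = 1024, .max_load = 2048,
		.busiest_nr_running = 3, .have_busiest = 1,
		.this_has_capacity = 1, .busiest_has_capacity = 0,
	};

	/* avg_load = 1024 * 3072 / 2048 = 1536; this_load 1024 < 1536 */
	printf("balance? %d\n", should_balance(&sds, 0));	/* prints 1 */
	return 0;
}

Compiled and run, the example prints "balance? 1": the local group sits below both the busiest group's load and the computed domain average, so the ladder falls through to the imbalance case. The simplification preserves the point of the commit: each test reads against its own one-purpose comment, so a future condition gets its own comment next to its code instead of growing a master list at the top.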
