sched/fair: Clean up update_sg_lb_stats() a bit
Add rq->nr_running to sgs->sum_nr_running directly instead of
assigning it through an intermediate variable nr_running.

Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1384508212-25032-1-git-send-email-kamalesh@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kamalesh Babulal authored and Ingo Molnar committed Nov 27, 2013
1 parent c44f2a0 commit 380c907
 kernel/sched/fair.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5500,7 +5500,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 			struct sched_group *group, int load_idx,
 			int local_group, struct sg_lb_stats *sgs)
 {
-	unsigned long nr_running;
 	unsigned long load;
 	int i;
 
@@ -5509,16 +5508,14 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
 		struct rq *rq = cpu_rq(i);
 
-		nr_running = rq->nr_running;
-
 		/* Bias balancing toward cpus of our domain */
 		if (local_group)
 			load = target_load(i, load_idx);
 		else
 			load = source_load(i, load_idx);
 
 		sgs->group_load += load;
-		sgs->sum_nr_running += nr_running;
+		sgs->sum_nr_running += rq->nr_running;
 #ifdef CONFIG_NUMA_BALANCING
 		sgs->nr_numa_running += rq->nr_numa_running;
 		sgs->nr_preferred_running += rq->nr_preferred_running;
