sched/fair: Carve out logic to mark a group for asymmetric packing
Create a separate function, sched_asym(). A subsequent changeset will
introduce logic to deal with SMT in conjunction with asymmetric
packing. Such logic will need the statistics of the scheduling
group provided as argument. Update them before calling sched_asym().

Co-developed-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Reviewed-by: Len Brown <len.brown@intel.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210911011819.12184-6-ricardo.neri-calderon@linux.intel.com
Ricardo Neri authored and Peter Zijlstra committed Oct 5, 2021
1 parent c0d14b5 commit aafc917
1 file changed: kernel/sched/fair.c (13 additions, 7 deletions)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8571,6 +8571,13 @@ group_type group_classify(unsigned int imbalance_pct,
 	return group_has_spare;
 }
 
+static inline bool
+sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
+	   struct sched_group *group)
+{
+	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+}
+
 /**
  * update_sg_lb_stats - Update sched_group's statistics for load balancing.
  * @env: The load balancing environment.
@@ -8631,18 +8638,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		}
 	}
 
+	sgs->group_capacity = group->sgc->capacity;
+
+	sgs->group_weight = group->group_weight;
+
 	/* Check if dst CPU is idle and preferred to this group */
 	if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
-	    env->idle != CPU_NOT_IDLE &&
-	    sgs->sum_h_nr_running &&
-	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+	    env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
+	    sched_asym(env, sds, sgs, group)) {
 		sgs->group_asym_packing = 1;
 	}
 
-	sgs->group_capacity = group->sgc->capacity;
-
-	sgs->group_weight = group->group_weight;
-
 	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
 
 	/* Computing avg_load makes sense only when group is overloaded */
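
As a rough illustration of the pattern the patch sets up, the sketch below is a standalone user-space program, not kernel code: the cpu_priority table, the simplified struct definitions, and the main() harness are assumptions made for the example. It mirrors the carved-out sched_asym(), which today only wraps the dst-CPU-versus-group preference check, while its sds/sgs parameters leave room for the SMT-aware statistics checks the changelog says a later changeset will add.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the scheduler's per-group statistics. */
struct sg_lb_stats { unsigned int sum_h_nr_running; };
struct sd_lb_stats { int unused; };
struct sched_group { int asym_prefer_cpu; };

/* Hypothetical per-CPU priority table (higher value = preferred). */
static const int cpu_priority[] = { 3, 3, 1, 1 };

/* Simplified analogue of sched_asym_prefer(): is CPU a preferred over CPU b? */
static bool sched_asym_prefer(int a, int b)
{
	return cpu_priority[a] > cpu_priority[b];
}

/*
 * Simplified analogue of the carved-out sched_asym(): today it only
 * compares CPU priorities, but taking sds/sgs as arguments lets a later
 * change also consult the group's statistics before packing.
 */
static bool sched_asym(int dst_cpu, struct sd_lb_stats *sds,
		       struct sg_lb_stats *sgs, struct sched_group *group)
{
	(void)sds;
	(void)sgs;	/* unused until the SMT-aware follow-up */
	return sched_asym_prefer(dst_cpu, group->asym_prefer_cpu);
}

int main(void)
{
	struct sched_group grp = { .asym_prefer_cpu = 2 };
	struct sg_lb_stats sgs = { .sum_h_nr_running = 1 };
	struct sd_lb_stats sds = { 0 };

	/* dst CPU 0 has higher priority than the group's preferred CPU 2. */
	printf("pack onto CPU 0? %s\n",
	       sched_asym(0, &sds, &sgs, &grp) ? "yes" : "no");
	return 0;
}

Passing the statistics through the helper now, even though they are unused, is what lets the follow-up change extend the decision without touching the call site in update_sg_lb_stats() again.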
