sched/rt: cpupri_find: Trigger a full search as fallback
If we fail to find a fitting CPU in cpupri_find(), we currently fall back
only to the level at which we found a hit.

But Steve suggested falling back to a second full scan instead, as this
could be a better effort.

	https://lore.kernel.org/lkml/20200304135404.146c56eb@gandalf.local.home/

We trigger the second search unconditionally, since the argument for a
full search is that the recorded fallback level might have become empty
by then, which means any stored data about what happened would be
meaningless and stale.

I had a humble try at timing it, and it seemed okay on the small 6-CPU
system I was running on:

	https://lore.kernel.org/lkml/20200305124324.42x6ehjxbnjkklnh@e107158-lin.cambridge.arm.com/

On large systems this second full scan could be expensive. But there are
no users of this fitness function outside capacity awareness at the
moment, and heterogeneous systems tend to be small, with 8 cores in total.
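The two-pass shape of the search can be sketched as a small user-space
model (the names, the flat per-level counters, and the predicates below
are illustrative only, not the kernel's data structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Toy model of the two-pass search in cpupri_find_fitness():
 *
 * Pass 1: walk the priority levels and accept a level only if the
 * fitness predicate holds for it.
 *
 * Pass 2 (fallback): if nothing fit, redo the full scan with no
 * fitness criteria at all. A level recorded during pass 1 could have
 * been emptied concurrently, so caching it would be stale; a fresh
 * full scan is the only reliable fallback.
 */
static int find_level(const int *nr_cpus_at_level, size_t nlevels,
		      bool (*fits)(int level))
{
	for (size_t i = 0; i < nlevels; i++) {
		if (nr_cpus_at_level[i] == 0)
			continue;		/* nothing runnable here */
		if (fits && !fits((int)i))
			continue;		/* occupied but unfit */
		return (int)i;			/* hit */
	}

	/* Fallback: retry the full scan, ignoring fitness. */
	if (fits)
		return find_level(nr_cpus_at_level, nlevels, NULL);

	return -1;				/* genuinely nothing found */
}

/* Illustrative predicates standing in for the capacity-fitness check. */
static bool always_fits(int level) { (void)level; return true;  }
static bool never_fits(int level)  { (void)level; return false; }
```

Note the fallback passes no state from the first scan: it simply drops
the predicate, mirroring the unconditional second search above.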

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lkml.kernel.org/r/20200310142219.syxzn5ljpdxqtbgx@e107158-lin.cambridge.arm.com
Qais Yousef authored and Peter Zijlstra committed Mar 20, 2020
1 parent 26c7295 commit e94f80f
Showing 1 changed file with 6 additions and 23 deletions.
kernel/sched/cpupri.c (6 additions, 23 deletions):

@@ -122,8 +122,7 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 		bool (*fitness_fn)(struct task_struct *p, int cpu))
 {
 	int task_pri = convert_prio(p->prio);
-	int best_unfit_idx = -1;
-	int idx = 0, cpu;
+	int idx, cpu;
 
 	BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES);
 
@@ -145,31 +144,15 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 		 * If no CPU at the current priority can fit the task
 		 * continue looking
 		 */
-		if (cpumask_empty(lowest_mask)) {
-			/*
-			 * Store our fallback priority in case we
-			 * didn't find a fitting CPU
-			 */
-			if (best_unfit_idx == -1)
-				best_unfit_idx = idx;
-
+		if (cpumask_empty(lowest_mask))
 			continue;
-		}
 
 		return 1;
 	}
 
 	/*
-	 * If we failed to find a fitting lowest_mask, make sure we fall back
-	 * to the last known unfitting lowest_mask.
-	 *
-	 * Note that the map of the recorded idx might have changed since then,
-	 * so we must ensure to do the full dance to make sure that level still
-	 * holds a valid lowest_mask.
-	 *
-	 * As per above, the map could have been concurrently emptied while we
-	 * were busy searching for a fitting lowest_mask at the other priority
-	 * levels.
+	 * If we failed to find a fitting lowest_mask, kick off a new search
+	 * but without taking into account any fitness criteria this time.
 	 *
 	 * This rule favours honouring priority over fitting the task in the
 	 * correct CPU (Capacity Awareness being the only user now).
@@ -184,8 +167,8 @@ int cpupri_find_fitness(struct cpupri *cp, struct task_struct *p,
 	 * must do proper RT planning to avoid overloading the system if they
 	 * really care.
 	 */
-	if (best_unfit_idx != -1)
-		return __cpupri_find(cp, p, lowest_mask, best_unfit_idx);
+	if (fitness_fn)
+		return cpupri_find(cp, p, lowest_mask);
 
 	return 0;
 }
