sched: refine negative nice level granularity
refine the granularity of negative nice level tasks: let them
reschedule more often to offset the effect of them consuming
their wait_runtime proportionately slower. (This makes nice-0
task scheduling smoother in the presence of negatively
reniced tasks.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar committed Aug 9, 2007
1 parent a69edb5 commit 7cff8cf
16 changes: 10 additions & 6 deletions kernel/sched_fair.c
@@ -222,21 +222,25 @@ niced_granularity(struct sched_entity *curr, unsigned long granularity)
 {
 	u64 tmp;
 
+	if (likely(curr->load.weight == NICE_0_LOAD))
+		return granularity;
 	/*
-	 * Negative nice levels get the same granularity as nice-0:
+	 * Positive nice levels get the same granularity as nice-0:
 	 */
-	if (likely(curr->load.weight >= NICE_0_LOAD))
-		return granularity;
+	if (likely(curr->load.weight < NICE_0_LOAD)) {
+		tmp = curr->load.weight * (u64)granularity;
+		return (long) (tmp >> NICE_0_SHIFT);
+	}
 	/*
-	 * Positive nice level tasks get linearly finer
+	 * Negative nice level tasks get linearly finer
 	 * granularity:
 	 */
-	tmp = curr->load.weight * (u64)granularity;
+	tmp = curr->load.inv_weight * (u64)granularity;
 
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> NICE_0_SHIFT);
+	return (long) (tmp >> WMULT_SHIFT);
 }
 
 static inline void
