sched: Set skip_clock_update in yield_task_fair()
This is another case where we are on our way to schedule(),
so can save a useless clock update and resulting microscopic
vruntime update.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1321971686.6855.18.camel@marge.simson.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
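
For context on why setting the flag helps: sys_sched_yield() calls schedule() right after yield_task_fair(), and that schedule() path would otherwise run update_rq_clock() again only microseconds after the update already done in yield_task_fair(), producing the "microscopic vruntime update" mentioned above. A rough sketch of the consumer side, reconstructed from the scheduler code of this era (not part of this commit; details may differ slightly from the exact tree):

void update_rq_clock(struct rq *rq)
{
	s64 delta;

	/*
	 * Bail out while skip_clock_update is set: yield_task_fair()
	 * (or another on-the-way-to-schedule() path) refreshed the
	 * clock only moments ago, so redoing it buys nothing.
	 */
	if (rq->skip_clock_update > 0)
		return;

	delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
	rq->clock += delta;
	update_rq_clock_task(rq, delta);
}

/*
 * On the schedule() path the still-queued yielding task reaches
 * put_prev_task(), which is where the now-skipped clock update would
 * have happened; __schedule() clears the flag again after picking the
 * next task, so later updates run for real.
 */
static void put_prev_task(struct rq *rq, struct task_struct *prev)
{
	if (prev->on_rq)
		update_rq_clock(rq);
	prev->sched_class->put_prev_task(rq, prev);
}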
Mike Galbraith authored and Ingo Molnar committed Dec 6, 2011
1 parent 76854c7, commit 916671c
Showing 2 changed files with 13 additions and 0 deletions.
kernel/sched/core.c (7 additions, 0 deletions)
@@ -4547,6 +4547,13 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
 		 */
 		if (preempt && rq != p_rq)
 			resched_task(p_rq->curr);
+	} else {
+		/*
+		 * We might have set it in task_yield_fair(), but are
+		 * not going to schedule(), so don't want to skip
+		 * the next update.
+		 */
+		rq->skip_clock_update = 0;
 	}
 
 out:
kernel/sched/fair.c (6 additions, 0 deletions)
@@ -3075,6 +3075,12 @@ static void yield_task_fair(struct rq *rq)
 		 * Update run-time statistics of the 'current'.
 		 */
 		update_curr(cfs_rq);
+		/*
+		 * Tell update_rq_clock() that we've just updated,
+		 * so we don't do microscopic update in schedule()
+		 * and double the fastpath cost.
+		 */
+		rq->skip_clock_update = 1;
 	}
 
 	set_skip_buddy(se);
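
To see the flag's lifecycle end to end outside the kernel tree, here is a minimal stand-alone model of the pattern. All names (toy_rq, toy_yield, and so on) are hypothetical; the code only mimics the set/skip/clear sequence and is not kernel code:

#include <stdio.h>
#include <time.h>

/* Toy runqueue: just a clock and the skip flag. */
struct toy_rq {
	long long clock_ns;
	int skip_clock_update;
};

static long long now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Analogue of update_rq_clock(): a no-op while the skip flag is set. */
static void toy_update_clock(struct toy_rq *rq)
{
	if (rq->skip_clock_update) {
		printf("clock update skipped\n");
		return;
	}
	rq->clock_ns = now_ns();
	printf("clock updated\n");
}

/* Analogue of yield_task_fair(): refresh once, then mark the follow-up
 * update on the schedule() path as unnecessary. */
static void toy_yield(struct toy_rq *rq)
{
	toy_update_clock(rq);
	rq->skip_clock_update = 1;
}

/* Analogue of the schedule() path: its clock update is skipped, and the
 * flag is cleared so the next update runs for real. */
static void toy_schedule(struct toy_rq *rq)
{
	toy_update_clock(rq);
	rq->skip_clock_update = 0;
}

int main(void)
{
	struct toy_rq rq = { 0, 0 };

	toy_yield(&rq);		/* prints "clock updated" */
	toy_schedule(&rq);	/* prints "clock update skipped" */
	toy_update_clock(&rq);	/* prints "clock updated" again */
	return 0;
}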