sched/idle: Clear polling before descheduling the idle thread
Currently, the only real guarantee provided by the polling bit is
that, if you hold rq->lock and the polling bit is set, then you can
set need_resched to force a reschedule.

The only reason the lock is needed is that the idle thread might not
be running at all when setting its need_resched bit, and rq->lock
keeps it pinned.

This is easy to fix: just clear the polling bit before scheduling.
Now the idle thread's polling bit is only ever set when
rq->curr == rq->idle.
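
As background (not part of this patch): the waker is what consumes that
guarantee, skipping the reschedule IPI when the target is known to be
polling. A loose sketch, simplified from resched_task() as it looked
around this commit:

	set_tsk_need_resched(rq->idle);

	/* NEED_RESCHED must be visible before we test the polling bit. */
	smp_mb();

	if (!tsk_is_polling(rq->idle))
		smp_send_reschedule(cpu_of(rq));	/* not polling: kick the CPU */
	/* else: the idle loop is spinning on need_resched() and will notice. */

With the invariant established here, a set polling bit on rq->idle means
the idle task really is the one running, so the plain need_resched write
is enough to make the CPU reschedule.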

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: nicolas.pitre@linaro.org
Cc: daniel.lezcano@linaro.org
Cc: umgwanakikbuti@gmail.com
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/b2059fcb4c613d520cb503b6fad6e47033c7c203.1401902905.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Andy Lutomirski authored and Ingo Molnar committed Jun 5, 2014
1 parent dfc68f2 commit 82c65d6
Showing 1 changed file with 25 additions and 1 deletion.
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -67,6 +67,10 @@ void __weak arch_cpu_idle(void)
  * cpuidle_idle_call - the main idle function
  *
  * NOTE: no locks or semaphores should be used here
+ *
+ * On archs that support TIF_POLLING_NRFLAG, is called with polling
+ * set, and it returns with polling set. If it ever stops polling, it
+ * must clear the polling bit.
  */
 static void cpuidle_idle_call(void)
 {
@@ -175,10 +179,22 @@ static void cpuidle_idle_call(void)
 
 /*
  * Generic idle loop implementation
+ *
+ * Called with polling cleared.
  */
 static void cpu_idle_loop(void)
 {
 	while (1) {
+		/*
+		 * If the arch has a polling bit, we maintain an invariant:
+		 *
+		 * Our polling bit is clear if we're not scheduled (i.e. if
+		 * rq->curr != rq->idle). This means that, if rq->idle has
+		 * the polling bit set, then setting need_resched is
+		 * guaranteed to cause the cpu to reschedule.
+		 */
+
+		__current_set_polling();
 		tick_nohz_idle_enter();
 
 		while (!need_resched()) {
@@ -218,6 +234,15 @@ static void cpu_idle_loop(void)
 		 */
 		preempt_set_need_resched();
 		tick_nohz_idle_exit();
+		__current_clr_polling();
+
+		/*
+		 * We promise to reschedule if need_resched is set while
+		 * polling is set. That means that clearing polling
+		 * needs to be visible before rescheduling.
+		 */
+		smp_mb__after_atomic();
+
 		schedule_preempt_disabled();
 	}
 }
@@ -239,7 +264,6 @@ void cpu_startup_entry(enum cpuhp_state state)
 	 */
 	boot_init_stack_canary();
 #endif
-	__current_set_polling();
 	arch_cpu_idle_prepare();
 	cpu_idle_loop();
 }
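
The smp_mb__after_atomic() added above pairs with the full barrier on
the waker side (see the sketch under the commit message). A rough model
of the two CPUs, paraphrasing the comment in the hunk rather than
quoting kernel code:

	/*
	 * CPU A (idle, about to deschedule):	CPU B (waker):
	 *
	 *   __current_clr_polling();		  set need_resched on rq->idle;
	 *   smp_mb__after_atomic();		  smp_mb();
	 *   schedule_preempt_disabled();	  if (rq->idle still polling)
	 *						rely on the poll loop;
	 *					  else
	 *						smp_send_reschedule(cpu);
	 */

The clearing of polling must be visible to such a waker before this CPU
actually reschedules; otherwise the waker could trust a stale polling
bit, skip the kick, and leave need_resched set on a task that is no
longer running.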
