cpuidle, sched: Use smp_mb__after_atomic() in current_clr_polling()

On architectures that use the polling bit, current_clr_polling() relies on
smp_mb() to ensure that the cleared polling bit is visible to other cores
before TIF_NEED_RESCHED is checked.
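
(Illustration, not part of the patch: a minimal userspace analogue of this
pairing, using C11 atomics and POSIX threads. The variables polling and
need_resched are stand-ins for TIF_POLLING_NRFLAG and TIF_NEED_RESCHED, and
the seq_cst fences play the roles of smp_mb__after_atomic() on the idle side
and smp_mb() in resched_curr().)

  /* polling_pairing.c -- illustrative sketch only, not kernel code. */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_bool polling = true;        /* idle CPU says "no IPI needed" */
  static atomic_bool need_resched = false;  /* waker requests a reschedule   */

  /* Idle side: mirrors current_clr_polling() plus the need_resched check. */
  static void *idle_side(void *ret)
  {
          atomic_store_explicit(&polling, false, memory_order_relaxed);
          /* The cleared polling bit must be visible before we sample
           * need_resched, otherwise a concurrent wakeup can be lost. */
          atomic_thread_fence(memory_order_seq_cst);
          *(bool *)ret = atomic_load_explicit(&need_resched, memory_order_relaxed);
          return NULL;
  }

  /* Waker side: mirrors the resched_curr() half of the pairing. */
  static void *waker_side(void *ret)
  {
          atomic_store_explicit(&need_resched, true, memory_order_relaxed);
          /* Matching full barrier on the waker. */
          atomic_thread_fence(memory_order_seq_cst);
          /* If the target no longer looks like it is polling, an IPI is needed. */
          *(bool *)ret = !atomic_load_explicit(&polling, memory_order_relaxed);
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;
          bool saw_resched = false, must_ipi = false;

          pthread_create(&a, NULL, idle_side, &saw_resched);
          pthread_create(&b, NULL, waker_side, &must_ipi);
          pthread_join(a, NULL);
          pthread_join(b, NULL);

          /* With both fences in place, at least one of these is true; both
           * false would be the lost-wakeup case the barrier prevents. */
          printf("idle saw need_resched=%d, waker sends IPI=%d\n",
                 saw_resched, must_ipi);
          return 0;
  }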

However, smp_mb() can be costly. Given that clear_bit() is an atomic
operation, replacing smp_mb() with smp_mb__after_atomic() is appropriate.

Many architectures implement smp_mb__after_atomic() as a lighter-weight
barrier than smp_mb(); on x86, for instance, it is a no-op because atomic
read-modify-write operations are already serializing. This change therefore
removes a full memory barrier from the cpuidle wake-up path, saving several
CPU cycles and reducing wake-up latency.
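
(Also an illustration rather than kernel source: the way an architecture can
make the post-atomic barrier cheap can be mimicked in userspace with a
hypothetical demo_mb_after_atomic() macro. On x86 a LOCK-prefixed
read-modify-write already acts as a full hardware barrier, so only a compiler
barrier is kept here; other targets fall back to a real fence.)

  /* after_atomic_demo.c -- hypothetical stand-in names, not the kernel's headers. */
  #include <stdatomic.h>
  #include <stdio.h>

  #if defined(__x86_64__) || defined(__i386__)
  /* The preceding locked RMW already orders at the hardware level; keep only
   * a compiler barrier. (Portable C11 code should not normally rely on this;
   * the kernel's per-arch headers are allowed to.) */
  #define demo_mb_after_atomic()  __asm__ __volatile__("" ::: "memory")
  #else
  /* Generic fallback: a real full fence, the analogue of smp_mb(). */
  #define demo_mb_after_atomic()  atomic_thread_fence(memory_order_seq_cst)
  #endif

  static atomic_uint thread_flags = 1u;  /* bit 0 plays the polling bit */

  int main(void)
  {
          /* Atomic RMW clearing the bit (analogue of clear_bit()): atomic,
           * but relaxed, i.e. with no ordering of its own. */
          atomic_fetch_and_explicit(&thread_flags, ~1u, memory_order_relaxed);

          /* Cheap on x86, a real fence elsewhere. */
          demo_mb_after_atomic();

          printf("flags after clear: %u\n",
                 atomic_load_explicit(&thread_flags, memory_order_relaxed));
          return 0;
  }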

Architectures that do not use the polling bit will retain the original
smp_mb() behavior to ensure that existing dependencies remain unaffected.

Signed-off-by: Yujun Dong <yujundong@pascal-lab.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20241230141624.155356-1-yujundong@pascal-lab.net
Yujun Dong authored and Ingo Molnar committed Mar 20, 2025
1 parent b521730 commit 3785c7d
Showing 1 changed file with 16 additions and 7 deletions.
include/linux/sched/idle.h (16 additions, 7 deletions):

@@ -79,6 +79,21 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
 	return unlikely(tif_need_resched());
 }
 
+static __always_inline void current_clr_polling(void)
+{
+	__current_clr_polling();
+
+	/*
+	 * Ensure we check TIF_NEED_RESCHED after we clear the polling bit.
+	 * Once the bit is cleared, we'll get IPIs with every new
+	 * TIF_NEED_RESCHED and the IPI handler, scheduler_ipi(), will also
+	 * fold.
+	 */
+	smp_mb__after_atomic(); /* paired with resched_curr() */
+
+	preempt_fold_need_resched();
+}
+
 #else
 static inline void __current_set_polling(void) { }
 static inline void __current_clr_polling(void) { }
@@ -91,21 +106,15 @@ static inline bool __must_check current_clr_polling_and_test(void)
 {
 	return unlikely(tif_need_resched());
 }
-#endif
 
 static __always_inline void current_clr_polling(void)
 {
 	__current_clr_polling();
 
-	/*
-	 * Ensure we check TIF_NEED_RESCHED after we clear the polling bit.
-	 * Once the bit is cleared, we'll get IPIs with every new
-	 * TIF_NEED_RESCHED and the IPI handler, scheduler_ipi(), will also
-	 * fold.
-	 */
 	smp_mb(); /* paired with resched_curr() */
 
 	preempt_fold_need_resched();
 }
+#endif
 
 #endif /* _LINUX_SCHED_IDLE_H */
