sched-rt-2022-10-05

Tagged 05 Oct 2022, 12:08
 Introduce preempt_[dis|en]able_nested() and use them to clean up
 various places which have open-coded PREEMPT_RT conditionals.

 On PREEMPT_RT enabled kernels, spinlocks and rwlocks disable neither
 preemption nor interrupts. However, a few places depend on the implicit
 preemption/interrupt disabling of those locks, e.g. seqcount write
 sections and per CPU statistics updates.

 PREEMPT_RT therefore added open-coded CONFIG_PREEMPT_RT conditionals to
 disable/enable preemption in the affected code all over the place. That
 is hard to read and does not explain why the extra preemption disabling
 is necessary.

 Linus suggested providing helper functions (preempt_disable_nested() and
 preempt_enable_nested()) and using them in the affected places. On !RT
 enabled kernels these functions are NOPs, but they contain a lockdep
 assertion to validate that preemption is actually disabled, which catches
 call sites that forgot to disable it.
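
 A minimal sketch of how such helpers can look, assuming only the
 semantics described above (the authoritative definitions are the ones
 added by this series, not this sketch):

    /* Sketch: disable/enable preemption only on PREEMPT_RT */
    static __always_inline void preempt_disable_nested(void)
    {
            if (IS_ENABLED(CONFIG_PREEMPT_RT))
                    preempt_disable();
            else
                    /* !RT: preemption must already be disabled here */
                    lockdep_assert_preemption_disabled();
    }

    static __always_inline void preempt_enable_nested(void)
    {
            if (IS_ENABLED(CONFIG_PREEMPT_RT))
                    preempt_enable();
    }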

 Clean up the affected code paths in mm, dentry and lib.
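
 With the helpers in place, the hypothetical call site from above reduces
 to something like:

    preempt_disable_nested();       /* NOP plus lockdep assert on !RT */
    write_seqcount_begin(&stats->seq);
    /* ... update per CPU statistics ... */
    write_seqcount_end(&stats->seq);
    preempt_enable_nested();

 which reads the same on RT and !RT kernels and documents the dependency
 on disabled preemption at the call site.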