Merge tag 'core-rcu-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Thomas Gleixner:
 "RCU, LKMM and KCSAN updates collected by Paul McKenney.

  RCU:
   - Avoid cpuinfo-induced IPI pileups and idle-CPU IPIs

   - Lockdep-RCU updates reducing the need for __maybe_unused

   - Tasks-RCU updates

   - Miscellaneous fixes

   - Documentation updates

   - Torture-test updates

  KCSAN:
   - Updates for selftests, avoiding setting watchpoints on NULL pointers

   - Fix to watchpoint encoding

  LKMM:
   - Updates for documentation, along with some updates to example-code
     litmus tests"

* tag 'core-rcu-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  srcu: Take early exit on memory-allocation failure
  rcu/tree: Defer kvfree_rcu() allocation to a clean context
  rcu: Do not report strict GPs for outgoing CPUs
  rcu: Fix a typo in rcu_blocking_is_gp() header comment
  rcu: Prevent lockdep-RCU splats on lock acquisition/release
  rcu/tree: nocb: Avoid raising softirq for offloaded ready-to-execute CBs
  rcu,ftrace: Fix ftrace recursion
  rcu/tree: Make struct kernel_param_ops definitions const
  rcu/tree: Add a warning if CPU being onlined did not report QS already
  rcu: Clarify nocb kthreads naming in RCU_NOCB_CPU config
  rcu: Fix single-CPU check in rcu_blocking_is_gp()
  rcu: Implement rcu_segcblist_is_offloaded() config dependent
  list.h: Update comment to explicitly note circular lists
  rcu: Panic after fixed number of stalls
  x86/smpboot: Move rcu_cpu_starting() earlier
  rcu: Allow rcu_irq_enter_check_tick() from NMI
  tools/memory-model: Label MP tests' producers and consumers
  tools/memory-model: Use "buf" and "flag" for message-passing tests
  tools/memory-model: Add types to litmus tests
  tools/memory-model: Add a glossary of LKMM terms
  ...
Linus Torvalds committed Dec 15, 2020
2 parents 1ac0884 + 50df51d commit 8c1dccc
Showing 89 changed files with 1,822 additions and 320 deletions.
50 changes: 40 additions & 10 deletions Documentation/RCU/Design/Requirements/Requirements.rst
@@ -1929,16 +1929,46 @@ The Linux-kernel CPU-hotplug implementation has notifiers that are used
to allow the various kernel subsystems (including RCU) to respond
appropriately to a given CPU-hotplug operation. Most RCU operations may
be invoked from CPU-hotplug notifiers, including even synchronous
grace-period operations such as ``synchronize_rcu()`` and
``synchronize_rcu_expedited()``.

However, all-callback-wait operations such as ``rcu_barrier()`` are also
not supported, due to the fact that there are phases of CPU-hotplug
operations where the outgoing CPU's callbacks will not be invoked until
after the CPU-hotplug operation ends, which could also result in
deadlock. Furthermore, ``rcu_barrier()`` blocks CPU-hotplug operations
during its execution, which results in another type of deadlock when
invoked from a CPU-hotplug notifier.
grace-period operations such as (``synchronize_rcu()`` and
``synchronize_rcu_expedited()``). However, these synchronous operations
do block and therefore cannot be invoked from notifiers that execute via
``stop_machine()``, specifically those between the ``CPUHP_AP_OFFLINE``
and ``CPUHP_AP_ONLINE`` states.

In addition, all-callback-wait operations such as ``rcu_barrier()`` may
not be invoked from any CPU-hotplug notifier. This restriction is due
to the fact that there are phases of CPU-hotplug operations where the
outgoing CPU's callbacks will not be invoked until after the CPU-hotplug
operation ends, which could also result in deadlock. Furthermore,
``rcu_barrier()`` blocks CPU-hotplug operations during its execution,
which results in another type of deadlock when invoked from a CPU-hotplug
notifier.

Finally, RCU must avoid deadlocks due to interaction between hotplug,
timers and grace period processing. It does so by maintaining its own set
of books that duplicate the centrally maintained ``cpu_online_mask``,
and also by reporting quiescent states explicitly when a CPU goes
offline. This explicit reporting of quiescent states avoids any need
for the force-quiescent-state loop (FQS) to report quiescent states for
offline CPUs. However, as a debugging measure, the FQS loop does splat
if offline CPUs block an RCU grace period for too long.

An offline CPU's quiescent state will be reported either:

1. As the CPU goes offline using RCU's hotplug notifier (``rcu_report_dead()``).
2. When grace period initialization (``rcu_gp_init()``) detects a
race either with CPU offlining or with a task unblocking on a leaf
``rcu_node`` structure whose CPUs are all offline.

The CPU-online path (``rcu_cpu_starting()``) should never need to report
a quiescent state for an offline CPU. However, as a debugging measure,
it does emit a warning if a quiescent state was not already reported
for that CPU.

During the checking/modification of RCU's hotplug bookkeeping, the
corresponding CPU's leaf node lock is held. This avoids race conditions
between RCU's hotplug notifier hooks, the grace period initialization
code, and the FQS loop, all of which refer to or modify this bookkeeping.

Scheduler and RCU
~~~~~~~~~~~~~~~~~
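
As a sketch of the notifier rules documented in the hunk above (hypothetical; the demo_* names and this code are not part of the commit), a dynamically allocated CPUHP_BP_PREPARE_DYN state runs its callbacks on the control CPU in process context, so synchronize_rcu() is permitted there, while rcu_barrier() remains off-limits in any hotplug notifier:

    #include <linux/cpu.h>
    #include <linux/cpuhotplug.h>
    #include <linux/rcupdate.h>

    /* Teardown runs on the control CPU in process context, not via stop_machine(). */
    static int demo_prepare_down(unsigned int cpu)
    {
            /* Unpublish this CPU's data here, then wait for readers. */
            synchronize_rcu();      /* OK: outside the CPUHP_AP_OFFLINE..CPUHP_AP_ONLINE range */
            return 0;               /* rcu_barrier() would NOT be OK here */
    }

    static int __init demo_init(void)
    {
            int ret;

            ret = cpuhp_setup_state_nocalls(CPUHP_BP_PREPARE_DYN, "demo:prepare",
                                            NULL, demo_prepare_down);
            return ret < 0 ? ret : 0;
    }
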
7 changes: 7 additions & 0 deletions Documentation/RCU/checklist.rst
@@ -314,6 +314,13 @@ over a rather long period of time, but improvements are always welcome!
shared between readers and updaters. Additional primitives
are provided for this case, as discussed in lockdep.txt.

One exception to this rule is when data is only ever added to
the linked data structure, and is never removed during any
time that readers might be accessing that structure. In such
cases, READ_ONCE() may be used in place of rcu_dereference()
and the read-side markers (rcu_read_lock() and rcu_read_unlock(),
for example) may be omitted.

10. Conversely, if you are in an RCU read-side critical section,
and you don't hold the appropriate update-side lock, you -must-
use the "_rcu()" variants of the list macros. Failing to do so
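
A minimal sketch of the add-only exception described in the hunk above (hypothetical code, not from this commit): because nodes are never removed while readers run, READ_ONCE() stands in for rcu_dereference() and the read-side markers can be dropped:

    #include <linux/compiler.h>
    #include <linux/rcupdate.h>

    struct node {
            int key;
            int value;
            struct node *next;
    };

    static struct node *head;       /* add-only: nodes are never removed or freed */

    /* Updater, serialized by the caller's update-side lock. */
    static void add_node(struct node *n)
    {
            n->next = head;
            smp_store_release(&head, n);    /* publish the fully initialized node */
    }

    /* Reader: no rcu_read_lock()/rcu_read_unlock() needed for add-only data. */
    static int lookup(int key)
    {
            struct node *p;

            for (p = READ_ONCE(head); p; p = READ_ONCE(p->next))
                    if (p->key == key)
                            return p->value;
            return -1;
    }
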
6 changes: 6 additions & 0 deletions Documentation/RCU/rcu_dereference.rst
@@ -28,6 +28,12 @@ Follow these rules to keep your RCU code working properly:
for an example where the compiler can in fact deduce the exact
value of the pointer, and thus cause misordering.

- In the special case where data is added but is never removed
while readers are accessing the structure, READ_ONCE() may be used
instead of rcu_dereference(). In this case, use of READ_ONCE()
takes on the role of the lockless_dereference() primitive that
was removed in v4.15.

- You are only permitted to use rcu_dereference on pointer values.
The compiler simply knows too much about integral values to
trust it to carry dependencies through integer operations.
3 changes: 1 addition & 2 deletions Documentation/RCU/whatisRCU.rst
@@ -497,8 +497,7 @@ long -- there might be other high-priority work to be done.
In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

void call_rcu(struct rcu_head * head,
void (*func)(struct rcu_head *head));
void call_rcu(struct rcu_head *head, rcu_callback_t func);

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
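
Typical usage of the call_rcu() API quoted above might look like the following sketch (struct foo and its callers are hypothetical):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo {
            int data;
            struct rcu_head rcu;
    };

    static void free_foo_rcu(struct rcu_head *head)
    {
            kfree(container_of(head, struct foo, rcu));
    }

    /* Caller has already unlinked fp from all reader-visible structures. */
    static void retire_foo(struct foo *fp)
    {
            call_rcu(&fp->rcu, free_foo_rcu);       /* freed only after a grace period */
    }
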
2 changes: 1 addition & 1 deletion Documentation/memory-barriers.txt
@@ -1870,7 +1870,7 @@ There are some more advanced barrier functions:

These are for use with atomic RMW functions that do not imply memory
barriers, but where the code needs a memory barrier. Examples for atomic
RMW functions that do not imply are memory barrier are e.g. add,
RMW functions that do not imply a memory barrier are e.g. add,
subtract, (failed) conditional operations, _relaxed functions,
but not atomic_read or atomic_set. A common example where a memory
barrier may be required is when atomic ops are used for reference
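
A sketch of the corrected sentence (hypothetical example, not from the patch): a non-value-returning RMW such as atomic_dec() implies no memory barrier, so prior accesses must be ordered explicitly when that matters:

    #include <linux/atomic.h>

    struct obj {
            atomic_t refcnt;
            int payload;
    };

    static void put_obj(struct obj *o)
    {
            o->payload = 0;                 /* must be ordered before the decrement */
            smp_mb__before_atomic();        /* atomic_dec() itself implies no barrier */
            atomic_dec(&o->refcnt);
    }
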
16 changes: 15 additions & 1 deletion arch/x86/kernel/cpu/aperfmperf.c
@@ -14,11 +14,13 @@
#include <linux/cpufreq.h>
#include <linux/smp.h>
#include <linux/sched/isolation.h>
#include <linux/rcupdate.h>

#include "cpu.h"

struct aperfmperf_sample {
unsigned int khz;
atomic_t scfpending;
ktime_t time;
u64 aperf;
u64 mperf;
@@ -62,17 +64,20 @@ static void aperfmperf_snapshot_khz(void *dummy)
s->aperf = aperf;
s->mperf = mperf;
s->khz = div64_u64((cpu_khz * aperf_delta), mperf_delta);
atomic_set_release(&s->scfpending, 0);
}

static bool aperfmperf_snapshot_cpu(int cpu, ktime_t now, bool wait)
{
s64 time_delta = ktime_ms_delta(now, per_cpu(samples.time, cpu));
struct aperfmperf_sample *s = per_cpu_ptr(&samples, cpu);

/* Don't bother re-computing within the cache threshold time. */
if (time_delta < APERFMPERF_CACHE_THRESHOLD_MS)
return true;

smp_call_function_single(cpu, aperfmperf_snapshot_khz, NULL, wait);
if (!atomic_xchg(&s->scfpending, 1) || wait)
smp_call_function_single(cpu, aperfmperf_snapshot_khz, NULL, wait);

/* Return false if the previous iteration was too long ago. */
return time_delta <= APERFMPERF_STALE_THRESHOLD_MS;
@@ -89,6 +94,9 @@ unsigned int aperfmperf_get_khz(int cpu)
if (!housekeeping_cpu(cpu, HK_FLAG_MISC))
return 0;

if (rcu_is_idle_cpu(cpu))
return 0; /* Idle CPUs are completely uninteresting. */

aperfmperf_snapshot_cpu(cpu, ktime_get(), true);
return per_cpu(samples.khz, cpu);
}
@@ -108,6 +116,8 @@ void arch_freq_prepare_all(void)
for_each_online_cpu(cpu) {
if (!housekeeping_cpu(cpu, HK_FLAG_MISC))
continue;
if (rcu_is_idle_cpu(cpu))
continue; /* Idle CPUs are completely uninteresting. */
if (!aperfmperf_snapshot_cpu(cpu, now, false))
wait = true;
}
@@ -118,6 +128,8 @@

unsigned int arch_freq_get_on_cpu(int cpu)
{
struct aperfmperf_sample *s = per_cpu_ptr(&samples, cpu);

if (!cpu_khz)
return 0;

@@ -131,6 +143,8 @@ unsigned int arch_freq_get_on_cpu(int cpu)
return per_cpu(samples.khz, cpu);

msleep(APERFMPERF_REFRESH_DELAY_MS);
atomic_set(&s->scfpending, 1);
smp_mb(); /* ->scfpending before smp_call_function_single(). */
smp_call_function_single(cpu, aperfmperf_snapshot_khz, NULL, 1);

return per_cpu(samples.khz, cpu);
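
Distilled from the aperfmperf hunks above, the IPI-pileup avoidance works roughly like this sketch (names shortened; the authoritative code is in aperfmperf.c): the ->scfpending flag is claimed with atomic_xchg() before sending an IPI and released at the end of the remote handler, so concurrent requesters skip redundant IPIs:

    static void snapshot_on_target(void *dummy)
    {
            struct aperfmperf_sample *s = this_cpu_ptr(&samples);

            /* ... read APERF/MPERF and update s->khz ... */
            atomic_set_release(&s->scfpending, 0);  /* publish snapshot, then allow new IPIs */
    }

    static void request_snapshot(int cpu, bool wait)
    {
            struct aperfmperf_sample *s = per_cpu_ptr(&samples, cpu);

            /* Send an IPI only if none is already pending, or if the caller must wait. */
            if (!atomic_xchg(&s->scfpending, 1) || wait)
                    smp_call_function_single(cpu, snapshot_on_target, NULL, wait);
    }
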
2 changes: 0 additions & 2 deletions arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -794,8 +794,6 @@ void mtrr_ap_init(void)
if (!use_intel() || mtrr_aps_delayed_init)
return;

rcu_cpu_starting(smp_processor_id());

/*
* Ideally we should hold mtrr_mutex here to avoid mtrr entries
* changed, but this routine will be called in cpu boot time,
1 change: 1 addition & 0 deletions arch/x86/kernel/smpboot.c
@@ -229,6 +229,7 @@ static void notrace start_secondary(void *unused)
#endif
cpu_init_exception_handling();
cpu_init();
rcu_cpu_starting(raw_smp_processor_id());
x86_cpuinit.early_percpu_clock_init();
preempt_disable();
smp_callin();
1 change: 1 addition & 0 deletions include/linux/kernel.h
@@ -536,6 +536,7 @@ extern int panic_on_warn;
extern unsigned long panic_on_taint;
extern bool panic_on_taint_nousertaint;
extern int sysctl_panic_on_rcu_stall;
extern int sysctl_max_rcu_stall_to_panic;
extern int sysctl_panic_on_stackoverflow;

extern bool crash_kexec_post_notifiers;
2 changes: 1 addition & 1 deletion include/linux/list.h
@@ -9,7 +9,7 @@
#include <linux/kernel.h>

/*
* Simple doubly linked list implementation.
* Circular doubly linked list implementation.
*
* Some of the internal functions ("__xxx") are useful when
* manipulating whole lists rather than single entries, as
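
A small sketch of what the circularity noted in the updated comment means in practice (demo names hypothetical): an empty head points at itself, and iteration terminates when the cursor wraps back to the head rather than on NULL:

    #include <linux/list.h>

    struct demo_item {
            struct list_head link;
            int val;
    };

    static LIST_HEAD(demo_list);    /* empty: demo_list.next == demo_list.prev == &demo_list */

    static int demo_sum(void)
    {
            struct demo_item *it;
            int sum = 0;

            list_for_each_entry(it, &demo_list, link)       /* stops at &demo_list, not NULL */
                    sum += it->val;
            return sum;
    }
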
6 changes: 6 additions & 0 deletions include/linux/lockdep.h
@@ -375,6 +375,12 @@ static inline void lockdep_unregister_key(struct lock_class_key *key)

#define lockdep_depth(tsk) (0)

/*
* Dummy forward declarations, allow users to write less ifdef-y code
* and depend on dead code elimination.
*/
extern int lock_is_held(const void *);
extern int lockdep_is_held(const void *);
#define lockdep_is_held_type(l, r) (1)

#define lockdep_assert_held(l) do { (void)(l); } while (0)
11 changes: 6 additions & 5 deletions include/linux/rcupdate.h
@@ -241,6 +241,11 @@ bool rcu_lockdep_current_cpu_online(void);
static inline bool rcu_lockdep_current_cpu_online(void) { return true; }
#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */

extern struct lockdep_map rcu_lock_map;
extern struct lockdep_map rcu_bh_lock_map;
extern struct lockdep_map rcu_sched_lock_map;
extern struct lockdep_map rcu_callback_map;

#ifdef CONFIG_DEBUG_LOCK_ALLOC

static inline void rcu_lock_acquire(struct lockdep_map *map)
@@ -253,10 +258,6 @@ static inline void rcu_lock_release(struct lockdep_map *map)
lock_release(map, _THIS_IP_);
}

extern struct lockdep_map rcu_lock_map;
extern struct lockdep_map rcu_bh_lock_map;
extern struct lockdep_map rcu_sched_lock_map;
extern struct lockdep_map rcu_callback_map;
int debug_lockdep_rcu_enabled(void);
int rcu_read_lock_held(void);
int rcu_read_lock_bh_held(void);
@@ -327,7 +328,7 @@ static inline void rcu_preempt_sleep_check(void) { }

#else /* #ifdef CONFIG_PROVE_RCU */

#define RCU_LOCKDEP_WARN(c, s) do { } while (0)
#define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
#define rcu_sleep_check() do { } while (0)

#endif /* #else #ifdef CONFIG_PROVE_RCU */
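
The lockdep.h dummy declarations and the "0 && (c)" form of RCU_LOCKDEP_WARN() shown above work together; as a hedged sketch (demo_lock and demo_update are hypothetical), the condition is still parsed and type-checked, while dead-code elimination drops the never-evaluated lockdep_is_held() call, so no #ifdef or __maybe_unused is needed when lockdep is disabled:

    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);

    static void demo_update(void)
    {
            /*
             * With CONFIG_PROVE_RCU=n this expands to do { } while (0 && (cond)),
             * so lockdep_is_held() is referenced (dummy extern) but never called.
             */
            RCU_LOCKDEP_WARN(!lockdep_is_held(&demo_lock),
                             "demo_update() needs demo_lock");
    }
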
4 changes: 2 additions & 2 deletions include/linux/rcupdate_trace.h
@@ -11,10 +11,10 @@
#include <linux/sched.h>
#include <linux/rcupdate.h>

#ifdef CONFIG_DEBUG_LOCK_ALLOC

extern struct lockdep_map rcu_trace_lock_map;

#ifdef CONFIG_DEBUG_LOCK_ALLOC

static inline int rcu_read_lock_trace_held(void)
{
return lock_is_held(&rcu_trace_lock_map);
2 changes: 2 additions & 0 deletions include/linux/rcutiny.h
@@ -89,6 +89,8 @@ static inline void rcu_irq_enter_irqson(void) { }
static inline void rcu_irq_exit(void) { }
static inline void rcu_irq_exit_preempt(void) { }
static inline void rcu_irq_exit_check_preempt(void) { }
#define rcu_is_idle_cpu(cpu) \
(is_idle_task(current) && !in_nmi() && !in_irq() && !in_serving_softirq())
static inline void exit_rcu(void) { }
static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
1 change: 1 addition & 0 deletions include/linux/rcutree.h
@@ -50,6 +50,7 @@ void rcu_irq_exit(void);
void rcu_irq_exit_preempt(void);
void rcu_irq_enter_irqson(void);
void rcu_irq_exit_irqson(void);
bool rcu_is_idle_cpu(int cpu);

#ifdef CONFIG_PROVE_RCU
void rcu_irq_exit_check_preempt(void);
2 changes: 0 additions & 2 deletions include/linux/sched/task.h
@@ -47,9 +47,7 @@ extern spinlock_t mmlist_lock;
extern union thread_union init_thread_union;
extern struct task_struct init_task;

#ifdef CONFIG_PROVE_RCU
extern int lockdep_tasklist_lock_is_held(void);
#endif /* #ifdef CONFIG_PROVE_RCU */

extern asmlinkage void schedule_tail(struct task_struct *prev);
extern void init_idle(struct task_struct *idle, int cpu);
12 changes: 0 additions & 12 deletions include/net/sch_generic.h
@@ -435,7 +435,6 @@ struct tcf_block {
struct mutex proto_destroy_lock; /* Lock for proto_destroy hashtable. */
};

#ifdef CONFIG_PROVE_LOCKING
static inline bool lockdep_tcf_chain_is_locked(struct tcf_chain *chain)
{
return lockdep_is_held(&chain->filter_chain_lock);
@@ -445,17 +444,6 @@ static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
{
return lockdep_is_held(&tp->lock);
}
#else
static inline bool lockdep_tcf_chain_is_locked(struct tcf_block *chain)
{
return true;
}

static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
{
return true;
}
#endif /* #ifdef CONFIG_PROVE_LOCKING */

#define tcf_chain_dereference(p, chain) \
rcu_dereference_protected(p, lockdep_tcf_chain_is_locked(chain))
2 changes: 0 additions & 2 deletions include/net/sock.h
@@ -1566,13 +1566,11 @@ do { \
lockdep_init_map(&(sk)->sk_lock.dep_map, (name), (key), 0); \
} while (0)

#ifdef CONFIG_LOCKDEP
static inline bool lockdep_sock_is_held(const struct sock *sk)
{
return lockdep_is_held(&sk->sk_lock) ||
lockdep_is_held(&sk->sk_lock.slock);
}
#endif

void lock_sock_nested(struct sock *sk, int subclass);

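
Why dropping the #ifdef CONFIG_LOCKDEP guard around lockdep_sock_is_held() is safe, as a sketch (demo_get_filter is hypothetical, though it mirrors existing callers): the helper is only ever passed as the condition of checking macros such as rcu_dereference_protected(), whose condition is dead-code-eliminated when lockdep is off, so the dummy lockdep_is_held() declaration is all that is needed:

    #include <net/sock.h>

    static struct sk_filter *demo_get_filter(struct sock *sk)
    {
            /* The lockdep condition compiles away with CONFIG_PROVE_RCU=n. */
            return rcu_dereference_protected(sk->sk_filter,
                                             lockdep_sock_is_held(sk));
    }
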
20 changes: 11 additions & 9 deletions kernel/kcsan/encoding.h
@@ -37,18 +37,20 @@
*/
#define WATCHPOINT_ADDR_BITS (BITS_PER_LONG-1 - WATCHPOINT_SIZE_BITS)

/*
* Masks to set/retrieve the encoded data.
*/
#define WATCHPOINT_WRITE_MASK BIT(BITS_PER_LONG-1)
#define WATCHPOINT_SIZE_MASK \
GENMASK(BITS_PER_LONG-2, BITS_PER_LONG-2 - WATCHPOINT_SIZE_BITS)
#define WATCHPOINT_ADDR_MASK \
GENMASK(BITS_PER_LONG-3 - WATCHPOINT_SIZE_BITS, 0)
/* Bitmasks for the encoded watchpoint access information. */
#define WATCHPOINT_WRITE_MASK BIT(BITS_PER_LONG-1)
#define WATCHPOINT_SIZE_MASK GENMASK(BITS_PER_LONG-2, WATCHPOINT_ADDR_BITS)
#define WATCHPOINT_ADDR_MASK GENMASK(WATCHPOINT_ADDR_BITS-1, 0)
static_assert(WATCHPOINT_ADDR_MASK == (1UL << WATCHPOINT_ADDR_BITS) - 1);
static_assert((WATCHPOINT_WRITE_MASK ^ WATCHPOINT_SIZE_MASK ^ WATCHPOINT_ADDR_MASK) == ~0UL);

static inline bool check_encodable(unsigned long addr, size_t size)
{
return size <= MAX_ENCODABLE_SIZE;
/*
* While we can encode addrs<PAGE_SIZE, avoid crashing with a NULL
* pointer deref inside KCSAN.
*/
return addr >= PAGE_SIZE && size <= MAX_ENCODABLE_SIZE;
}

static inline long
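
For reference, the layout enforced by the new masks and static_asserts packs a watchpoint roughly as in the following sketch (mirroring, but not copied verbatim from, the in-tree encode helper): one is_write bit at the top, the size field above the address bits, and the low WATCHPOINT_ADDR_BITS of the address at the bottom:

    static inline long demo_encode(unsigned long addr, size_t size, bool is_write)
    {
            return (long)((is_write ? WATCHPOINT_WRITE_MASK : 0) |
                          (size << WATCHPOINT_ADDR_BITS) |          /* size field */
                          (addr & WATCHPOINT_ADDR_MASK));           /* low address bits */
    }
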
3 changes: 3 additions & 0 deletions kernel/kcsan/selftest.c
@@ -33,6 +33,9 @@ static bool test_encode_decode(void)
unsigned long addr;

prandom_bytes(&addr, sizeof(addr));
if (addr < PAGE_SIZE)
addr = PAGE_SIZE;

if (WARN_ON(!check_encodable(addr, size)))
return false;

(diffs for the remaining changed files are not shown here)
