Merge branches 'cpuinfo.2020.11.06a', 'doc.2020.11.06a', 'fixes.2020.11.19b', 'lockdep.2020.11.02a', 'tasks.2020.11.06a' and 'torture.2020.11.06a' into HEAD

cpuinfo.2020.11.06a: Speedups for /proc/cpuinfo.
doc.2020.11.06a: Documentation updates.
fixes.2020.11.19b: Miscellaneous fixes.
lockdep.2020.11.02a: Lockdep-RCU updates to avoid "unused variable".
tasks.2020.11.06a: Tasks-RCU updates.
torture.2020.11.06a: Torture-test updates.
Paul E. McKenney committed Nov 20, 2020
6 parents 3fcd6a2 + c386e29 + 50edb98 + 65e9eb1 + 75dc2da + 01f9e70 commit 7fc91fc
Showing 45 changed files with 541 additions and 217 deletions.
50 changes: 40 additions & 10 deletions Documentation/RCU/Design/Requirements/Requirements.rst
@@ -1929,16 +1929,46 @@ The Linux-kernel CPU-hotplug implementation has notifiers that are used
to allow the various kernel subsystems (including RCU) to respond
appropriately to a given CPU-hotplug operation. Most RCU operations may
be invoked from CPU-hotplug notifiers, including even synchronous
-grace-period operations such as ``synchronize_rcu()`` and
-``synchronize_rcu_expedited()``.
-
-However, all-callback-wait operations such as ``rcu_barrier()`` are also
-not supported, due to the fact that there are phases of CPU-hotplug
-operations where the outgoing CPU's callbacks will not be invoked until
-after the CPU-hotplug operation ends, which could also result in
-deadlock. Furthermore, ``rcu_barrier()`` blocks CPU-hotplug operations
-during its execution, which results in another type of deadlock when
-invoked from a CPU-hotplug notifier.
+grace-period operations such as (``synchronize_rcu()`` and
+``synchronize_rcu_expedited()``). However, these synchronous operations
+do block and therefore cannot be invoked from notifiers that execute via
+``stop_machine()``, specifically those between the ``CPUHP_AP_OFFLINE``
+and ``CPUHP_AP_ONLINE`` states.
+
+In addition, all-callback-wait operations such as ``rcu_barrier()`` may
+not be invoked from any CPU-hotplug notifier. This restriction is due
+to the fact that there are phases of CPU-hotplug operations where the
+outgoing CPU's callbacks will not be invoked until after the CPU-hotplug
+operation ends, which could also result in deadlock. Furthermore,
+``rcu_barrier()`` blocks CPU-hotplug operations during its execution,
+which results in another type of deadlock when invoked from a CPU-hotplug
+notifier.
+
+Finally, RCU must avoid deadlocks due to interaction between hotplug,
+timers and grace period processing. It does so by maintaining its own set
+of books that duplicate the centrally maintained ``cpu_online_mask``,
+and also by reporting quiescent states explicitly when a CPU goes
+offline. This explicit reporting of quiescent states avoids any need
+for the force-quiescent-state loop (FQS) to report quiescent states for
+offline CPUs. However, as a debugging measure, the FQS loop does splat
+if offline CPUs block an RCU grace period for too long.
+
+An offline CPU's quiescent state will be reported either:
+
+1. As the CPU goes offline using RCU's hotplug notifier (``rcu_report_dead()``).
+2. When grace period initialization (``rcu_gp_init()``) detects a
+   race either with CPU offlining or with a task unblocking on a leaf
+   ``rcu_node`` structure whose CPUs are all offline.
+
+The CPU-online path (``rcu_cpu_starting()``) should never need to report
+a quiescent state for an offline CPU. However, as a debugging measure,
+it does emit a warning if a quiescent state was not already reported
+for that CPU.
+
+During the checking/modification of RCU's hotplug bookkeeping, the
+corresponding CPU's leaf node lock is held. This avoids race conditions
+between RCU's hotplug notifier hooks, the grace period initialization
+code, and the FQS loop, all of which refer to or modify this bookkeeping.

Scheduler and RCU
~~~~~~~~~~~~~~~~~
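To make the notifier rules above concrete, here is a minimal sketch of a CPU-hotplug callback. The state choice, names, and error handling are illustrative assumptions, but synchronize_rcu(), rcu_barrier(), and cpuhp_setup_state() are the real interfaces under discussion::

    #include <linux/cpu.h>
    #include <linux/cpuhotplug.h>
    #include <linux/rcupdate.h>

    /* Hypothetical teardown hook for a dynamically allocated PREPARE state. */
    static int example_dead(unsigned int cpu)
    {
        /*
         * Legal: PREPARE-stage notifiers run in process context rather
         * than within stop_machine(), so blocking grace-period waits
         * such as synchronize_rcu() are permitted here.
         */
        synchronize_rcu();

        /*
         * Illegal in *any* CPU-hotplug notifier: the outgoing CPU's
         * callbacks might not be invoked until the hotplug operation
         * ends, and rcu_barrier() itself blocks hotplug, so either
         * ordering can deadlock.
         *
         * rcu_barrier();
         */
        return 0;
    }

    static int __init example_init(void)
    {
        int ret;

        /* CPUHP_BP_PREPARE_DYN requests a dynamic PREPARE-stage state. */
        ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "example:prepare",
                                NULL, example_dead);
        return ret < 0 ? ret : 0;
    }

Note that cpuhp_setup_state() returns a positive state number for dynamic states, hence the explicit mapping of success to zero.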
7 changes: 7 additions & 0 deletions Documentation/RCU/checklist.rst
@@ -314,6 +314,13 @@ over a rather long period of time, but improvements are always welcome!
shared between readers and updaters. Additional primitives
are provided for this case, as discussed in lockdep.txt.

+One exception to this rule is when data is only ever added to
+the linked data structure, and is never removed during any
+time that readers might be accessing that structure. In such
+cases, READ_ONCE() may be used in place of rcu_dereference()
+and the read-side markers (rcu_read_lock() and rcu_read_unlock(),
+for example) may be omitted.
+
10. Conversely, if you are in an RCU read-side critical section,
and you don't hold the appropriate update-side lock, you -must-
use the "_rcu()" variants of the list macros. Failing to do so
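A sketch of the exception being added, with invented names: for add-only data the dependency ordering supplied by READ_ONCE() suffices, so neither rcu_dereference() nor the read-side markers are required. Updaters are assumed to be serialized by some lock, and smp_store_release() still orders initialization before publication::

    #include <linux/rcupdate.h>

    struct node {
        int key;
        struct node *next;
    };

    static struct node *head;  /* updates assumed serialized by a lock */

    /* Publisher: initialize the node first, then publish with release order. */
    static void add_node(struct node *n, int key)
    {
        n->key = key;
        n->next = head;
        smp_store_release(&head, n);  /* pairs with READ_ONCE() below */
    }

    /* Reader: no rcu_read_lock() needed, because nodes are never freed. */
    static struct node *find_node(int key)
    {
        struct node *p;

        for (p = READ_ONCE(head); p; p = READ_ONCE(p->next))
            if (p->key == key)
                return p;
        return NULL;
    }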
6 changes: 6 additions & 0 deletions Documentation/RCU/rcu_dereference.rst
@@ -28,6 +28,12 @@ Follow these rules to keep your RCU code working properly:
for an example where the compiler can in fact deduce the exact
value of the pointer, and thus cause misordering.

+- In the special case where data is added but is never removed
+  while readers are accessing the structure, READ_ONCE() may be used
+  instead of rcu_dereference(). In this case, use of READ_ONCE()
+  takes on the role of the lockless_dereference() primitive that
+  was removed in v4.15.
+
- You are only permitted to use rcu_dereference on pointer values.
The compiler simply knows too much about integral values to
trust it to carry dependencies through integer operations.
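For readers who remember the older primitive, the substitution described above is mechanical. A hypothetical before-and-after using a global pointer gp::

    /* Before v4.15, for add-only data structures: */
    p = lockless_dereference(gp);

    /* v4.15 and later, with the same dependency-ordering guarantee: */
    p = READ_ONCE(gp);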
3 changes: 1 addition & 2 deletions Documentation/RCU/whatisRCU.rst
@@ -497,8 +497,7 @@ long -- there might be other high-priority work to be done.
In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

-void call_rcu(struct rcu_head * head,
-              void (*func)(struct rcu_head *head));
+void call_rcu(struct rcu_head *head, rcu_callback_t func);

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
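A minimal usage sketch of this API; struct foo and the function names are invented for illustration, while call_rcu(), rcu_head, container_of(), and kfree() are the real interfaces::

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo {
        int data;
        struct rcu_head rcu;  /* embedded for call_rcu() */
    };

    static void foo_reclaim(struct rcu_head *head)
    {
        /* Runs after a grace period; all pre-existing readers are done. */
        struct foo *fp = container_of(head, struct foo, rcu);

        kfree(fp);
    }

    /* Caller has already unlinked fp so no new readers can find it. */
    static void foo_defer_free(struct foo *fp)
    {
        call_rcu(&fp->rcu, foo_reclaim);  /* does not block */
    }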
2 changes: 0 additions & 2 deletions arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -794,8 +794,6 @@ void mtrr_ap_init(void)
if (!use_intel() || mtrr_aps_delayed_init)
return;

-rcu_cpu_starting(smp_processor_id());
-
/*
* Ideally we should hold mtrr_mutex here to avoid mtrr entries
* changed, but this routine will be called in cpu boot time,
1 change: 1 addition & 0 deletions arch/x86/kernel/smpboot.c
@@ -229,6 +229,7 @@ static void notrace start_secondary(void *unused)
#endif
cpu_init_exception_handling();
cpu_init();
+rcu_cpu_starting(raw_smp_processor_id());
x86_cpuinit.early_percpu_clock_init();
preempt_disable();
smp_callin();
1 change: 1 addition & 0 deletions include/linux/kernel.h
@@ -536,6 +536,7 @@ extern int panic_on_warn;
extern unsigned long panic_on_taint;
extern bool panic_on_taint_nousertaint;
extern int sysctl_panic_on_rcu_stall;
+extern int sysctl_max_rcu_stall_to_panic;
extern int sysctl_panic_on_stackoverflow;

extern bool crash_kexec_post_notifiers;
2 changes: 1 addition & 1 deletion include/linux/list.h
@@ -9,7 +9,7 @@
#include <linux/kernel.h>

/*
-* Simple doubly linked list implementation.
+* Circular doubly linked list implementation.
*
* Some of the internal functions ("__xxx") are useful when
* manipulating whole lists rather than single entries, as
6 changes: 6 additions & 0 deletions include/linux/lockdep.h
@@ -375,6 +375,12 @@ static inline void lockdep_unregister_key(struct lock_class_key *key)

#define lockdep_depth(tsk) (0)

+/*
+ * Dummy forward declarations, allow users to write less ifdef-y code
+ * and depend on dead code elimination.
+ */
+extern int lock_is_held(const void *);
+extern int lockdep_is_held(const void *);
#define lockdep_is_held_type(l, r) (1)

#define lockdep_assert_held(l) do { (void)(l); } while (0)
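The effect of these dummy declarations is that lockdep checks no longer need #ifdef guards at their call sites. A sketch under the assumption of a CONFIG_LOCKDEP=n build; the function and lock names are hypothetical::

    #include <linux/bug.h>
    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(mylock);

    static void assert_mylock_held(void)
    {
        /*
         * IS_ENABLED(CONFIG_LOCKDEP) is constant 0 here, so the
         * compiler type-checks the lockdep_is_held() call but then
         * dead-code-eliminates it: the undefined extern is never
         * referenced at link time, and no #ifdef is needed.
         */
        WARN_ON_ONCE(IS_ENABLED(CONFIG_LOCKDEP) &&
                     !lockdep_is_held(&mylock));
    }

The #ifdef removals in sched/task.h, sch_generic.h, and sock.h later in this commit are exactly this pattern being cashed in.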
11 changes: 6 additions & 5 deletions include/linux/rcupdate.h
@@ -241,6 +241,11 @@ bool rcu_lockdep_current_cpu_online(void);
static inline bool rcu_lockdep_current_cpu_online(void) { return true; }
#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */

+extern struct lockdep_map rcu_lock_map;
+extern struct lockdep_map rcu_bh_lock_map;
+extern struct lockdep_map rcu_sched_lock_map;
+extern struct lockdep_map rcu_callback_map;
+
#ifdef CONFIG_DEBUG_LOCK_ALLOC

static inline void rcu_lock_acquire(struct lockdep_map *map)
@@ -253,10 +258,6 @@ static inline void rcu_lock_release(struct lockdep_map *map)
lock_release(map, _THIS_IP_);
}

-extern struct lockdep_map rcu_lock_map;
-extern struct lockdep_map rcu_bh_lock_map;
-extern struct lockdep_map rcu_sched_lock_map;
-extern struct lockdep_map rcu_callback_map;
int debug_lockdep_rcu_enabled(void);
int rcu_read_lock_held(void);
int rcu_read_lock_bh_held(void);
@@ -327,7 +328,7 @@ static inline void rcu_preempt_sleep_check(void) { }

#else /* #ifdef CONFIG_PROVE_RCU */

-#define RCU_LOCKDEP_WARN(c, s) do { } while (0)
+#define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c))
#define rcu_sleep_check() do { } while (0)

#endif /* #else #ifdef CONFIG_PROVE_RCU */
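The new "0 && (c)" idiom is the companion to the lockdep change above, and is what the 'lockdep.2020.11.02a' branch description means by avoiding "unused variable": with CONFIG_PROVE_RCU=n the condition is still compiled, but never evaluated. A rough sketch with a hypothetical lock::

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(mylock);

    static void example(void)
    {
        /*
         * With CONFIG_PROVE_RCU=n, a line such as
         *
         *     RCU_LOCKDEP_WARN(!lockdep_is_held(&mylock), "need mylock");
         *
         * now expands to roughly:
         */
        do { } while (0 && (!lockdep_is_held(&mylock)));
        /*
         * The controlling expression is constant-false, so the compiler
         * discards the call after type-checking it: lockdep_is_held()
         * and mylock count as used for warning purposes, yet nothing
         * executes at run time.
         */
    }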
4 changes: 2 additions & 2 deletions include/linux/rcupdate_trace.h
@@ -11,10 +11,10 @@
#include <linux/sched.h>
#include <linux/rcupdate.h>

-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-
extern struct lockdep_map rcu_trace_lock_map;

+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+
static inline int rcu_read_lock_trace_held(void)
{
return lock_is_held(&rcu_trace_lock_map);
2 changes: 0 additions & 2 deletions include/linux/sched/task.h
@@ -47,9 +47,7 @@ extern spinlock_t mmlist_lock;
extern union thread_union init_thread_union;
extern struct task_struct init_task;

-#ifdef CONFIG_PROVE_RCU
extern int lockdep_tasklist_lock_is_held(void);
-#endif /* #ifdef CONFIG_PROVE_RCU */

extern asmlinkage void schedule_tail(struct task_struct *prev);
extern void init_idle(struct task_struct *idle, int cpu);
12 changes: 0 additions & 12 deletions include/net/sch_generic.h
@@ -435,7 +435,6 @@ struct tcf_block {
struct mutex proto_destroy_lock; /* Lock for proto_destroy hashtable. */
};

-#ifdef CONFIG_PROVE_LOCKING
static inline bool lockdep_tcf_chain_is_locked(struct tcf_chain *chain)
{
return lockdep_is_held(&chain->filter_chain_lock);
@@ -445,17 +444,6 @@ static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
{
return lockdep_is_held(&tp->lock);
}
-#else
-static inline bool lockdep_tcf_chain_is_locked(struct tcf_block *chain)
-{
-    return true;
-}
-
-static inline bool lockdep_tcf_proto_is_locked(struct tcf_proto *tp)
-{
-    return true;
-}
-#endif /* #ifdef CONFIG_PROVE_LOCKING */

#define tcf_chain_dereference(p, chain) \
rcu_dereference_protected(p, lockdep_tcf_chain_is_locked(chain))
2 changes: 0 additions & 2 deletions include/net/sock.h
@@ -1566,13 +1566,11 @@ do { \
lockdep_init_map(&(sk)->sk_lock.dep_map, (name), (key), 0); \
} while (0)

-#ifdef CONFIG_LOCKDEP
static inline bool lockdep_sock_is_held(const struct sock *sk)
{
return lockdep_is_held(&sk->sk_lock) ||
lockdep_is_held(&sk->sk_lock.slock);
}
-#endif

void lock_sock_nested(struct sock *sk, int subclass);

36 changes: 30 additions & 6 deletions kernel/locking/locktorture.c
@@ -29,6 +29,7 @@
#include <linux/slab.h>
#include <linux/percpu-rwsem.h>
#include <linux/torture.h>
+#include <linux/reboot.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
@@ -60,6 +61,7 @@ static struct task_struct **reader_tasks;

static bool lock_is_write_held;
static bool lock_is_read_held;
+static unsigned long last_lock_release;

struct lock_stress_stats {
long n_lock_fail;
@@ -74,6 +76,7 @@ static void lock_torture_cleanup(void);
*/
struct lock_torture_ops {
void (*init)(void);
+void (*exit)(void);
int (*writelock)(void);
void (*write_delay)(struct torture_random_state *trsp);
void (*task_boost)(struct torture_random_state *trsp);
@@ -90,12 +93,13 @@ struct lock_torture_cxt {
int nrealwriters_stress;
int nrealreaders_stress;
bool debug_lock;
+bool init_called;
atomic_t n_lock_torture_errors;
struct lock_torture_ops *cur_ops;
struct lock_stress_stats *lwsa; /* writer statistics */
struct lock_stress_stats *lrsa; /* reader statistics */
};
-static struct lock_torture_cxt cxt = { 0, 0, false,
+static struct lock_torture_cxt cxt = { 0, 0, false, false,
ATOMIC_INIT(0),
NULL, NULL};
/*
@@ -571,6 +575,11 @@ static void torture_percpu_rwsem_init(void)
BUG_ON(percpu_init_rwsem(&pcpu_rwsem));
}

+static void torture_percpu_rwsem_exit(void)
+{
+    percpu_free_rwsem(&pcpu_rwsem);
+}
+
static int torture_percpu_rwsem_down_write(void) __acquires(pcpu_rwsem)
{
percpu_down_write(&pcpu_rwsem);
@@ -595,6 +604,7 @@ static void torture_percpu_rwsem_up_read(void) __releases(pcpu_rwsem)

static struct lock_torture_ops percpu_rwsem_lock_ops = {
.init = torture_percpu_rwsem_init,
+.exit = torture_percpu_rwsem_exit,
.writelock = torture_percpu_rwsem_down_write,
.write_delay = torture_rwsem_write_delay,
.task_boost = torture_boost_dummy,
@@ -632,6 +642,7 @@ static int lock_torture_writer(void *arg)
lwsp->n_lock_acquired++;
cxt.cur_ops->write_delay(&rand);
lock_is_write_held = false;
+WRITE_ONCE(last_lock_release, jiffies);
cxt.cur_ops->writeunlock();

stutter_wait("lock_torture_writer");
@@ -786,9 +797,10 @@ static void lock_torture_cleanup(void)

/*
* Indicates early cleanup, meaning that the test has not run,
-* such as when passing bogus args when loading the module. As
-* such, only perform the underlying torture-specific cleanups,
-* and avoid anything related to locktorture.
+* such as when passing bogus args when loading the module.
+* However, cxt->cur_ops.init() may have been invoked, so besides
+* performing the underlying torture-specific cleanups, cur_ops.exit()
+* will be invoked if needed.
*/
if (!cxt.lwsa && !cxt.lrsa)
goto end;
@@ -828,6 +840,11 @@ static void lock_torture_cleanup(void)
cxt.lrsa = NULL;

end:
+if (cxt.init_called) {
+    if (cxt.cur_ops->exit)
+        cxt.cur_ops->exit();
+    cxt.init_called = false;
+}
torture_cleanup_end();
}

@@ -868,14 +885,17 @@ static int __init lock_torture_init(void)
goto unwind;
}

-if (nwriters_stress == 0 && nreaders_stress == 0) {
+if (nwriters_stress == 0 &&
+    (!cxt.cur_ops->readlock || nreaders_stress == 0)) {
pr_alert("lock-torture: must run at least one locking thread\n");
firsterr = -EINVAL;
goto unwind;
}

-if (cxt.cur_ops->init)
+if (cxt.cur_ops->init) {
cxt.cur_ops->init();
+    cxt.init_called = true;
+}

if (nwriters_stress >= 0)
cxt.nrealwriters_stress = nwriters_stress;
@@ -1038,6 +1058,10 @@ static int __init lock_torture_init(void)
unwind:
torture_init_end();
lock_torture_cleanup();
+if (shutdown_secs) {
+    WARN_ON(!IS_MODULE(CONFIG_LOCK_TORTURE_TEST));
+    kernel_power_off();
+}
return firsterr;
}

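The init()/exit() pairing added above matters mostly when the module is loaded with bad arguments and must unwind without having run. A plausible way to exercise these paths; the parameter values are arbitrary, while torture_type, nwriters_stress, nreaders_stress, and shutdown_secs are existing locktorture module parameters::

    modprobe locktorture torture_type=percpu_rwsem_lock \
        nwriters_stress=2 nreaders_stress=4 shutdown_secs=60

With a nonzero shutdown_secs, the new unwind code also powers the machine off when initialization fails, so scripted torture runs terminate instead of hanging.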
20 changes: 12 additions & 8 deletions kernel/rcu/Kconfig
@@ -221,19 +221,23 @@ config RCU_NOCB_CPU
Use this option to reduce OS jitter for aggressive HPC or
real-time workloads. It can also be used to offload RCU
callback invocation to energy-efficient CPUs in battery-powered
-asymmetric multiprocessors.
+asymmetric multiprocessors. The price of this reduced jitter
+is that the overhead of call_rcu() increases and that some
+workloads will incur significant increases in context-switch
+rates.

This option offloads callback invocation from the set of CPUs
specified at boot time by the rcu_nocbs parameter. For each
such CPU, a kthread ("rcuox/N") will be created to invoke
callbacks, where the "N" is the CPU being offloaded, and where
the "p" for RCU-preempt (PREEMPTION kernels) and "s" for RCU-sched
(!PREEMPTION kernels). Nothing prevents this kthread from running
on the specified CPUs, but (1) the kthreads may be preempted
between each callback, and (2) affinity or cgroups can be used
to force the kthreads to run on whatever set of CPUs is desired.

Say Y here if you want to help to debug reduced OS jitter.
the "x" is "p" for RCU-preempt (PREEMPTION kernels) and "s" for
RCU-sched (!PREEMPTION kernels). Nothing prevents this kthread
from running on the specified CPUs, but (1) the kthreads may be
preempted between each callback, and (2) affinity or cgroups can
be used to force the kthreads to run on whatever set of CPUs is
desired.

Say Y here if you need reduced OS jitter, despite added overhead.
Say N here if you are unsure.

config TASKS_TRACE_RCU_READ_MB
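Offloading is selected per-CPU at boot. A plausible kernel command-line fragment; the CPU range is arbitrary, rcu_nocbs is the boot parameter named in the help text above, and CONFIG_RCU_NOCB_CPU=y is assumed::

    rcu_nocbs=1-7

Callbacks for CPUs 1-7 are then invoked by the corresponding rcuox/N kthreads, which affinity settings or cgroups can confine to housekeeping CPUs such as CPU 0.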