Merge branch 'linus' into perf/urgent, to synchronize with upstream
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar committed Feb 5, 2020
2 parents f1ec3a5 + b3a6082 commit fdff7c2
Showing 3,382 changed files with 218,565 additions and 60,376 deletions.
5 changes: 5 additions & 0 deletions .mailmap
@@ -139,6 +139,7 @@ Juha Yrjola <at solidboot.com>
Juha Yrjola <juha.yrjola@nokia.com>
Juha Yrjola <juha.yrjola@solidboot.com>
Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
Kamil Konieczny <k.konieczny@samsung.com> <k.konieczny@partner.samsung.com>
Kay Sievers <kay.sievers@vrfy.org>
Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
@@ -210,6 +211,10 @@ Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Patrick Mochel <mochel@digitalimplant.org>
Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.ibm.com>
Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.vnet.ibm.com>
Paul E. McKenney <paulmck@kernel.org> <paul.mckenney@linaro.org>
Paul E. McKenney <paulmck@kernel.org> <paulmck@us.ibm.com>
Peter A Jonsson <pj@ludd.ltu.se>
Peter Oruba <peter@oruba.de>
Peter Oruba <peter.oruba@amd.com>
16 changes: 14 additions & 2 deletions Documentation/ABI/testing/ima_policy
@@ -25,11 +25,11 @@ Description:
lsm: [[subj_user=] [subj_role=] [subj_type=]
[obj_user=] [obj_role=] [obj_type=]]
option: [[appraise_type=]] [template=] [permit_directio]
[appraise_flag=]
[appraise_flag=] [keyrings=]
base: func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
[FIRMWARE_CHECK]
[KEXEC_KERNEL_CHECK] [KEXEC_INITRAMFS_CHECK]
[KEXEC_CMDLINE]
[KEXEC_CMDLINE] [KEY_CHECK]
mask:= [[^]MAY_READ] [[^]MAY_WRITE] [[^]MAY_APPEND]
[[^]MAY_EXEC]
fsmagic:= hex value
@@ -42,6 +42,9 @@ Description:
appraise_flag:= [check_blacklist]
Currently, blacklist check is only for files signed with appended
signature.
keyrings:= list of keyrings
(eg, .builtin_trusted_keys|.ima). Only valid
when action is "measure" and func is KEY_CHECK.
template:= name of a defined IMA template type
(eg, ima-ng). Only valid when action is "measure".
pcr:= decimal value
@@ -113,3 +116,12 @@ Description:
Example of appraise rule allowing modsig appended signatures:

appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig|modsig

Example of measure rule using KEY_CHECK to measure all keys:

measure func=KEY_CHECK

Example of measure rule using KEY_CHECK to only measure
keys added to .builtin_trusted_keys or .ima keyring:

measure func=KEY_CHECK keyrings=.builtin_trusted_keys|.ima
63 changes: 63 additions & 0 deletions Documentation/ABI/testing/sysfs-bus-mdio
@@ -0,0 +1,63 @@
What: /sys/bus/mdio_bus/devices/.../statistics/
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
This folder contains the global statistics for this MDIO bus
as well as the per-bus-address statistics.

What: /sys/bus/mdio_bus/devices/.../statistics/transfers
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of transfers for this MDIO bus.

What: /sys/bus/mdio_bus/devices/.../statistics/errors
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of transfer errors for this MDIO bus.

What: /sys/bus/mdio_bus/devices/.../statistics/writes
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of write transactions for this MDIO bus.

What: /sys/bus/mdio_bus/devices/.../statistics/reads
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of read transactions for this MDIO bus.

What: /sys/bus/mdio_bus/devices/.../statistics/transfers_<addr>
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of transfers for this MDIO bus address.

What: /sys/bus/mdio_bus/devices/.../statistics/errors_<addr>
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of transfer errors for this MDIO bus address.

What: /sys/bus/mdio_bus/devices/.../statistics/writes_<addr>
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of write transactions for this MDIO bus address.

What: /sys/bus/mdio_bus/devices/.../statistics/reads_<addr>
Date: January 2020
KernelVersion: 5.6
Contact: netdev@vger.kernel.org
Description:
Total number of read transactions for this MDIO bus address.
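
As a quick illustration of how these counters might be consumed, a minimal
userspace C sketch follows; the device name "example-mdio-bus" in the path is
hypothetical — real bus and PHY names appear under /sys/bus/mdio_bus/devices/::

    /*
     * Minimal sketch: read the global "transfers" counter of one MDIO bus.
     * "example-mdio-bus" is a placeholder device name.
     */
    #include <stdio.h>

    int main(void)
    {
            const char *path =
                    "/sys/bus/mdio_bus/devices/example-mdio-bus/statistics/transfers";
            unsigned long long transfers;
            FILE *f = fopen(path, "r");

            if (!f) {
                    perror(path);
                    return 1;
            }
            if (fscanf(f, "%llu", &transfers) == 1)
                    printf("MDIO transfers: %llu\n", transfers);
            fclose(f);
            return 0;
    }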
53 changes: 28 additions & 25 deletions Documentation/RCU/NMI-RCU.txt → Documentation/RCU/NMI-RCU.rst
@@ -1,4 +1,7 @@
.. _NMI_rcu_doc:

Using RCU to Protect Dynamic NMI Handlers
=========================================


Although RCU is usually used to protect read-mostly data structures,
@@ -9,7 +12,7 @@ work in "arch/x86/oprofile/nmi_timer_int.c" and in
"arch/x86/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation.
brief explanation::

static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
{
@@ -18,12 +21,12 @@ brief explanation.

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action.
the NMI handler to take the default machine-specific action::

static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler.
NMI handler::

void do_nmi(struct pt_regs * regs, long error_code)
{
@@ -53,11 +56,12 @@ anyway. However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.

Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha,
given that the code referenced by the pointer is read-only?
Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`

Back to the discussion of NMI and RCU...
Back to the discussion of NMI and RCU::

void set_nmi_callback(nmi_callback_t callback)
{
@@ -68,7 +72,7 @@ The set_nmi_callback() function registers an NMI handler. Note that any
data that is to be used by the callback must be initialized up -before-
the call to set_nmi_callback(). On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values.
initialized values::

void unset_nmi_callback(void)
{
@@ -82,7 +86,7 @@ up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_rcu(), perhaps as
follows:
follows::

unset_nmi_callback();
synchronize_rcu();
@@ -98,24 +102,23 @@ to free up the handler's data as soon as synchronize_rcu() returns.
Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.

.. _answer_quick_quiz_NMI:

Answer to Quick Quiz

Why might the rcu_dereference_sched() be necessary on Alpha, given
that the code referenced by the pointer is read-only?
Answer to Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

Answer: The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.
The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.

This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.
This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.

More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.
More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.
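
Because the diff viewer collapses most of the code listings above, the pattern
they describe is summarized below as a kernel-style sketch (not a standalone
program). It uses only the calls named in the document — rcu_assign_pointer(),
rcu_dereference_sched(), nmi_enter()/nmi_exit() and synchronize_rcu(); the
handler-data names are hypothetical::

    /* Kernel-style sketch of the NMI-callback pattern described above. */
    static nmi_callback_t nmi_callback = dummy_nmi_callback;

    void do_nmi(struct pt_regs *regs, long error_code)
    {
            int cpu;

            nmi_enter();
            cpu = smp_processor_id();
            /* The RCU-protected function pointer is read via
             * rcu_dereference_sched(). */
            if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
                    default_do_nmi(regs);
            nmi_exit();
    }

    void set_nmi_callback(nmi_callback_t callback)
    {
            /* Initialize the handler's data -before- publishing the pointer;
             * rcu_assign_pointer() orders the writes on weakly ordered CPUs. */
            rcu_assign_pointer(nmi_callback, callback);
    }

    void unset_nmi_callback(void)
    {
            rcu_assign_pointer(nmi_callback, dummy_nmi_callback);
    }

    /* Tear-down: wait for in-flight NMIs before freeing handler data. */
    void remove_my_handler(void)               /* hypothetical */
    {
            unset_nmi_callback();
            synchronize_rcu();
            kfree(my_handler_data);            /* hypothetical handler data */
    }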
34 changes: 23 additions & 11 deletions Documentation/RCU/arrayRCU.txt → Documentation/RCU/arrayRCU.rst
@@ -1,19 +1,21 @@
Using RCU to Protect Read-Mostly Arrays
.. _array_rcu_doc:

Using RCU to Protect Read-Mostly Arrays
=======================================

Although RCU is more commonly used to protect linked lists, it can
also be used to protect arrays. Three situations are as follows:

1. Hash Tables
1. :ref:`Hash Tables <hash_tables>`

2. Static Arrays
2. :ref:`Static Arrays <static_arrays>`

3. Resizeable Arrays
3. :ref:`Resizable Arrays <resizable_arrays>`

Each of these three situations involves an RCU-protected pointer to an
array that is separately indexed. It might be tempting to consider use
of RCU to instead protect the index into an array, however, this use
case is -not- supported. The problem with RCU-protected indexes into
case is **not** supported. The problem with RCU-protected indexes into
arrays is that compilers can play way too many optimization games with
integers, which means that the rules governing handling of these indexes
are far more trouble than they are worth. If RCU-protected indexes into
@@ -24,30 +26,38 @@ to be safely used.
That aside, each of the three RCU-protected pointer situations are
described in the following sections.

.. _hash_tables:

Situation 1: Hash Tables
------------------------

Hash tables are often implemented as an array, where each array entry
has a linked-list hash chain. Each hash chain can be protected by RCU
as described in the listRCU.txt document. This approach also applies
to other array-of-list situations, such as radix trees.

.. _static_arrays:

Situation 2: Static Arrays
--------------------------

Static arrays, where the data (rather than a pointer to the data) is
located in each array element, and where the array is never resized,
have not been used with RCU. Rik van Riel recommends using seqlock in
this situation, which would also have minimal read-side overhead as long
as updates are rare.

Quick Quiz: Why is it so important that updates be rare when
using seqlock?
Quick Quiz:
Why is it so important that updates be rare when using seqlock?

:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`

.. _resizable_arrays:

Situation 3: Resizeable Arrays
Situation 3: Resizable Arrays
------------------------------

Use of RCU for resizeable arrays is demonstrated by the grow_ary()
Use of RCU for resizable arrays is demonstrated by the grow_ary()
function formerly used by the System V IPC code. The array is used
to map from semaphore, message-queue, and shared-memory IDs to the data
structure that represents the corresponding IPC construct. The grow_ary()
@@ -60,7 +70,7 @@ the remainder of the new, updates the ids->entries pointer to point to
the new array, and invokes ipc_rcu_putref() to free up the old array.
Note that rcu_assign_pointer() is used to update the ids->entries pointer,
which includes any memory barriers required on whatever architecture
you are running on.
you are running on::

static int grow_ary(struct ipc_ids* ids, int newsize)
{
@@ -112,7 +122,7 @@ a simple check suffices. The pointer to the structure corresponding
to the desired IPC object is placed in "out", with NULL indicating
a non-existent entry. After acquiring "out->lock", the "out->deleted"
flag indicates whether the IPC object is in the process of being
deleted, and, if not, the pointer is returned.
deleted, and, if not, the pointer is returned::

struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
{
@@ -144,8 +154,10 @@ deleted, and, if not, the pointer is returned.
return out;
}

.. _answer_quick_quiz_seqlock:

Answer to Quick Quiz:
Why is it so important that updates be rare when using seqlock?

The reason that it is important that updates be rare when
using seqlock is that frequent updates can livelock readers.
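
The grow_ary() and ipc_lock() listings are likewise collapsed by the viewer, so
a minimal kernel-style sketch of the Situation 3 pattern follows: readers index
the current array under rcu_read_lock(), while the updater publishes a larger
copy with rcu_assign_pointer() and frees the old array only after a grace
period. All names here (my_table, my_lookup(), my_resize()) are hypothetical,
and the sketch assumes the table is already initialized and is never shrunk::

    /* Hypothetical RCU-protected resizable array (kernel style). */
    struct my_entry;

    struct my_table {
            int size;
            struct my_entry *entries[];     /* flexible array of pointers */
    };

    static struct my_table __rcu *table;
    static DEFINE_MUTEX(table_lock);        /* serializes updaters */

    /* Reader: index the current array under rcu_read_lock(). A real
     * implementation must pin the returned entry (per-entry lock or
     * reference count) before rcu_read_unlock(), as ipc_lock() does. */
    struct my_entry *my_lookup(int id)
    {
            struct my_table *t;
            struct my_entry *e = NULL;

            rcu_read_lock();
            t = rcu_dereference(table);
            if (id >= 0 && id < t->size)
                    e = t->entries[id];
            rcu_read_unlock();
            return e;
    }

    /* Updater: allocate a larger array, copy, publish, then free the old
     * array only after all pre-existing readers have finished. */
    int my_resize(int newsize)
    {
            struct my_table *old, *new;

            new = kzalloc(struct_size(new, entries, newsize), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;
            new->size = newsize;

            mutex_lock(&table_lock);
            old = rcu_dereference_protected(table,
                                            lockdep_is_held(&table_lock));
            memcpy(new->entries, old->entries,
                   old->size * sizeof(old->entries[0]));
            rcu_assign_pointer(table, new); /* includes needed barriers */
            mutex_unlock(&table_lock);

            synchronize_rcu();              /* wait for pre-existing readers */
            kfree(old);
            return 0;
    }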
5 changes: 5 additions & 0 deletions Documentation/RCU/index.rst
@@ -7,8 +7,13 @@ RCU concepts
.. toctree::
:maxdepth: 3

arrayRCU
rcubarrier
rcu_dereference
whatisRCU
rcu
listRCU
NMI-RCU
UP

Design/Memory-Ordering/Tree-RCU-Memory-Ordering
2 changes: 1 addition & 1 deletion Documentation/RCU/lockdep-splat.txt
@@ -99,7 +99,7 @@ With this change, the rcu_dereference() is always within an RCU
read-side critical section, which again would have suppressed the
above lockdep-RCU splat.

But in this particular case, we don't actually deference the pointer
But in this particular case, we don't actually dereference the pointer
returned from rcu_dereference(). Instead, that pointer is just compared
to the cic pointer, which means that the rcu_dereference() can be replaced
by rcu_access_pointer() as follows:
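
The in-tree listing that follows is collapsed by the viewer, so here is a short
hypothetical fragment of the substitution described: because the RCU-protected
pointer is only compared against cic and never dereferenced, rcu_access_pointer()
suffices and needs no surrounding rcu_read_lock()/rcu_read_unlock(). The field
name cic_hint is illustrative, not the in-tree one::

    /* Hypothetical fragment: the pointer is compared, never dereferenced,
     * so rcu_access_pointer() replaces rcu_dereference() here. */
    if (rcu_access_pointer(ioc->cic_hint) == cic)
            rcu_assign_pointer(ioc->cic_hint, NULL);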