
Commit

Merge branch 'mm-nonmm-unstable' into mm-everything
Andrew Morton committed Feb 16, 2023
2 parents 440b18f + 7698bb9 · commit fba720c
Showing 70 changed files with 1,805 additions and 322 deletions.
6 changes: 3 additions & 3 deletions CREDITS
@@ -1848,11 +1848,11 @@ E: ajoshi@shell.unixbox.com
 D: fbdev hacking
 
 N: Jesper Juhl
-E: jj@chaosbits.net
+E: jesperjuhl76@gmail.com
 D: Various fixes, cleanups and minor features all over the tree.
 D: Wrote initial version of the hdaps driver (since passed on to others).
-S: Lemnosvej 1, 3.tv
-S: 2300 Copenhagen S.
+S: Titangade 5G, 2.tv
+S: 2200 Copenhagen N.
 S: Denmark
 
 N: Jozsef Kadlecsik
14 changes: 7 additions & 7 deletions Documentation/accounting/delay-accounting.rst
@@ -109,17 +109,17 @@ Get sum of delays, since system boot, for all pids with tgid 5::
 	CPU        count     real total  virtual total  delay total  delay average
 	               8        7000000        6872122      3382277        0.423ms
 	IO         count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	SWAP       count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	RECLAIM    count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	THRASHING  count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	COMPACT    count    delay total  delay average
-	               0              0            0ms
-	WPCOPY     count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
+	WPCOPY     count    delay total  delay average
+	               0              0        0.000ms
 
 Get IO accounting for pid 1; this works only with -p::

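The change above is purely a precision fix: getdelays prints the average as
total delay divided by count, and taskstats reports delay totals in
nanoseconds. A minimal sketch of that arithmetic (illustrative helper name,
not the actual getdelays.c code)::

    #include <stdio.h>

    /* Average delay in milliseconds from a nanosecond total and an
     * event count; taskstats delay totals are in nanoseconds. */
    static double average_ms(unsigned long long delay_total_ns,
                             unsigned long long count)
    {
            return count ? (double)delay_total_ns / count / 1e6 : 0.0;
    }

    int main(void)
    {
            /* Printing with "%.3f" is what turns "0ms" into "0.000ms". */
            printf("IO delay average: %.3fms\n", average_ms(0, 0));
            return 0;
    }
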
25 changes: 22 additions & 3 deletions Documentation/admin-guide/sysctl/kernel.rst
@@ -453,16 +453,35 @@ this allows system administrators to override the
 kexec_load_disabled
 ===================
 
-A toggle indicating if the ``kexec_load`` syscall has been disabled.
-This value defaults to 0 (false: ``kexec_load`` enabled), but can be
-set to 1 (true: ``kexec_load`` disabled).
+A toggle indicating if the syscalls ``kexec_load`` and
+``kexec_file_load`` have been disabled.
+This value defaults to 0 (false: ``kexec_*load`` enabled), but can be
+set to 1 (true: ``kexec_*load`` disabled).
 Once true, kexec can no longer be used, and the toggle cannot be set
 back to false.
 This allows a kexec image to be loaded before disabling the syscall,
 allowing a system to set up (and later use) an image without it being
 altered.
 Generally used together with the `modules_disabled`_ sysctl.
 
+kexec_load_limit_panic
+======================
+
+This parameter specifies a limit on the number of times the syscalls
+``kexec_load`` and ``kexec_file_load`` can be called with a crash
+image. It can only be set to a more restrictive value than the
+current one.
+
+== ======================================================
+-1 Unlimited calls to kexec. This is the default setting.
+N  Number of calls left.
+== ======================================================
+
+kexec_load_limit_reboot
+=======================
+
+Similar functionality as ``kexec_load_limit_panic``, but for a normal
+image.
+
 kptr_restrict
 =============
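For context on the three sysctls documented above, a hedged sketch of how an
administrator's tool might use them: stage the kexec images first, then
tighten the limits. The helper is illustrative; the paths follow the usual
/proc/sys/kernel/<name> convention::

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a small string to a sysctl file; returns 0 on success. */
    static int sysctl_write(const char *path, const char *val)
    {
            ssize_t len = (ssize_t)strlen(val);
            int fd = open(path, O_WRONLY);

            if (fd < 0)
                    return -1;
            if (write(fd, val, len) != len) {
                    close(fd);
                    return -1;
            }
            return close(fd);
    }

    int main(void)
    {
            /* ...load crash/reboot images via kexec_load() or
             * kexec_file_load() here, before locking down... */

            /* No further crash-image loads (the limit only shrinks). */
            sysctl_write("/proc/sys/kernel/kexec_load_limit_panic", "0");
            /* Permit one more reboot-image load. */
            sysctl_write("/proc/sys/kernel/kexec_load_limit_reboot", "1");
            /* Or disable both syscalls outright; this cannot be undone. */
            sysctl_write("/proc/sys/kernel/kexec_load_disabled", "1");
            return 0;
    }
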
65 changes: 65 additions & 0 deletions Documentation/fault-injection/fault-injection.rst
@@ -231,6 +231,71 @@ proc entries
 This feature is intended for systematic testing of faults in a single
 system call. See an example below.
 
+
+Error Injectable Functions
+--------------------------
+
+This section is for kernel developers considering whether to add a
+function to the ALLOW_ERROR_INJECTION() macro.
+
+Requirements for Error Injectable Functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Function-level error injection forcibly changes the code path and
+returns an error even when the input and conditions are proper, so it
+can cause an unexpected kernel crash if you allow error injection on a
+function which is NOT error injectable. Thus, you (and reviewers) must
+ensure that:
+
+- The function returns an error code if it fails, and its callers check
+  that code correctly (and recover from it).
+
+- The function does not execute any code which can change any state
+  before the first error return. This covers global, local, and input
+  state: for example, clearing an output argument (e.g. `*ret = NULL`),
+  incrementing or decrementing a counter, setting a flag, disabling
+  preemption or IRQs, or taking a lock. (If such state is restored
+  before the error return, that is OK.)
+
+The first requirement is important; one consequence is that release
+(object-freeing) functions are usually harder to inject errors into
+than allocation functions. If an injected error from such a release
+function is not handled correctly, it easily causes a memory leak,
+because the caller cannot tell whether the object was released or
+corrupted.
+
+The second requirement protects callers which expect the function to
+always do something. If error injection skips the whole function, that
+expectation is betrayed and causes unexpected errors.
+
+Types of Error Injectable Functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Each error injectable function has an error type, specified by the
+ALLOW_ERROR_INJECTION() macro. Choose it carefully when you add a new
+error injectable function: if the wrong error type is chosen, the
+kernel may crash because it cannot handle the injected error.
+There are 4 types of errors defined in include/asm-generic/error-injection.h:
+
+EI_ETYPE_NULL
+  This function returns `NULL` if it fails, e.g. it returns the address
+  of an allocated object.
+
+EI_ETYPE_ERRNO
+  This function returns an `-errno` error code if it fails, e.g. -EINVAL
+  if the input is wrong. This includes functions which return an address
+  encoding `-errno` via the ERR_PTR() macro.
+
+EI_ETYPE_ERRNO_NULL
+  This function returns either an `-errno` code or `NULL` if it fails.
+  If the callers check the return value with the IS_ERR_OR_NULL() macro,
+  this type is appropriate.
+
+EI_ETYPE_TRUE
+  This function returns `true` (a non-zero positive value) if it fails.
+
+If you specify the wrong type, for example EI_ETYPE_ERRNO for a function
+which returns an allocated object, it may cause a problem because the
+returned value is not an object address and the caller cannot access it.
+
+
 How to add new fault injection capability
 -----------------------------------------
 
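To make the requirements and type selection above concrete, a hedged sketch
of a function that qualifies (the function, struct, and names are
illustrative, not existing kernel code): no state changes before the first
error return, callers must check the result, and the ERRNO type matches its
-EINVAL/-ENOMEM failures::

    #include <linux/errno.h>
    #include <linux/error-injection.h>
    #include <linux/slab.h>

    struct widget { int id; };

    /* No counters, flags, locks, or output arguments are touched before
     * the first error return, so an injected -errno is always recoverable. */
    static int widget_create(int id, struct widget **out)
    {
            struct widget *w;

            if (id < 0)
                    return -EINVAL;

            w = kzalloc(sizeof(*w), GFP_KERNEL);
            if (!w)
                    return -ENOMEM;

            w->id = id;
            *out = w;       /* first state change, after all error returns */
            return 0;
    }
    /* Fails with -errno, so ERRNO is the matching error type. */
    ALLOW_ERROR_INJECTION(widget_create, ERRNO);
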
10 changes: 5 additions & 5 deletions Documentation/translations/zh_CN/accounting/delay-accounting.rst
@@ -91,15 +91,15 @@ General format of the getdelays command::
 	CPU        count     real total  virtual total  delay total  delay average
 	               8        7000000        6872122      3382277        0.423ms
 	IO         count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	SWAP       count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	RECLAIM    count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	THRASHING  count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 	COMPACT    count    delay total  delay average
-	               0              0            0ms
+	               0              0        0.000ms
 
 Get the IO count for pid 1; it works only together with -p::
 	# ./getdelays -i -p 1
128 changes: 64 additions & 64 deletions arch/Kconfig
@@ -35,7 +35,7 @@ config HOTPLUG_SMT
 	bool
 
 config GENERIC_ENTRY
-       bool
+	bool
 
 config KPROBES
 	bool "Kprobes"
@@ -55,26 +55,26 @@ config JUMP_LABEL
 	depends on HAVE_ARCH_JUMP_LABEL
 	select OBJTOOL if HAVE_JUMP_LABEL_HACK
 	help
-         This option enables a transparent branch optimization that
-         makes certain almost-always-true or almost-always-false branch
-         conditions even cheaper to execute within the kernel.
+	  This option enables a transparent branch optimization that
+	  makes certain almost-always-true or almost-always-false branch
+	  conditions even cheaper to execute within the kernel.
 
-         Certain performance-sensitive kernel code, such as trace points,
-         scheduler functionality, networking code and KVM have such
-         branches and include support for this optimization technique.
+	  Certain performance-sensitive kernel code, such as trace points,
+	  scheduler functionality, networking code and KVM have such
+	  branches and include support for this optimization technique.
 
-         If it is detected that the compiler has support for "asm goto",
-         the kernel will compile such branches with just a nop
-         instruction. When the condition flag is toggled to true, the
-         nop will be converted to a jump instruction to execute the
-         conditional block of instructions.
+	  If it is detected that the compiler has support for "asm goto",
+	  the kernel will compile such branches with just a nop
+	  instruction. When the condition flag is toggled to true, the
+	  nop will be converted to a jump instruction to execute the
+	  conditional block of instructions.
 
-         This technique lowers overhead and stress on the branch prediction
-         of the processor and generally makes the kernel faster. The update
-         of the condition is slower, but those are always very rare.
+	  This technique lowers overhead and stress on the branch prediction
+	  of the processor and generally makes the kernel faster. The update
+	  of the condition is slower, but those are always very rare.
 
-         ( On 32-bit x86, the necessary options added to the compiler
-         flags may increase the size of the kernel slightly. )
+	  ( On 32-bit x86, the necessary options added to the compiler
+	  flags may increase the size of the kernel slightly. )
 
 config STATIC_KEYS_SELFTEST
 	bool "Static key selftest"
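As background for the JUMP_LABEL help text above, a brief sketch of the
static-key interface it optimizes (the key and functions are illustrative,
not existing kernel code): the branch compiles to a nop until the key is
flipped, at which point the nop is patched into a jump::

    #include <linux/jump_label.h>
    #include <linux/printk.h>

    /* Illustrative key for an almost-always-false condition. */
    static DEFINE_STATIC_KEY_FALSE(example_tracing_on);

    void example_hot_path(void)
    {
            /* Emitted as a nop; becomes a jump once the key is enabled. */
            if (static_branch_unlikely(&example_tracing_on))
                    pr_info("tracing hit\n");
    }

    void example_enable_tracing(void)
    {
            static_branch_enable(&example_tracing_on);  /* patches the nop */
    }
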
@@ -98,9 +98,9 @@ config KPROBES_ON_FTRACE
 	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
 	depends on DYNAMIC_FTRACE_WITH_REGS
 	help
-         If function tracer is enabled and the arch supports full
-         passing of pt_regs to function tracing, then kprobes can
-         optimize on top of function tracing.
+	  If function tracer is enabled and the arch supports full
+	  passing of pt_regs to function tracing, then kprobes can
+	  optimize on top of function tracing.
 
 config UPROBES
 	def_bool n
@@ -154,21 +154,21 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
 config ARCH_USE_BUILTIN_BSWAP
 	bool
 	help
-         Modern versions of GCC (since 4.4) have builtin functions
-         for handling byte-swapping. Using these, instead of the old
-         inline assembler that the architecture code provides in the
-         __arch_bswapXX() macros, allows the compiler to see what's
-         happening and offers more opportunity for optimisation. In
-         particular, the compiler will be able to combine the byteswap
-         with a nearby load or store and use load-and-swap or
-         store-and-swap instructions if the architecture has them. It
-         should almost *never* result in code which is worse than the
-         hand-coded assembler in <asm/swab.h>. But just in case it
-         does, the use of the builtins is optional.
+	  Modern versions of GCC (since 4.4) have builtin functions
+	  for handling byte-swapping. Using these, instead of the old
+	  inline assembler that the architecture code provides in the
+	  __arch_bswapXX() macros, allows the compiler to see what's
+	  happening and offers more opportunity for optimisation. In
+	  particular, the compiler will be able to combine the byteswap
+	  with a nearby load or store and use load-and-swap or
+	  store-and-swap instructions if the architecture has them. It
+	  should almost *never* result in code which is worse than the
+	  hand-coded assembler in <asm/swab.h>. But just in case it
+	  does, the use of the builtins is optional.
 
-         Any architecture with load-and-swap or store-and-swap
-         instructions should set this. And it shouldn't hurt to set it
-         on architectures that don't have such instructions.
+	  Any architecture with load-and-swap or store-and-swap
+	  instructions should set this. And it shouldn't hurt to set it
+	  on architectures that don't have such instructions.
 
 config KRETPROBES
 	def_bool y
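To illustrate what ARCH_USE_BUILTIN_BSWAP buys, a minimal user-space sketch:
because the builtin is visible to the compiler, the swap can fuse with the
adjacent load (e.g. into a single movbe on x86), which opaque inline
assembler would prevent::

    #include <stdint.h>

    uint32_t read_be32(const uint32_t *p)
    {
            return __builtin_bswap32(*p);   /* ~ what swab32() expands to */
    }
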
@@ -720,13 +720,13 @@ config LTO_CLANG_FULL
 	depends on !COMPILE_TEST
 	select LTO_CLANG
 	help
-         This option enables Clang's full Link Time Optimization (LTO), which
-         allows the compiler to optimize the kernel globally. If you enable
-         this option, the compiler generates LLVM bitcode instead of ELF
-         object files, and the actual compilation from bitcode happens at
-         the LTO link step, which may take several minutes depending on the
-         kernel configuration. More information can be found from LLVM's
-         documentation:
+	  This option enables Clang's full Link Time Optimization (LTO), which
+	  allows the compiler to optimize the kernel globally. If you enable
+	  this option, the compiler generates LLVM bitcode instead of ELF
+	  object files, and the actual compilation from bitcode happens at
+	  the LTO link step, which may take several minutes depending on the
+	  kernel configuration. More information can be found from LLVM's
+	  documentation:
 
 	    https://llvm.org/docs/LinkTimeOptimization.html

@@ -1330,9 +1330,9 @@ config ARCH_HAS_CC_PLATFORM
 	bool
 
 config HAVE_SPARSE_SYSCALL_NR
-       bool
-       help
-         An architecture should select this if its syscall numbering is sparse
+	bool
+	help
+	  An architecture should select this if its syscall numbering is sparse
 	  to save space. For example, MIPS architecture has a syscall array with
 	  entries at 4000, 5000 and 6000 locations. This option turns on syscall
 	  related optimizations for a given architecture.
@@ -1356,35 +1356,35 @@ config HAVE_PREEMPT_DYNAMIC_CALL
 	depends on HAVE_STATIC_CALL
 	select HAVE_PREEMPT_DYNAMIC
 	help
-         An architecture should select this if it can handle the preemption
-         model being selected at boot time using static calls.
+	  An architecture should select this if it can handle the preemption
+	  model being selected at boot time using static calls.
 
-         Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
-         preemption function will be patched directly.
+	  Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
+	  preemption function will be patched directly.
 
-         Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
-         call to a preemption function will go through a trampoline, and the
-         trampoline will be patched.
+	  Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
+	  call to a preemption function will go through a trampoline, and the
+	  trampoline will be patched.
 
-         It is strongly advised to support inline static call to avoid any
-         overhead.
+	  It is strongly advised to support inline static call to avoid any
+	  overhead.
 
 config HAVE_PREEMPT_DYNAMIC_KEY
 	bool
 	depends on HAVE_ARCH_JUMP_LABEL
 	select HAVE_PREEMPT_DYNAMIC
 	help
-         An architecture should select this if it can handle the preemption
-         model being selected at boot time using static keys.
+	  An architecture should select this if it can handle the preemption
+	  model being selected at boot time using static keys.
 
-         Each preemption function will be given an early return based on a
-         static key. This should have slightly lower overhead than non-inline
-         static calls, as this effectively inlines each trampoline into the
-         start of its callee. This may avoid redundant work, and may
-         integrate better with CFI schemes.
+	  Each preemption function will be given an early return based on a
+	  static key. This should have slightly lower overhead than non-inline
+	  static calls, as this effectively inlines each trampoline into the
+	  start of its callee. This may avoid redundant work, and may
+	  integrate better with CFI schemes.
 
-         This will have greater overhead than using inline static calls as
-         the call to the preemption function cannot be entirely elided.
+	  This will have greater overhead than using inline static calls as
+	  the call to the preemption function cannot be entirely elided.
 
 config ARCH_WANT_LD_ORPHAN_WARN
 	bool
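For background on HAVE_PREEMPT_DYNAMIC_CALL, a sketch of the static_call
interface it builds on (names are illustrative, not the actual preemption
plumbing): the call site, or its trampoline, is patched when the target is
updated, e.g. at boot when the preemption model is chosen::

    #include <linux/static_call.h>

    static int example_mode_none(void) { return 0; }
    static int example_mode_full(void) { return 1; }

    /* Calls go through this key; with HAVE_STATIC_CALL_INLINE the call
     * sites are patched directly, otherwise a trampoline is patched. */
    DEFINE_STATIC_CALL(example_preempt_mode, example_mode_none);

    int example_query_mode(void)
    {
            return static_call(example_preempt_mode)();
    }

    /* e.g. from early boot, after parsing a command-line option: */
    void example_select_full(void)
    {
            static_call_update(example_preempt_mode, example_mode_full);
    }
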
@@ -1407,8 +1407,8 @@ config ARCH_SUPPORTS_PAGE_TABLE_CHECK
 config ARCH_SPLIT_ARG64
 	bool
 	help
-         If a 32-bit architecture requires 64-bit arguments to be split into
-         pairs of 32-bit arguments, select this option.
+	  If a 32-bit architecture requires 64-bit arguments to be split into
+	  pairs of 32-bit arguments, select this option.
 
 config ARCH_HAS_ELFCORE_COMPAT
 	bool
2 changes: 1 addition & 1 deletion arch/alpha/kernel/process.c
@@ -74,7 +74,7 @@ struct halt_info {
 static void
 common_shutdown_1(void *generic_ptr)
 {
-	struct halt_info *how = (struct halt_info *)generic_ptr;
+	struct halt_info *how = generic_ptr;
 	struct percpu_struct *cpup;
 	unsigned long *pflags, flags;
 	int cpuid = smp_processor_id();
4 changes: 2 additions & 2 deletions arch/alpha/kernel/smp.c
@@ -628,7 +628,7 @@ flush_tlb_all(void)
 static void
 ipi_flush_tlb_mm(void *x)
 {
-	struct mm_struct *mm = (struct mm_struct *) x;
+	struct mm_struct *mm = x;
 	if (mm == current->active_mm && !asn_locked())
 		flush_tlb_current(mm);
 	else
@@ -670,7 +670,7 @@ struct flush_tlb_page_struct {
 static void
 ipi_flush_tlb_page(void *x)
 {
-	struct flush_tlb_page_struct *data = (struct flush_tlb_page_struct *)x;
+	struct flush_tlb_page_struct *data = x;
 	struct mm_struct * mm = data->mm;
 
 	if (mm == current->active_mm && !asn_locked())
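For context on the cleanup applied in these two alpha files: in C, a void *
converts implicitly to any object pointer type, so the explicit casts were
redundant. A minimal illustration with hypothetical names::

    struct widget { int id; };

    static void widget_handler(void *generic_ptr)
    {
            struct widget *w = generic_ptr; /* implicit; no cast needed */

            (void)w->id;
    }
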