Merge branch 'mm-everything' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Stephen Rothwell committed Jan 5, 2023
2 parents b4bf05f + dd5c3ba commit 86c7957
Showing 133 changed files with 5,524 additions and 1,703 deletions.
29 changes: 29 additions & 0 deletions Documentation/ABI/testing/sysfs-kernel-mm-damon
@@ -258,6 +258,35 @@ Contact: SeongJae Park <sj@kernel.org>
Description: Writing to and reading from this file sets and gets the low
watermark of the scheme in permil.

What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/filters/nr_filters
Date: Dec 2022
Contact: SeongJae Park <sj@kernel.org>
Description: Writing a number 'N' to this file creates 'N' directories
             named '0' to 'N-1' under the filters/ directory, for setting
             the filters of the scheme.

What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/filters/<F>/type
Date: Dec 2022
Contact: SeongJae Park <sj@kernel.org>
Description: Writing to and reading from this file sets and gets the type of
             the memory of interest. 'anon' for anonymous pages, or 'memcg'
             for a specific memory cgroup, can be written and read.

What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/filters/<F>/memcg_path
Date: Dec 2022
Contact: SeongJae Park <sj@kernel.org>
Description: If 'memcg' is written to the 'type' file, writing to and
             reading from this file sets and gets the path to the memory
             cgroup of interest.

What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/filters/<F>/matching
Date: Dec 2022
Contact: SeongJae Park <sj@kernel.org>
Description: Writing 'Y' or 'N' to this file sets whether to filter out
             pages that do or do not match the 'type' and 'memcg_path',
             respectively. Filtering out means the scheme's action will
             not be applied to those pages.
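
             For example, a minimal sketch (assuming a single kdamond,
             context, and scheme, all numbered '0') that makes the scheme
             skip anonymous pages:

                 # cd /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0/filters
                 # echo 1 > nr_filters
                 # echo anon > 0/type
                 # echo Y > 0/matching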

What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/stats/nr_tried
Date: Mar 2022
Contact: SeongJae Park <sj@kernel.org>
13 changes: 11 additions & 2 deletions Documentation/admin-guide/cgroup-v1/memory.rst
@@ -86,6 +86,8 @@ Brief summary of control files.
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
This knob is deprecated and shouldn't be
used.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
@@ -717,8 +719,15 @@ NOTE2:
It is recommended to set the soft limit always below the hard limit,
otherwise the hard limit will take precedence.

8. Move charges at task migration
=================================
8. Move charges at task migration (DEPRECATED!)
===============================================

THIS IS DEPRECATED!

It's expensive and unreliable! It's better practice to launch workload
tasks directly from inside their target cgroup. Use dedicated workload
cgroups to allow fine-grained policy adjustments without having to
move physical pages between control domains.

Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
15 changes: 6 additions & 9 deletions Documentation/admin-guide/cgroup-v2.rst
@@ -1245,13 +1245,17 @@ PAGE_SIZE multiple when read back.
This is a simple interface to trigger memory reclaim in the
target cgroup.

This file accepts a string which contains the number of bytes to
reclaim.
This file accepts a single key, the number of bytes to reclaim.
No nested keys are currently supported.

Example::

echo "1G" > memory.reclaim

The interface can be later extended with nested keys to
configure the reclaim behavior. For example, specify the
type of memory to reclaim from (anon, file, ..).

Please note that the kernel can over- or under-reclaim from
the target cgroup. If fewer bytes are reclaimed than the
specified amount, -EAGAIN is returned.
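
A rough sketch (the ``workload`` cgroup path below is purely illustrative) of
detecting a partial reclaim from the shell, relying on the write failing with
-EAGAIN::

    if ! echo "1G" > /sys/fs/cgroup/workload/memory.reclaim; then
            echo "less than 1G was reclaimed"
    fi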
@@ -1263,13 +1267,6 @@ PAGE_SIZE multiple when read back.
This means that the networking layer will not adapt based on
reclaim induced by memory.reclaim.

This file also allows the user to specify the nodes to reclaim from,
via the 'nodes=' key, for example::

echo "1G nodes=0,1" > memory.reclaim

The above instructs the kernel to reclaim memory from nodes 0,1.

memory.peak
A read-only single value file which exists on non-root
cgroups.
9 changes: 9 additions & 0 deletions Documentation/admin-guide/mm/damon/reclaim.rst
@@ -205,6 +205,15 @@ The end physical address of the memory region that DAMON_RECLAIM will do work
against. That is, DAMON_RECLAIM will find cold memory regions in this region
and reclaim them. By default, the biggest System RAM region is used.

skip_anon
---------

Skip reclamation of anonymous pages.

If this parameter is set to ``Y``, DAMON_RECLAIM does not reclaim anonymous
pages.  It is ``N`` by default.
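
For example, assuming DAMON_RECLAIM is enabled and exposes its parameters under
the usual module parameters directory, anonymous pages could be excluded at
runtime with a command like::

    # echo Y > /sys/module/damon_reclaim/parameters/skip_anon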


kdamond_pid
-----------

48 changes: 46 additions & 2 deletions Documentation/admin-guide/mm/damon/usage.rst
@@ -87,6 +87,8 @@ comma (","). ::
│ │ │ │ │ │ │ quotas/ms,bytes,reset_interval_ms
│ │ │ │ │ │ │ │ weights/sz_permil,nr_accesses_permil,age_permil
│ │ │ │ │ │ │ watermarks/metric,interval_us,high,mid,low
│ │ │ │ │ │ │ filters/nr_filters
│ │ │ │ │ │ │ │ 0/type,matching,memcg_path
│ │ │ │ │ │ │ stats/nr_tried,sz_tried,nr_applied,sz_applied,qt_exceeds
│ │ │ │ │ │ │ tried_regions/
│ │ │ │ │ │ │ │ 0/start,end,nr_accesses,age
@@ -151,6 +153,8 @@ number (``N``) to the file creates the number of child directories named as
moment, only one context per kdamond is supported, so only ``0`` or ``1`` can
be written to the file.

.. _sysfs_contexts:

contexts/<N>/
-------------

@@ -268,8 +272,8 @@ schemes/<N>/
------------

In each scheme directory, five directories (``access_pattern``, ``quotas``,
``watermarks``, ``stats``, and ``tried_regions``) and one file (``action``)
exist.
``watermarks``, ``filters``, ``stats``, and ``tried_regions``) and one file
(``action``) exist.

The ``action`` file is for setting and getting what action you want to apply to
memory regions having specific access pattern of the interest. The keywords
@@ -347,6 +351,46 @@ as below.

The ``interval`` should be written in microseconds.

schemes/<N>/filters/
--------------------

Users may know more than the kernel about specific types of memory.  In that
case, users may want to manage such memory themselves and hence do not want
DAMOS to interfere with it.  Users could limit DAMOS by setting the access
pattern of the scheme and/or the monitoring regions for the purpose, but that
can be inefficient in some cases.  In such cases, users can instead set
non-access-pattern-driven filters using the files in this directory.

In the beginning, this directory has only one file, ``nr_filters``.  Writing a
number (``N``) to the file creates that number of child directories, named
``0`` to ``N-1``.  Each directory represents one filter.  The filters are
evaluated in numeric order.

Each filter directory contains three files, namely ``type``, ``matching``, and
``memcg_path``.  You can write one of two special keywords to the ``type``
file: ``anon`` for anonymous pages, or ``memcg`` for specific memory cgroup
filtering.  In the case of memory cgroup filtering, you can specify the memory
cgroup of interest by writing the path of the memory cgroup, relative to the
cgroups mount point, to the ``memcg_path`` file.  You can write ``Y`` or ``N``
to the ``matching`` file to filter out pages that do or do not match the type,
respectively.  The scheme's action will then not be applied to the pages that
are specified to be filtered out.

For example, the commands below restrict a DAMOS action so that it is applied
only to non-anonymous pages of all memory cgroups except
``/having_care_already``::

    # echo 2 > nr_filters
    # # filter out anonymous pages
    # echo anon > 0/type
    # echo Y > 0/matching
    # # further filter out all cgroups except one at '/having_care_already'
    # echo memcg > 1/type
    # echo /having_care_already > 1/memcg_path
    # echo N > 1/matching

Note that filters could be ignored depending on the running DAMON operations
set :ref:`implementation <sysfs_contexts>`.
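
If in doubt, which operations set a context uses can be checked by reading the
context's ``operations`` file, for example (a sketch assuming kdamond and
context number ``0``)::

    # cat /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/operations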

.. _sysfs_schemes_stats:

schemes/<N>/stats/
7 changes: 7 additions & 0 deletions Documentation/admin-guide/mm/ksm.rst
@@ -173,6 +173,13 @@ stable_node_chains
the number of KSM pages that hit the ``max_page_sharing`` limit
stable_node_dups
number of duplicated KSM pages
zero_pages_sharing
        how many empty pages are sharing the kernel zero page(s) instead of
        each other, as would normally happen.  Only effective when the
        ``use_zero_pages`` knob is enabled.

When ``use_zero_pages`` is enabled, the sum of ``pages_sharing`` and
``zero_pages_sharing`` represents how much memory is really saved by KSM.
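
For instance, a quick sketch of computing that sum from the KSM sysfs directory
(assuming KSM is running with ``use_zero_pages`` enabled)::

    # cd /sys/kernel/mm/ksm
    # echo $(( $(cat pages_sharing) + $(cat zero_pages_sharing) ))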

A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
17 changes: 17 additions & 0 deletions Documentation/dev-tools/kasan.rst
@@ -140,6 +140,23 @@ disabling KASAN altogether or controlling its features:
- ``kasan.vmalloc=off`` or ``=on`` disables or enables tagging of vmalloc
allocations (default: ``on``).

- ``kasan.page_alloc.sample=<sampling interval>`` makes KASAN tag only every
  Nth page_alloc allocation with an order equal to or greater than
  ``kasan.page_alloc.sample.order``, where N is the value of the ``sample``
  parameter (default: ``1``, i.e. tag every such allocation).
  This parameter is intended to mitigate the performance overhead introduced
  by KASAN.
  Note that enabling this parameter makes Hardware Tag-Based KASAN skip checks
  of allocations chosen by sampling and thus miss bad accesses to these
  allocations.  Use the default value for accurate bug detection (see the
  example command line after this list).

- ``kasan.page_alloc.sample.order=<minimum page order>`` specifies the minimum
order of allocations that are affected by sampling (default: ``3``).
Only applies when ``kasan.page_alloc.sample`` is set to a value greater
than ``1``.
This parameter is intended to allow sampling only large page_alloc
allocations, which is the biggest source of the performance overhead.
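
For example, a boot command-line sketch (the values are only illustrative) that
makes KASAN tag roughly one in ten page_alloc allocations of order 4 or
larger::

    kasan.page_alloc.sample=10 kasan.page_alloc.sample.order=4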

Error reports
~~~~~~~~~~~~~

65 changes: 65 additions & 0 deletions Documentation/fault-injection/fault-injection.rst
@@ -231,6 +231,71 @@ proc entries
This feature is intended for systematic testing of faults in a single
system call. See an example below.


Error Injectable Functions
--------------------------

This part is for kernel developers considering adding a function to the
ALLOW_ERROR_INJECTION() macro.

Requirements for the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since function-level error injection forcibly changes the code path and
returns an error even if the input and conditions are proper, this can cause
an unexpected kernel crash if you allow error injection on a function which is
NOT error injectable.  Thus, you (and reviewers) must ensure that:

- The function returns an error code if it fails, and the callers must check
  it correctly (and need to recover from it).

- The function does not execute any code which can change any state before the
  first error return.  The state includes global or local variables, or the
  input.  For example, clearing an output address (e.g. `*ret = NULL`),
  incrementing/decrementing a counter, setting a flag, disabling preemption or
  irqs, or taking a lock (if those are undone before returning the error, that
  is OK).

The first requirement is important.  It means that release (object-freeing)
functions are usually harder to inject errors into than allocation functions.
If errors from such release functions are not correctly handled, they can
easily cause a memory leak (the caller will be confused about whether the
object has been released or corrupted).

The second one is for callers which expect the function to always do
something.  If error injection skips the whole function, that expectation is
betrayed and causes an unexpected error.

Type of the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Each error injectable function has an error type specified by the
ALLOW_ERROR_INJECTION() macro.  You have to choose it carefully if you add a
new error injectable function.  If the wrong error type is chosen, the kernel
may crash because it may not be able to handle the error.
There are 4 types of errors defined in include/asm-generic/error-injection.h:

EI_ETYPE_NULL
  This function will return `NULL` if it fails, e.g. it returns the address of
  an allocated object.

EI_ETYPE_ERRNO
  This function will return an `-errno` error code if it fails, e.g. -EINVAL
  if the input is wrong.  This includes functions which return an address that
  encodes an `-errno` via the ERR_PTR() macro.

EI_ETYPE_ERRNO_NULL
  This function will return an `-errno` error code or `NULL` if it fails.  If
  the callers of this function check the return value with the
  IS_ERR_OR_NULL() macro, this type is appropriate.

EI_ETYPE_TRUE
This function will return `true` (non-zero positive value) if it fails.

If you specify a wrong type, for example EI_ETYPE_ERRNO for a function which
returns an allocated object, it may cause a problem because the returned value
is not an object address and the caller cannot access the address.
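
Once a function is listed with ALLOW_ERROR_INJECTION(), its failure can be
exercised from user space through the fail_function fault attributes.  A rough
sketch (assuming CONFIG_FAIL_FUNCTION is enabled; ``open_ctree`` is just an
example of an EI_ETYPE_ERRNO injection point)::

    # echo open_ctree > /sys/kernel/debug/fail_function/inject
    # printf %#x -12 > /sys/kernel/debug/fail_function/open_ctree/retval
    # echo 100 > /sys/kernel/debug/fail_function/probability
    # echo -1 > /sys/kernel/debug/fail_function/times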


How to add new fault injection capability
-----------------------------------------

41 changes: 31 additions & 10 deletions Documentation/mm/highmem.rst
@@ -57,7 +57,8 @@ list shows them in order of preference of use.
It can be invoked from any context (including interrupts) but the mappings
can only be used in the context which acquired them.

This function should be preferred, where feasible, over all the others.
This function should always be used. kmap_atomic() and kmap() have been
deprecated.

These mappings are thread-local and CPU-local, meaning that the mapping
can only be accessed from within this thread and the thread is bound to the
@@ -100,10 +101,21 @@ list shows them in order of preference of use.
(included in the "Functions" section) for details on how to manage nested
mappings.

* kmap_atomic(). This permits a very short duration mapping of a single
page. Since the mapping is restricted to the CPU that issued it, it
performs well, but the issuing task is therefore required to stay on that
CPU until it has finished, lest some other task displace its mappings.
* kmap_atomic(). This function has been deprecated; use kmap_local_page().

NOTE: Conversions to kmap_local_page() must take care to follow the mapping
restrictions imposed on kmap_local_page(). Furthermore, the code between
calls to kmap_atomic() and kunmap_atomic() may implicitly depend on the side
effects of atomic mappings, i.e. disabling page faults or preemption, or both.
In that case, explicit calls to pagefault_disable() or preempt_disable() or
both must be made in conjunction with the use of kmap_local_page().

[Legacy documentation]

This permits a very short duration mapping of a single page. Since the
mapping is restricted to the CPU that issued it, it performs well, but
the issuing task is therefore required to stay on that CPU until it has
finished, lest some other task displace its mappings.

kmap_atomic() may also be used by interrupt contexts, since it does not
sleep and the callers too may not sleep until after kunmap_atomic() is
@@ -115,11 +127,20 @@ list shows them in order of preference of use.

It is assumed that k[un]map_atomic() won't fail.

* kmap(). This should be used to make short duration mapping of a single
page with no restrictions on preemption or migration. It comes with an
overhead as mapping space is restricted and protected by a global lock
for synchronization. When mapping is no longer needed, the address that
the page was mapped to must be released with kunmap().
* kmap(). This function has been deprecated; use kmap_local_page().

NOTE: Conversions to kmap_local_page() must take care to follow the mapping
restrictions imposed on kmap_local_page(). In particular, it is necessary to
make sure that the kernel virtual memory pointer is only valid in the thread
that obtained it.

[Legacy documentation]

This should be used to make short duration mapping of a single page with no
restrictions on preemption or migration. It comes with an overhead as mapping
space is restricted and protected by a global lock for synchronization. When
mapping is no longer needed, the address that the page was mapped to must be
released with kunmap().

Mapping changes must be propagated across all the CPUs. kmap() also
requires global TLB invalidation when the kmap's pool wraps and it might
8 changes: 4 additions & 4 deletions Documentation/mm/multigen_lru.rst
@@ -89,15 +89,15 @@ variables are monotonically increasing.

Generation numbers are truncated into ``order_base_2(MAX_NR_GENS+1)``
bits in order to fit into the gen counter in ``folio->flags``. Each
truncated generation number is an index to ``lrugen->lists[]``. The
truncated generation number is an index to ``lrugen->folios[]``. The
sliding window technique is used to track at least ``MIN_NR_GENS`` and
at most ``MAX_NR_GENS`` generations. The gen counter stores a value
within ``[1, MAX_NR_GENS]`` while a page is on one of
``lrugen->lists[]``; otherwise it stores zero.
``lrugen->folios[]``; otherwise it stores zero.

Each generation is divided into multiple tiers. A page accessed ``N``
times through file descriptors is in tier ``order_base_2(N)``. Unlike
generations, tiers do not have dedicated ``lrugen->lists[]``. In
generations, tiers do not have dedicated ``lrugen->folios[]``. In
contrast to moving across generations, which requires the LRU lock,
moving across tiers only involves atomic operations on
``folio->flags`` and therefore has a negligible cost. A feedback loop
@@ -127,7 +127,7 @@ page mapped by this PTE to ``(max_seq%MAX_NR_GENS)+1``.
Eviction
--------
The eviction consumes old generations. Given an ``lruvec``, it
increments ``min_seq`` when ``lrugen->lists[]`` indexed by
increments ``min_seq`` when ``lrugen->folios[]`` indexed by
``min_seq%MAX_NR_GENS`` becomes empty. To select a type and a tier to
evict from, it first compares ``min_seq[]`` to select the older type.
If both types are equally old, it selects the one whose first tier has

