Merge v5.2-rc5 into drm-next
Maarten needs -rc4 backmerged so he can pull in the fbcon notifier
removal topic branch into drm-misc-next.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Daniel Vetter committed Jun 19, 2019
2 parents 2454fce + 9e0babf commit 52d2d44
Showing 12,204 changed files with 20,633 additions and 101,122 deletions.
8 changes: 8 additions & 0 deletions CREDITS
@@ -3364,6 +3364,14 @@ S: Braunschweiger Strasse 79
S: 31134 Hildesheim
S: Germany

N: Martin Schwidefsky
D: Martin was the most significant contributor to the initial s390
D: port of the Linux Kernel and later the maintainer of the s390
D: architecture backend for almost two decades.
D: He passed away in 2019, and will be greatly missed.
S: Germany
W: https://lwn.net/Articles/789028/

N: Marcel Selhorst
E: tpmdd@selhorst.net
D: TPM driver
29 changes: 0 additions & 29 deletions Documentation/ABI/testing/sysfs-bus-mdio

This file was deleted.

19 changes: 13 additions & 6 deletions Documentation/ABI/testing/sysfs-class-net-phydev
@@ -11,24 +11,31 @@ Date: February 2014
KernelVersion: 3.15
Contact: netdev@vger.kernel.org
Description:
Boolean value indicating whether the PHY device has
any fixups registered against it (phy_register_fixup)
This attribute indicates whether a given PHY device has had any
"fixup" workaround run on it, encoded as a boolean. This
information is provided to help troubleshoot PHY configurations.

What: /sys/class/mdio_bus/<bus>/<device>/phy_id
Date: November 2012
KernelVersion: 3.8
Contact: netdev@vger.kernel.org
Description:
32-bit hexadecimal value corresponding to the PHY device's OUI,
model and revision number.
This attribute contains the 32-bit PHY Identifier as reported
by the device during bus enumeration, encoded in hexadecimal.
This ID is used to match the device with the appropriate
driver.
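
A minimal sketch of reading and decoding the identifier (the bus and
device names and the sample value are illustrative; the field split
follows IEEE 802.3 clause 22, with the revision in bits [3:0] and the
model number in bits [9:4]):

    # id=$(cat /sys/class/mdio_bus/<bus>/<device>/phy_id)
    # echo $id
    0x01410cc2
    # printf 'model 0x%02x, revision 0x%x\n' $((id >> 4 & 0x3f)) $((id & 0xf))
    model 0x0c, revision 0x2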

What: /sys/class/mdio_bus/<bus>/<device>/phy_interface
Date: February 2014
KernelVersion: 3.15
Contact: netdev@vger.kernel.org
Description:
String value indicating the PHY interface, possible
values are:.
This attribute contains the PHY interface as configured by the
Ethernet driver during bus enumeration, encoded as a string.
This interface mode is used to configure the Ethernet MAC with the
appropriate mode for its data lines to the PHY hardware.
Possible values are:
<empty> (not available), mii, gmii, sgmii, tbi, rev-mii,
rmii, rgmii, rgmii-id, rgmii-rxid, rgmii-txid, rtbi, smii
xgmii, moca, qsgmii, trgmii, 1000base-x, 2500base-x, rxaui,
9 changes: 9 additions & 0 deletions Documentation/admin-guide/cgroup-v2.rst
@@ -177,6 +177,15 @@ cgroup v2 currently supports the following mount options.
ignored on non-init namespace mounts. Please refer to the
Delegation section for details.

memory_localevents

Only populate memory.events with data for the current cgroup,
and not any subtrees. This is the legacy behaviour; the default
behaviour without this option is to include subtree counts.
This option is system wide and can only be set on mount or
modified through remount from the init namespace. The mount
option is ignored on non-init namespace mounts.
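
A hedged sketch of enabling the option (assuming the v2 hierarchy is
mounted at the conventional /sys/fs/cgroup):

    # mount -t cgroup2 -o memory_localevents none /sys/fs/cgroup

or, on a running system, via remount from the init namespace:

    # mount -o remount,memory_localevents /sys/fs/cgroup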


Organizing Processes and Threads
--------------------------------
1 change: 1 addition & 0 deletions Documentation/admin-guide/mm/index.rst
@@ -31,6 +31,7 @@ the Linux memory management.
ksm
memory-hotplug
numa_memory_policy
numaperf
pagemap
soft-dirty
transhuge
2 changes: 1 addition & 1 deletion Documentation/admin-guide/mm/numaperf.rst
@@ -15,7 +15,7 @@ characteristics. Some memory may share the same node as a CPU, and others
are provided as memory only nodes. While memory only nodes do not provide
CPUs, they may still be local to one or more compute nodes relative to
other nodes. The following diagram shows one such example of two compute
nodes with local memory and a memory only node for each of compute node:
nodes with local memory and a memory only node for each compute node::

+------------------+ +------------------+
| Compute Node 0 +-----+ Compute Node 1 |
16 changes: 16 additions & 0 deletions Documentation/arm64/sve.txt
@@ -56,6 +56,18 @@ model features for SVE is included in Appendix A.
is to connect to a target process first and then attempt a
ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov).

* Whenever SVE scalable register values (Zn, Pn, FFR) are exchanged in memory
between userspace and the kernel, the register value is encoded in memory in
an endianness-invariant layout, with bits [(8 * i + 7) : (8 * i)] encoded at
byte offset i from the start of the memory representation. This affects,
for example, the signal frame (struct sve_context), the ptrace interface
(struct user_sve_header), and associated data.

Beware that on big-endian systems this results in a different byte order than
for the FPSIMD V-registers, which are stored as single host-endian 128-bit
values, with bits [(127 - 8 * i) : (120 - 8 * i)] of the register encoded at
byte offset i. (struct fpsimd_context, struct user_fpsimd_state).
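
As a worked example, assume a 16-byte (128-bit) vector length. A Zn
value is then laid out identically on little- and big-endian hosts:

    Zn bits [7:0]     -> byte offset 0
    Zn bits [15:8]    -> byte offset 1
    ...
    Zn bits [127:120] -> byte offset 15

whereas the same 128 bits exchanged as an FPSIMD V-register on a
big-endian host would place bits [127:120] at byte offset 0.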


2. Vector length terminology
-----------------------------
@@ -124,6 +136,10 @@ the SVE instruction set architecture.
size and layout. Macros SVE_SIG_* are defined [1] to facilitate access to
the members.

* Each scalable register (Zn, Pn, FFR) is stored in an endianness-invariant
layout, with bits [(8 * i + 7) : (8 * i)] stored at byte offset i from the
start of the register's representation in memory.

* If the SVE context is too big to fit in sigcontext.__reserved[], then extra
space is allocated on the stack, an extra_context record is written in
__reserved[] referencing this space. sve_context is then written in the
18 changes: 8 additions & 10 deletions Documentation/block/switching-sched.txt
@@ -13,11 +13,9 @@ you can do so by typing:

# mount none /sys -t sysfs

As of the Linux 2.6.10 kernel, it is now possible to change the
IO scheduler for a given block device on the fly (thus making it possible,
for instance, to set the CFQ scheduler for the system default, but
set a specific device to use the deadline or noop schedulers - which
can improve that device's throughput).
It is possible to change the IO scheduler for a given block device on
the fly, selecting one of the mq-deadline, none, bfq, or kyber
schedulers - which can improve that device's throughput.

To set a specific scheduler, simply do this:

@@ -30,8 +28,8 @@ The list of defined schedulers can be found by simply doing
a "cat /sys/block/DEV/queue/scheduler" - the list of valid names
will be displayed, with the currently selected scheduler in brackets:

# cat /sys/block/hda/queue/scheduler
noop deadline [cfq]
# echo deadline > /sys/block/hda/queue/scheduler
# cat /sys/block/hda/queue/scheduler
noop [deadline] cfq
# cat /sys/block/sda/queue/scheduler
[mq-deadline] kyber bfq none
# echo none >/sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
[none] mq-deadline kyber bfq
96 changes: 7 additions & 89 deletions Documentation/cgroup-v1/blkio-controller.txt
@@ -8,61 +8,13 @@ both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
Plan is to use the same cgroup based management interface for blkio controller
and based on user options switch IO policies in the background.

Currently two IO control policies are implemented. First one is proportional
weight time based division of disk policy. It is implemented in CFQ. Hence
this policy takes effect only on leaf nodes when CFQ is being used. The second
one is throttling policy which can be used to specify upper IO rate limits
on devices. This policy is implemented in generic block layer and can be
used on leaf nodes as well as higher level logical devices like device mapper.
One IO control policy is the throttling policy, which can be used to
specify upper IO rate limits on devices. This policy is implemented in
the generic block layer and can be used on leaf nodes as well as
higher level logical devices like device mapper.
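
As a quick sketch of that interface (the 8:16 major:minor pair and the
1 MB/s figure are illustrative), an upper read rate limit for a device
can be set from the shell with:

    # echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device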

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple testing of running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
cgroups.txt, Why are cgroups needed?.

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two same size files (say 512MB each) on same disk (file1, file2) and
launch two dd threads in different cgroup to read those files.

sync
echo 3 > /proc/sys/vm/drop_caches

dd if=/mnt/sdb/zerofile1 of=/dev/null &
echo $! > /sys/fs/cgroup/blkio/test1/tasks
cat /sys/fs/cgroup/blkio/test1/tasks

dd if=/mnt/sdb/zerofile2 of=/dev/null &
echo $! > /sys/fs/cgroup/blkio/test2/tasks
cat /sys/fs/cgroup/blkio/test2/tasks

- At macro level, first dd should finish first. To get more precise data, keep
on looking at (with the help of script), at blkio.disk_time and
blkio.disk_sectors files of both test1 and test2 groups. This will tell how
much disk time (in milliseconds), each group got and how many sectors each
group dispatched to the disk. We provide fairness in terms of disk time, so
ideally io.disk_time of cgroups should be in proportion to the weight.

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
@@ -94,7 +46,7 @@ Throttling/Upper Limit policy
Hierarchical Cgroups
====================

Both CFQ and throttling implement hierarchy support; however,
Throttling implements hierarchy support; however,
throttling's hierarchy support is enabled iff "sane_behavior" is
enabled from cgroup side, which currently is a development option and
not publicly available.
@@ -107,9 +59,8 @@ If somebody created a hierarchy as follows.
|
test3

CFQ by default and throttling with "sane_behavior" will handle the
hierarchy correctly. For details on CFQ hierarchy support, refer to
Documentation/block/cfq-iosched.txt. For throttling, all limits apply
Throttling with "sane_behavior" will handle the
hierarchy correctly. For throttling, all limits apply
to the whole subtree while all statistics are local to the IOs
directly generated by tasks in that cgroup.

@@ -130,10 +81,6 @@ CONFIG_DEBUG_BLK_CGROUP
- Debug help. Right now some additional stats file show up in cgroup
if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
- Enables group scheduling in CFQ. Currently only 1 level of group
creation is allowed.

CONFIG_BLK_DEV_THROTTLING
- Enable block device throttling support in block layer.

@@ -344,32 +291,3 @@ Common files among various policies
- blkio.reset_stats
- Writing an int to this file will result in resetting all the stats
for that cgroup.
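
For example (the cgroup path is illustrative):

    # echo 1 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats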

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On a faster hardware CFQ can be slow, especially with sequential workload.
This happens because CFQ idles on a single queue and single queue might not
drive deeper request queue depths to keep the storage busy. In such scenarios
one can try setting slice_idle=0 and that would switch CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to driver higher queue depth and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if you have created multiple
groups and put applications in that group which are not driving enough
IO to keep disk busy. In that case set group_idle=0, and CFQ will not idle
on individual groups and throughput should improve.
22 changes: 13 additions & 9 deletions Documentation/cgroup-v1/hugetlb.txt
@@ -32,14 +32,18 @@ Brief summary of control files
hugetlb.<hugepagesize>.usage_in_bytes # show current usage for "hugepagesize" hugetlb
hugetlb.<hugepagesize>.failcnt # show the number of allocation failure due to HugeTLB limit

For a system supporting two hugepage size (16M and 16G) the control
For a system supporting three hugepage sizes (64k, 32M and 1G), the control
files include:

hugetlb.16GB.limit_in_bytes
hugetlb.16GB.max_usage_in_bytes
hugetlb.16GB.usage_in_bytes
hugetlb.16GB.failcnt
hugetlb.16MB.limit_in_bytes
hugetlb.16MB.max_usage_in_bytes
hugetlb.16MB.usage_in_bytes
hugetlb.16MB.failcnt
hugetlb.1GB.limit_in_bytes
hugetlb.1GB.max_usage_in_bytes
hugetlb.1GB.usage_in_bytes
hugetlb.1GB.failcnt
hugetlb.64KB.limit_in_bytes
hugetlb.64KB.max_usage_in_bytes
hugetlb.64KB.usage_in_bytes
hugetlb.64KB.failcnt
hugetlb.32MB.limit_in_bytes
hugetlb.32MB.max_usage_in_bytes
hugetlb.32MB.usage_in_bytes
hugetlb.32MB.failcnt
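
A hedged usage sketch (the "app" cgroup name and the 1GB page size are
assumptions; substitute a size your system actually supports):

    # mkdir /sys/fs/cgroup/hugetlb/app
    # echo 2G > /sys/fs/cgroup/hugetlb/app/hugetlb.1GB.limit_in_bytes
    # cat /sys/fs/cgroup/hugetlb/app/hugetlb.1GB.usage_in_bytes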
2 changes: 1 addition & 1 deletion Documentation/conf.py
@@ -37,7 +37,7 @@
extensions = ['kerneldoc', 'rstFlatTable', 'kernel_include', 'cdomain', 'kfigure', 'sphinx.ext.ifconfig']

# The name of the math extension changed on Sphinx 1.4
if major == 1 and minor > 3:
if (major == 1 and minor > 3) or (major > 1):
extensions.append("sphinx.ext.imgmath")
else:
extensions.append("sphinx.ext.pngmath")
16 changes: 14 additions & 2 deletions Documentation/filesystems/overlayfs.txt
@@ -336,8 +336,20 @@ the copied layers will fail the verification of the lower root file handle.
Non-standard behavior
---------------------

Overlayfs can now act as a POSIX compliant filesystem with the following
features turned on:
The current version of overlayfs can act as a mostly POSIX compliant
filesystem.

This is the list of cases that overlayfs doesn't currently handle:

a) POSIX mandates updating st_atime for reads. This is currently not
done in the case when the file resides on a lower layer (see the
sketch after this list).

b) If a file residing on a lower layer is opened for read-only and then
memory mapped with MAP_SHARED, then subsequent changes to the file are not
reflected in the memory mapping.
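
A quick way to observe case (a) above, assuming an overlay assembled
from hypothetical /lower, /upper and /work directories (a sketch, not
an authoritative test):

    # mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
    # stat -c %X /merged/file      # note the access time
    # cat /merged/file > /dev/null
    # stat -c %X /merged/file      # unchanged while 'file' still resides on the lower layer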

The following options allow overlayfs to act more like a standards
compliant filesystem:

1) "redirect_dir"

2 changes: 1 addition & 1 deletion Documentation/firmware-guide/acpi/enumeration.rst
@@ -423,7 +423,7 @@ will be enumerated to depends on the device ID returned by _HID.

For example, the following ACPI sample might be used to enumerate an lm75-type
I2C temperature sensor and match it to the driver using the Device Tree
namespace link:
namespace link::

Device (TMP0)
{