Commit
---
yaml
---
r: 253736
b: refs/heads/master
c: 9d5ae7c
h: refs/heads/master
v: v3
Grazvydas Ignotas authored and Tony Lindgren committed Jun 13, 2011
1 parent ca18557 commit ddeea77
Showing 1,119 changed files with 10,281 additions and 23,576 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: 879669961b11e7f40b518784863a259f735a72bf
refs/heads/master: 9d5ae7cd6cb9ead43336fec1094184d1dc740fbd
8 changes: 0 additions & 8 deletions trunk/CREDITS
@@ -518,14 +518,6 @@ N: Zach Brown
E: zab@zabbo.net
D: maestro pci sound

M: David Brownell
D: Kernel engineer, mentor, and friend. Maintained USB EHCI and
D: gadget layers, SPI subsystem, GPIO subsystem, and more than a few
D: device drivers. His encouragement also helped many engineers get
D: started working on the Linux kernel. David passed away in early
D: 2011, and will be greatly missed.
W: https://lkml.org/lkml/2011/4/5/36

N: Gary Brubaker
E: xavyer@ix.netcom.com
D: USB Serial Empeg Empeg-car Mark I/II Driver

17 changes: 12 additions & 5 deletions trunk/Documentation/RCU/trace.txt
@@ -99,11 +99,18 @@ o "qp" indicates that RCU still expects a quiescent state from

o "dt" is the current value of the dyntick counter that is incremented
when entering or leaving dynticks idle state, either by the
scheduler or by irq. This number is even if the CPU is in
dyntick idle mode and odd otherwise. The number after the first
"/" is the interrupt nesting depth when in dyntick-idle state,
or one greater than the interrupt-nesting depth otherwise.
The number after the second "/" is the NMI nesting depth.
scheduler or by irq. The number after the "/" is the interrupt
nesting depth when in dyntick-idle state, or one greater than
the interrupt-nesting depth otherwise.

This field is displayed only for CONFIG_NO_HZ kernels.

o "dn" is the current value of the dyntick counter that is incremented
when entering or leaving dynticks idle state via NMI. If both
the "dt" and "dn" values are even, then this CPU is in dynticks
idle mode and may be ignored by RCU. If either of these two
counters is odd, then RCU must be alert to the possibility of
an RCU read-side critical section running on this CPU.

This field is displayed only for CONFIG_NO_HZ kernels.
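
A hedged aside, not part of trace.txt: the even/odd rule described for the
"dt" and "dn" fields above can be summarized in a small shell sketch. The
helper name and the counter values below are made up for illustration only.

    # is_dynticks_idle DT DN - both counters even means dynticks-idle,
    # either one odd means RCU must still watch this CPU.
    is_dynticks_idle() {
        if [ $(($1 % 2)) -eq 0 ] && [ $(($2 % 2)) -eq 0 ]; then
            echo "dynticks idle: RCU may ignore this CPU"
        else
            echo "not idle: a read-side critical section may be running"
        fi
    }
    is_dynticks_idle 10902 4    # example values, both even -> idle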

4 changes: 2 additions & 2 deletions trunk/Documentation/accounting/cgroupstats.txt
@@ -21,7 +21,7 @@ information will not be available.
To extract cgroup statistics a utility very similar to getdelays.c
has been developed, the sample output of the utility is shown below

~/balbir/cgroupstats # ./getdelays -C "/sys/fs/cgroup/a"
~/balbir/cgroupstats # ./getdelays -C "/cgroup/a"
sleeping 1, blocked 0, running 1, stopped 0, uninterruptible 0
~/balbir/cgroupstats # ./getdelays -C "/sys/fs/cgroup"
~/balbir/cgroupstats # ./getdelays -C "/cgroup"
sleeping 155, blocked 0, running 1, stopped 0, uninterruptible 2
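
A hedged sketch, not in the original document: the same getdelays invocation
can be looped over every child cgroup of a mounted hierarchy. The /cgroup
mount point matches the paths used above; the loop itself is an assumption.

    # Print per-cgroup task-state counts for each child group.
    for d in /cgroup/*/; do
        echo "== $d"
        ./getdelays -C "$d"
    done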
5 changes: 0 additions & 5 deletions trunk/Documentation/acpi/method-customizing.txt
@@ -66,8 +66,3 @@ Note: We can use a kernel with multiple custom ACPI method running,
But each individual write to debugfs can implement a SINGLE
method override. i.e. if we want to insert/override multiple
ACPI methods, we need to redo step c) ~ g) for multiple times.

Note: Be aware that root can mis-use this driver to modify arbitrary
memory and gain additional rights, if root's privileges got
restricted (for example if root is not allowed to load additional
modules after boot).
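
A hedged sketch of the note above about redoing the steps once per method; it
assumes the debugfs interface this document describes is available at
/sys/kernel/debug/acpi/custom_method, and the directory of .aml files is
hypothetical.

    # Each write to custom_method overrides a single ACPI method, so loop
    # over the prepared AML files one at a time.
    for aml in /tmp/acpi-overrides/*.aml; do
        cat "$aml" > /sys/kernel/debug/acpi/custom_method
    done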
31 changes: 14 additions & 17 deletions trunk/Documentation/cgroups/blkio-controller.txt
@@ -28,19 +28,16 @@ cgroups. Here is what you can do.
- Enable group scheduling in CFQ
CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
cgroups.txt, Why are cgroups needed?.
- Compile and boot into kernel and mount IO controller (blkio).

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /cgroup

- Create two cgroups
mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2
mkdir -p /cgroup/test1/ /cgroup/test2

- Set weights of group test1 and test2
echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
echo 1000 > /cgroup/test1/blkio.weight
echo 500 > /cgroup/test2/blkio.weight

- Create two same size files (say 512MB each) on same disk (file1, file2) and
launch two dd threads in different cgroup to read those files.
@@ -49,12 +46,12 @@ cgroups. Here is what you can do.
echo 3 > /proc/sys/vm/drop_caches

dd if=/mnt/sdb/zerofile1 of=/dev/null &
echo $! > /sys/fs/cgroup/blkio/test1/tasks
cat /sys/fs/cgroup/blkio/test1/tasks
echo $! > /cgroup/test1/tasks
cat /cgroup/test1/tasks

dd if=/mnt/sdb/zerofile2 of=/dev/null &
echo $! > /sys/fs/cgroup/blkio/test2/tasks
cat /sys/fs/cgroup/blkio/test2/tasks
echo $! > /cgroup/test2/tasks
cat /cgroup/test2/tasks

- At macro level, first dd should finish first. To get more precise data, keep
on looking at (with the help of script), at blkio.disk_time and
@@ -71,13 +68,13 @@ Throttling/Upper Limit policy
- Enable throttling in block layer
CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
- Mount blkio controller
mount -t cgroup -o blkio none /cgroup/blkio

- Specify a bandwidth rate on particular device for root group. The format
for policy is "<major>:<minor> <byes_per_second>".

echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.read_bps_device
echo "8:16 1048576" > /cgroup/blkio/blkio.read_bps_device

Above will put a limit of 1MB/second on reads happening for root group
on device having major/minor number 8:16.
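
A consolidated sketch of the throttling steps above, not part of the original
text; it follows the /cgroup mount convention used in this version of the
document, and the 8:16 device and 1MB/s limit are the document's own example
values. The mkdir of the mount point is an added assumption.

    # Mount the blkio controller and cap reads from device 8:16 at 1MB/s
    # for the root group.
    mkdir -p /cgroup/blkio
    mount -t cgroup -o blkio none /cgroup/blkio
    echo "8:16 1048576" > /cgroup/blkio/blkio.read_bps_device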
@@ -111,7 +108,7 @@ Hierarchical Cgroups
CFQ and throttling will practically treat all groups at same level.

pivot
/ / \ \
/ | \ \
root test1 test2 test3

Down the line we can implement hierarchical accounting/control support
@@ -152,7 +149,7 @@ Proportional weight policy files

Following is the format.

# echo dev_maj:dev_minor weight > blkio.weight_device
#echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
Configure weight=300 on /dev/sdb (8:16) in this cgroup
# echo 8:16 300 > blkio.weight_device
# cat blkio.weight_device
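
A short sketch following the format above, not part of the original text; the
second device (8:32) and both weight values are hypothetical examples.

    # Give /dev/sdb (8:16) a higher per-device weight than a second disk
    # (8:32) within the same cgroup.
    echo 8:16 500 > /path/to/cgroup/blkio.weight_device
    echo 8:32 200 > /path/to/cgroup/blkio.weight_device
    cat /path/to/cgroup/blkio.weight_device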
60 changes: 24 additions & 36 deletions trunk/Documentation/cgroups/cgroups.txt
@@ -138,11 +138,11 @@ With the ability to classify tasks differently for different resources
the admin can easily set up a script which receives exec notifications
and depending on who is launching the browser he can

# echo browser_pid > /sys/fs/cgroup/<restype>/<userclass>/tasks
# echo browser_pid > /mnt/<restype>/<userclass>/tasks

With only a single hierarchy, he now would potentially have to create
a separate cgroup for every browser launched and associate it with
appropriate network and other resource class. This may lead to
approp network and other resource class. This may lead to
proliferation of such cgroups.

Also lets say that the administrator would like to give enhanced network
@@ -153,9 +153,9 @@ apps enhanced CPU power,
With ability to write pids directly to resource classes, it's just a
matter of :

# echo pid > /sys/fs/cgroup/network/<new_class>/tasks
# echo pid > /mnt/network/<new_class>/tasks
(after some time)
# echo pid > /sys/fs/cgroup/network/<orig_class>/tasks
# echo pid > /mnt/network/<orig_class>/tasks

Without this ability, he would have to split the cgroup into
multiple separate ones and then associate the new cgroups with the
@@ -310,24 +310,21 @@ subsystem, this is the case for the cpuset.
To start a new job that is to be contained within a cgroup, using
the "cpuset" cgroup subsystem, the steps are something like:

1) mount -t tmpfs cgroup_root /sys/fs/cgroup
2) mkdir /sys/fs/cgroup/cpuset
3) mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
4) Create the new cgroup by doing mkdir's and write's (or echo's) in
the /sys/fs/cgroup virtual file system.
5) Start a task that will be the "founding father" of the new job.
6) Attach that task to the new cgroup by writing its pid to the
/sys/fs/cgroup/cpuset/tasks file for that cgroup.
7) fork, exec or clone the job tasks from this founding father task.
1) mkdir /dev/cgroup
2) mount -t cgroup -ocpuset cpuset /dev/cgroup
3) Create the new cgroup by doing mkdir's and write's (or echo's) in
the /dev/cgroup virtual file system.
4) Start a task that will be the "founding father" of the new job.
5) Attach that task to the new cgroup by writing its pid to the
/dev/cgroup tasks file for that cgroup.
6) fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will setup a cgroup
named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
and then start a subshell 'sh' in that cgroup:

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/cpuset
mount -t cgroup cpuset -ocpuset /sys/fs/cgroup/cpuset
cd /sys/fs/cgroup/cpuset
mount -t cgroup cpuset -ocpuset /dev/cgroup
cd /dev/cgroup
mkdir Charlie
cd Charlie
/bin/echo 2-3 > cpuset.cpus
@@ -348,7 +345,7 @@ Creating, modifying, using the cgroups can be done through the cgroup
virtual filesystem.

To mount a cgroup hierarchy with all available subsystems, type:
# mount -t cgroup xxx /sys/fs/cgroup
# mount -t cgroup xxx /dev/cgroup

The "xxx" is not interpreted by the cgroup code, but will appear in
/proc/mounts so may be any useful identifying string that you like.
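
A brief sketch, not in the original text, showing that the "xxx" name chosen
at mount time is what appears in /proc/mounts:

    # Mount all available subsystems under an arbitrary name and confirm
    # how the mount is reported.
    mount -t cgroup xxx /dev/cgroup
    grep cgroup /proc/mounts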
@@ -357,32 +354,23 @@ Note: Some subsystems do not work without some user input first. For instance,
if cpusets are enabled the user will have to populate the cpus and mems files
for each new cgroup created before that group can be used.

As explained in section `1.2 Why are cgroups needed?' you should create
different hierarchies of cgroups for each single resource or group of
resources you want to control. Therefore, you should mount a tmpfs on
/sys/fs/cgroup and create directories for each cgroup resource or resource
group.

# mount -t tmpfs cgroup_root /sys/fs/cgroup
# mkdir /sys/fs/cgroup/rg1

To mount a cgroup hierarchy with just the cpuset and memory
subsystems, type:
# mount -t cgroup -o cpuset,memory hier1 /sys/fs/cgroup/rg1
# mount -t cgroup -o cpuset,memory hier1 /dev/cgroup

To change the set of subsystems bound to a mounted hierarchy, just
remount with different options:
# mount -o remount,cpuset,blkio hier1 /sys/fs/cgroup/rg1
# mount -o remount,cpuset,blkio hier1 /dev/cgroup

Now memory is removed from the hierarchy and blkio is added.

Note this will add blkio to the hierarchy but won't remove memory or
cpuset, because the new options are appended to the old ones:
# mount -o remount,blkio /sys/fs/cgroup/rg1
# mount -o remount,blkio /dev/cgroup

To Specify a hierarchy's release_agent:
# mount -t cgroup -o cpuset,release_agent="/sbin/cpuset_release_agent" \
xxx /sys/fs/cgroup/rg1
xxx /dev/cgroup

Note that specifying 'release_agent' more than once will return failure.

@@ -391,17 +379,17 @@ when the hierarchy consists of a single (root) cgroup. Supporting
the ability to arbitrarily bind/unbind subsystems from an existing
cgroup hierarchy is intended to be implemented in the future.

Then under /sys/fs/cgroup/rg1 you can find a tree that corresponds to the
tree of the cgroups in the system. For instance, /sys/fs/cgroup/rg1
Then under /dev/cgroup you can find a tree that corresponds to the
tree of the cgroups in the system. For instance, /dev/cgroup
is the cgroup that holds the whole system.

If you want to change the value of release_agent:
# echo "/sbin/new_release_agent" > /sys/fs/cgroup/rg1/release_agent
# echo "/sbin/new_release_agent" > /dev/cgroup/release_agent

It can also be changed via remount.

If you want to create a new cgroup under /sys/fs/cgroup/rg1:
# cd /sys/fs/cgroup/rg1
If you want to create a new cgroup under /dev/cgroup:
# cd /dev/cgroup
# mkdir my_cgroup

Now you want to do something with this cgroup.
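
A hedged continuation sketch, not part of the original document: one common
next step is to attach the current shell to the new cgroup via the tasks file
described earlier in this document.

    cd /dev/cgroup/my_cgroup
    /bin/echo $$ > tasks      # move this shell into my_cgroup
    cat tasks                 # list the pids now in the cgroup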
21 changes: 11 additions & 10 deletions trunk/Documentation/cgroups/cpuacct.txt
@@ -10,25 +10,26 @@ directly present in its group.

Accounting groups can be created by first mounting the cgroup filesystem.

# mount -t cgroup -ocpuacct none /sys/fs/cgroup

With the above step, the initial or the parent accounting group becomes
visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
/sys/fs/cgroup/cpuacct.usage gives the CPU time (in nanoseconds) obtained
by this group which is essentially the CPU time obtained by all the tasks
# mkdir /cgroups
# mount -t cgroup -ocpuacct none /cgroups

With the above step, the initial or the parent accounting group
becomes visible at /cgroups. At bootup, this group includes all the
tasks in the system. /cgroups/tasks lists the tasks in this cgroup.
/cgroups/cpuacct.usage gives the CPU time (in nanoseconds) obtained by
this group which is essentially the CPU time obtained by all the tasks
in the system.

New accounting groups can be created under the parent group /sys/fs/cgroup.
New accounting groups can be created under the parent group /cgroups.

# cd /sys/fs/cgroup
# cd /cgroups
# mkdir g1
# echo $$ > g1

The above steps create a new group g1 and move the current shell
process (bash) into it. CPU time consumed by this bash and its children
can be obtained from g1/cpuacct.usage and the same is accumulated in
/sys/fs/cgroup/cpuacct.usage also.
/cgroups/cpuacct.usage also.
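
A small sketch, not part of the original text, reading the accumulated CPU
time (in nanoseconds) mentioned above for the new group and for the whole
hierarchy:

    cat /cgroups/g1/cpuacct.usage
    cat /cgroups/cpuacct.usage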

cpuacct.stat file lists a few statistics which further divide the
CPU time obtained by the cgroup into user and system times. Currently