Commit 48b68e1

---
r: 142686
b: refs/heads/master
c: 132ea5e
h: refs/heads/master
v: v3
Linus Torvalds committed Apr 7, 2009
1 parent f5eeae5 commit 48b68e1
Showing 918 changed files with 88,593 additions and 35,285 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: cae5a29d3c4ec7c4214966021c9ee827e66bd67b
+refs/heads/master: 132ea5e9aa9ce13f62ba45db8e43ec887d1106e9
18 changes: 9 additions & 9 deletions trunk/Documentation/DMA-mapping.txt
@@ -136,7 +136,7 @@ exactly why.
 The standard 32-bit addressing PCI device would do something like
 this:
 
-    if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
+    if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
         printk(KERN_WARNING
                "mydev: No suitable DMA available.\n");
         goto ignore_this_device;
@@ -155,9 +155,9 @@ all 64-bits when accessing streaming DMA:
 
     int using_dac;
 
-    if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
+    if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
         using_dac = 1;
-    } else if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
+    } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
         using_dac = 0;
     } else {
         printk(KERN_WARNING
@@ -170,14 +170,14 @@ the case would look like this:
 
     int using_dac, consistent_using_dac;
 
-    if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
+    if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
         using_dac = 1;
         consistent_using_dac = 1;
-        pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
-    } else if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
+        pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+    } else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
         using_dac = 0;
         consistent_using_dac = 0;
-        pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
+        pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
     } else {
         printk(KERN_WARNING
                "mydev: No suitable DMA available.\n");
@@ -192,7 +192,7 @@ check the return value from pci_set_consistent_dma_mask().
 Finally, if your device can only drive the low 24-bits of
 address during PCI bus mastering you might do something like:
 
-    if (pci_set_dma_mask(pdev, DMA_24BIT_MASK)) {
+    if (pci_set_dma_mask(pdev, DMA_BIT_MASK(24))) {
         printk(KERN_WARNING
                "mydev: 24-bit DMA addressing not available.\n");
         goto ignore_this_device;
@@ -213,7 +213,7 @@ most specific mask.
 
 Here is pseudo-code showing how this might be done:
 
-    #define PLAYBACK_ADDRESS_BITS    DMA_32BIT_MASK
+    #define PLAYBACK_ADDRESS_BITS    DMA_BIT_MASK(32)
     #define RECORD_ADDRESS_BITS      0x00ffffff
 
     struct my_sound_card *card;
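Editorial note on the substitution above: DMA_BIT_MASK(n) computes the same
constants the retired DMA_nBIT_MASK macros spelled out by hand. A minimal
userspace sketch follows; the macro body mirrors include/linux/dma-mapping.h,
while the surrounding demo program is illustrative only:

    #include <stdio.h>

    /* Mirrors include/linux/dma-mapping.h: the n == 64 special case avoids
     * the undefined behaviour of shifting a 64-bit value by 64 bits. */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    int main(void)
    {
        printf("0x%llx\n", DMA_BIT_MASK(24)); /* 0xffffff, old DMA_24BIT_MASK   */
        printf("0x%llx\n", DMA_BIT_MASK(32)); /* 0xffffffff, old DMA_32BIT_MASK */
        printf("0x%llx\n", DMA_BIT_MASK(64)); /* all ones, old DMA_64BIT_MASK   */
        return 0;
    }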
2 changes: 1 addition & 1 deletion trunk/Documentation/DocBook/kernel-api.tmpl
@@ -259,7 +259,7 @@ X!Earch/x86/kernel/mca_32.c
 !Eblock/blk-tag.c
 !Iblock/blk-tag.c
 !Eblock/blk-integrity.c
-!Iblock/blktrace.c
+!Ikernel/trace/blktrace.c
 !Iblock/genhd.c
 !Eblock/genhd.c
 </chapter>
8 changes: 4 additions & 4 deletions trunk/Documentation/DocBook/writing-an-alsa-driver.tmpl
@@ -1137,8 +1137,8 @@
     if (err < 0)
         return err;
     /* check PCI availability (28bit DMA) */
-    if (pci_set_dma_mask(pci, DMA_28BIT_MASK) < 0 ||
-        pci_set_consistent_dma_mask(pci, DMA_28BIT_MASK) < 0) {
+    if (pci_set_dma_mask(pci, DMA_BIT_MASK(28)) < 0 ||
+        pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(28)) < 0) {
         printk(KERN_ERR "error to set 28bit mask DMA\n");
         pci_disable_device(pci);
         return -ENXIO;
@@ -1252,8 +1252,8 @@
     err = pci_enable_device(pci);
     if (err < 0)
         return err;
-    if (pci_set_dma_mask(pci, DMA_28BIT_MASK) < 0 ||
-        pci_set_consistent_dma_mask(pci, DMA_28BIT_MASK) < 0) {
+    if (pci_set_dma_mask(pci, DMA_BIT_MASK(28)) < 0 ||
+        pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(28)) < 0) {
         printk(KERN_ERR "error to set 28bit mask DMA\n");
         pci_disable_device(pci);
         return -ENXIO;
6 changes: 5 additions & 1 deletion trunk/Documentation/devices.txt
@@ -3,7 +3,7 @@
 
         Maintained by Alan Cox <device@lanana.org>
 
-        Last revised: 29 November 2006
+        Last revised: 6th April 2009
 
 This list is the Linux Device List, the official registry of allocated
 device numbers and /dev directory nodes for the Linux operating
@@ -2797,6 +2797,10 @@ Your cooperation is appreciated.
         206 = /dev/ttySC1    SC26xx serial port 1
         207 = /dev/ttySC2    SC26xx serial port 2
         208 = /dev/ttySC3    SC26xx serial port 3
+        209 = /dev/ttyMAX0   MAX3100 serial port 0
+        210 = /dev/ttyMAX1   MAX3100 serial port 1
+        211 = /dev/ttyMAX2   MAX3100 serial port 2
+        212 = /dev/ttyMAX3   MAX3100 serial port 3
 
 205 char    Low-density serial ports (alternate device)
           0 = /dev/culu0     Callout device for ttyLU0
7 changes: 4 additions & 3 deletions trunk/Documentation/fb/uvesafb.txt
@@ -59,15 +59,16 @@ Accepted options:
 ypan   Enable display panning using the VESA protected mode
        interface.  The visible screen is just a window of the
        video memory, console scrolling is done by changing the
-       start of the window.  Available on x86 only.
+       start of the window.  This option is available on x86
+       only and is the default option on that architecture.
 
 ywrap  Same as ypan, but assumes your gfx board can wrap-around
        the video memory (i.e. starts reading from top if it
        reaches the end of video memory).  Faster than ypan.
        Available on x86 only.
 
 redraw Scroll by redrawing the affected part of the screen, this
-       is the safe (and slow) default.
+       is the default on non-x86.
 
 (If you're using uvesafb as a module, the above three options are
 used as a parameter of the scroll option, e.g. scroll=ypan.)
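As an editorial illustration (not part of this commit), both forms might look
like this on a running system; the mode and mtrr values are placeholder
choices, and only the scroll options themselves come from the text above:

    video=uvesafb:1024x768-32,mtrr:3,ywrap    (kernel command line, built-in)
    modprobe uvesafb scroll=ywrap             (module form)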
@@ -182,7 +183,7 @@ from the Video BIOS if you set pixclock to 0 in fb_var_screeninfo.
 
 --
 Michal Januszewski <spock@gentoo.org>
-Last updated: 2007-06-16
+Last updated: 2009-03-30
 
 Documentation of the uvesafb options is loosely based on vesafb.txt.

3 changes: 2 additions & 1 deletion trunk/Documentation/feature-removal-schedule.txt
@@ -354,7 +354,8 @@ Who: Krzysztof Piotr Oledzki <ole@ans.pl>
 
 ---------------------------
 
-What:   i2c_attach_client(), i2c_detach_client(), i2c_driver->detach_client()
+What:   i2c_attach_client(), i2c_detach_client(), i2c_driver->detach_client(),
+        i2c_adapter->client_register(), i2c_adapter->client_unregister
 When:   2.6.30
 Check:  i2c_attach_client i2c_detach_client
 Why:    Deprecated by the new (standard) device driver binding model. Use
2 changes: 2 additions & 0 deletions trunk/Documentation/filesystems/00-INDEX
@@ -68,6 +68,8 @@ ncpfs.txt
     - info on Novell Netware(tm) filesystem using NCP protocol.
 nfsroot.txt
     - short guide on setting up a diskless box with NFS root filesystem.
+nilfs2.txt
+    - info and mount options for the NILFS2 filesystem.
 ntfs.txt
     - info and mount options for the NTFS filesystem (Windows NT).
 ocfs2.txt
159 changes: 159 additions & 0 deletions trunk/Documentation/filesystems/knfsd-stats.txt
@@ -0,0 +1,159 @@

Kernel NFS Server Statistics
============================

This document describes the format and semantics of the statistics
which the kernel NFS server makes available to userspace. These
statistics are available in several text form pseudo files, each of
which is described separately below.

In most cases you don't need to know these formats, as the nfsstat(8)
program from the nfs-utils distribution provides a helpful command-line
interface for extracting and printing them.

All the files described here are formatted as a sequence of text lines,
separated by newline '\n' characters. Lines beginning with a hash
'#' character are comments intended for humans and should be ignored
by parsing routines. All other lines contain a sequence of fields
separated by whitespace.
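(Editorial sketch, not part of this commit: a minimal C reader for this
line-oriented format. It skips '#' comment lines and splits the rest into
whitespace-separated numeric fields, assuming the six-column pool_stats
layout described below.)

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        FILE *f = fopen("/proc/fs/nfsd/pool_stats", "r");
        char line[256];

        if (!f) {
            perror("pool_stats");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            uint64_t pool, arrived, enqueued, woken, avoided, timedout;

            if (line[0] == '#')    /* comment line: intended for humans */
                continue;
            if (sscanf(line,
                       "%" SCNu64 " %" SCNu64 " %" SCNu64
                       " %" SCNu64 " %" SCNu64 " %" SCNu64,
                       &pool, &arrived, &enqueued, &woken,
                       &avoided, &timedout) == 6)
                printf("pool %" PRIu64 ": %" PRIu64 " packets arrived\n",
                       pool, arrived);
        }
        fclose(f);
        return 0;
    }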

/proc/fs/nfsd/pool_stats
------------------------

This file is available in kernels from 2.6.30 onwards, if the
/proc/fs/nfsd filesystem is mounted (it almost always should be).

The first line is a comment which describes the fields present in
all the other lines. The other lines present the following data as
a sequence of unsigned decimal numeric fields. One line is shown
for each NFS thread pool.

All counters are 64 bits wide and wrap naturally. There is no way
to zero these counters, instead applications should do their own
rate conversion.
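(A hedged illustration of that rate conversion: sample a counter twice and
divide the difference by the sampling interval. Because the counters are
unsigned 64-bit values, plain unsigned subtraction still yields the correct
delta if a counter wraps between the two samples. The function name is
invented for the example.)

    #include <stdint.h>

    /* Rate of a wrapping 64-bit counter over a sampling interval. */
    static double counter_rate(uint64_t prev, uint64_t now, double seconds)
    {
        uint64_t delta = now - prev;  /* correct modulo 2^64, even across a wrap */

        return (double)delta / seconds;
    }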

pool
The id number of the NFS thread pool to which this line applies.
This number does not change.

Thread pool ids are a contiguous set of small integers starting
at zero. The maximum value depends on the thread pool mode, but
currently cannot be larger than the number of CPUs in the system.
Note that in the default case there will be a single thread pool
which contains all the nfsd threads and all the CPUs in the system,
and thus this file will have a single line with a pool id of "0".

packets-arrived
Counts how many NFS packets have arrived. More precisely, this
is the number of times that the network stack has notified the
sunrpc server layer that new data may be available on a transport
(e.g. a TCP or UDP socket or an NFS/RDMA endpoint).

Depending on the NFS workload patterns and various network stack
effects (such as Large Receive Offload) which can combine packets
on the wire, this may be either more or less than the number
of NFS calls received (which statistic is available elsewhere).
However this is a more accurate and less workload-dependent measure
of how much CPU load is being placed on the sunrpc server layer
due to NFS network traffic.

sockets-enqueued
Counts how many times an NFS transport is enqueued to wait for
an nfsd thread to service it, i.e. no nfsd thread was considered
available.

The circumstance this statistic tracks indicates that there was NFS
network-facing work to be done but it couldn't be done immediately,
thus introducing a small delay in servicing NFS calls. The ideal
rate of change for this counter is zero; significantly non-zero
values may indicate a performance limitation.

This can happen either because there are too few nfsd threads in the
thread pool for the NFS workload (the workload is thread-limited),
or because the NFS workload needs more CPU time than is available in
the thread pool (the workload is CPU-limited). In the former case,
configuring more nfsd threads will probably improve the performance
of the NFS workload. In the latter case, the sunrpc server layer is
already choosing not to wake idle nfsd threads because there are too
many nfsd threads which want to run but cannot, so configuring more
nfsd threads will make no difference whatsoever. The overloads-avoided
statistic (see below) can be used to distinguish these cases.

threads-woken
Counts how many times an idle nfsd thread is woken to try to
receive some data from an NFS transport.

This statistic tracks the circumstance where incoming
network-facing NFS work is being handled quickly, which is a good
thing. The ideal rate of change for this counter will be close
to but less than the rate of change of the packets-arrived counter.

overloads-avoided
Counts how many times the sunrpc server layer chose not to wake an
nfsd thread, despite the presence of idle nfsd threads, because
too many nfsd threads had been recently woken but could not get
enough CPU time to actually run.

This statistic counts a circumstance where the sunrpc layer
heuristically avoids overloading the CPU scheduler with too many
runnable nfsd threads. The ideal rate of change for this counter
is zero. Significant non-zero values indicate that the workload
is CPU limited. Usually this is associated with heavy CPU usage
on all the CPUs in the nfsd thread pool.

If a sustained large overloads-avoided rate is detected on a pool,
the top(1) utility should be used to check for the following
pattern of CPU usage on all the CPUs associated with the given
nfsd thread pool.

- %us ~= 0 (as you're *NOT* running applications on your NFS server)

- %wa ~= 0

- %id ~= 0

- %sy + %hi + %si ~= 100

If this pattern is seen, configuring more nfsd threads will *not*
improve the performance of the workload. If this pattern is not
seen, then something more subtle is wrong.

threads-timedout
Counts how many times an nfsd thread triggered an idle timeout,
i.e. was not woken to handle any incoming network packets for
some time.

This statistic counts a circumstance where there are more nfsd
threads configured than can be used by the NFS workload. This is
a clue that the number of nfsd threads can be reduced without
affecting performance. Unfortunately, it's only a clue and not
a strong indication, for a couple of reasons:

- Currently the rate at which the counter is incremented is quite
slow; the idle timeout is 60 minutes. Unless the NFS workload
remains constant for hours at a time, this counter is unlikely
to be providing information that is still useful.

- It is usually a wise policy to provide some slack,
i.e. configure a few more nfsds than are currently needed,
to allow for future spikes in load.


Note that incoming packets on NFS transports will be dealt with in
one of three ways. An nfsd thread can be woken (threads-woken counts
this case), or the transport can be enqueued for later attention
(sockets-enqueued counts this case), or the packet can be temporarily
deferred because the transport is currently being used by an nfsd
thread. This last case is not very interesting and is not explicitly
counted, but can be inferred from the other counters thus:

packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )
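(Continuing the editorial sketches above, this inference is a one-liner in C;
unsigned arithmetic again tolerates counter wrap.)

    #include <stdint.h>

    /* packets-deferred derived from the three exported counters. */
    static uint64_t packets_deferred(uint64_t arrived, uint64_t enqueued,
                                     uint64_t woken)
    {
        return arrived - (enqueued + woken);
    }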


More
----
Descriptions of the other statistics file should go here.


Greg Banks <gnb@sgi.com>
26 Mar 2009