Commit c1620eb

---
yaml
---
r: 94871
b: refs/heads/master
c: 436c405
h: refs/heads/master
i:
  94869: 8ad7599
  94867: 60c574c
  94863: 7182296
v: v3
Eric Paris authored and Al Viro committed Apr 28, 2008
1 parent e6bb89f commit c1620eb
Showing 1,411 changed files with 21,563 additions and 36,447 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: 97094dcf5cefc8ccfdf93839f54dac2c4d316165
refs/heads/master: 436c405c7d19455a71f42c9bec5fd5e028f1eb4e
69 changes: 2 additions & 67 deletions trunk/Documentation/DMA-API.txt
Original file line number Diff line number Diff line change
@@ -145,7 +145,7 @@ Part Ic - DMA addressing limitations
int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)
pci_dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.
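
A minimal sketch of typical probe-time usage (an illustration added here, not
part of this diff; it assumes the DMA_32BIT_MASK constant and dma_set_mask()
from linux/dma-mapping.h):

	#include <linux/dma-mapping.h>

	/* check that 32-bit addressing is usable, then commit to it */
	if (!dma_supported(dev, DMA_32BIT_MASK))
		return -EIO;
	if (dma_set_mask(dev, DMA_32BIT_MASK))
		return -EIO;
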
@@ -189,7 +189,7 @@ dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
pci_map_single(struct device *dev, void *cpu_addr, size_t size,
int direction)

Maps a piece of processor virtual memory so it can be accessed by the
@@ -395,71 +395,6 @@ Notes: You must do this:

See also dma_map_single().

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction dir,
struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
* documented in Documentation/DMA-attributes.txt */
...

DEFINE_DMA_ATTRS(attrs);
dma_set_attr(DMA_ATTR_FOO, &attrs);
....
n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir,
			     struct dma_attrs *attrs)
{
	....
	int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
	....
	if (foo)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------
24 changes: 0 additions & 24 deletions trunk/Documentation/DMA-attributes.txt

This file was deleted.

38 changes: 19 additions & 19 deletions trunk/Documentation/DMA-mapping.txt
@@ -315,11 +315,11 @@ you should do:

dma_addr_t dma_handle;

cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle);
cpu_addr = pci_alloc_consistent(dev, size, &dma_handle);

where pdev is a struct pci_dev *. This may be called in interrupt context.
You should use dma_alloc_coherent (see DMA-API.txt) for buses
where devices don't have struct pci_dev (like ISA, EISA).
where dev is a struct pci_dev *. You should pass NULL for PCI-like buses
where devices don't have struct pci_dev (like ISA, EISA). This may be
called in interrupt context.

This argument is needed because the DMA translations may be bus
specific (and often is private to the bus which the device is attached
@@ -332,7 +332,7 @@ __get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the pci_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL pdev, will by
The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is SAC (Single Address Cycle)
addressable. Even if the device indicates (via PCI dma mask) that it
may address the upper 32-bits and thus perform DAC cycles, consistent
@@ -354,9 +354,9 @@ buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

pci_free_consistent(pdev, size, cpu_addr, dma_handle);
pci_free_consistent(dev, size, cpu_addr, dma_handle);

where pdev, size are the same as in the above call and cpu_addr and
where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values pci_alloc_consistent returned to you.
This function may not be called in interrupt context.
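
Putting the allocation and the matching free together, a hedged sketch of the
whole consistent-memory lifetime (the pdev variable and the error handling are
illustrative additions, not text from the file):

	struct pci_dev *pdev = ...;	/* the device being driven */
	dma_addr_t dma_handle;
	void *cpu_addr;

	cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle);
	if (!cpu_addr)
		return -ENOMEM;		/* the allocation can fail */

	/* hand dma_handle to the hardware, touch the buffer via cpu_addr */

	pci_free_consistent(pdev, size, cpu_addr, dma_handle);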

@@ -371,9 +371,9 @@ Create a pci_pool like this:

struct pci_pool *pool;

pool = pci_pool_create(name, pdev, size, align, alloc);
pool = pci_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); pdev and size
The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
@@ -472,11 +472,11 @@ To map a single region, you do:
void *addr = buffer->ptr;
size_t size = buffer->len;

dma_handle = pci_map_single(pdev, addr, size, direction);
dma_handle = pci_map_single(dev, addr, size, direction);

and to unmap it:

pci_unmap_single(pdev, dma_handle, size, direction);
pci_unmap_single(dev, dma_handle, size, direction);

You should call pci_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.
@@ -493,17 +493,17 @@ Specifically:
unsigned long offset = buffer->offset;
size_t size = buffer->len;

dma_handle = pci_map_page(pdev, page, offset, size, direction);
dma_handle = pci_map_page(dev, page, offset, size, direction);

...

pci_unmap_page(pdev, dma_handle, size, direction);
pci_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

With scatterlists, you map a region gathered from several regions by:

int i, count = pci_map_sg(pdev, sglist, nents, direction);
int i, count = pci_map_sg(dev, sglist, nents, direction);
struct scatterlist *sg;

for_each_sg(sglist, sg, count, i) {
@@ -527,7 +527,7 @@ accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

pci_unmap_sg(pdev, sglist, nents, direction);
pci_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.
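
The loop body elided by the hunk above conventionally reads each mapped entry
back with sg_dma_address() and sg_dma_len(); a sketch (the hw_desc array is
hypothetical, not from the file):

	int i, count;
	struct scatterlist *sg;

	count = pci_map_sg(pdev, sglist, nents, direction);
	if (count == 0)
		return -EIO;			/* mapping can fail */

	for_each_sg(sglist, sg, count, i) {
		hw_desc[i].addr = sg_dma_address(sg);
		hw_desc[i].len  = sg_dma_len(sg);
	}

	/* ... DMA runs ... */

	pci_unmap_sg(pdev, sglist, nents, direction);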

@@ -550,19 +550,19 @@ correct copy of the DMA buffer.
So, firstly, just map it with pci_map_{single,sg}, and after each DMA
transfer call either:

pci_dma_sync_single_for_cpu(pdev, dma_handle, size, direction);
pci_dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

pci_dma_sync_sg_for_cpu(pdev, sglist, nents, direction);
pci_dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.
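
For instance (an illustrative sketch, not text from the file), a driver that
lets the CPU inspect a receive buffer once the device has finished writing it
might do:

	/* in the interrupt handler that reports RX completion */
	pci_dma_sync_single_for_cpu(pdev, dma_handle, size, PCI_DMA_FROMDEVICE);

	/* the CPU may now safely read the buffer behind dma_handle; before
	 * handing it back to the device, the matching ..._for_device call
	 * described next is required */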

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

pci_dma_sync_single_for_device(pdev, dma_handle, size, direction);
pci_dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

@@ -739,7 +739,7 @@ failure can be determined by:

dma_addr_t dma_handle;

dma_handle = pci_map_single(pdev, addr, size, direction);
dma_handle = pci_map_single(dev, addr, size, direction);
if (pci_dma_mapping_error(dma_handle)) {
/*
* reduce current DMA mapping usage,
56 changes: 1 addition & 55 deletions trunk/Documentation/DocBook/kernel-api.tmpl
@@ -119,7 +119,7 @@ X!Ilib/string.c
!Elib/string.c
</sect1>
<sect1><title>Bit Operations</title>
!Iinclude/asm-x86/bitops.h
!Iinclude/asm-x86/bitops_32.h
</sect1>
</chapter>

@@ -645,58 +645,4 @@ X!Idrivers/video/console/fonts.c
!Edrivers/i2c/i2c-core.c
</chapter>

<chapter id="clk">
<title>Clock Framework</title>

<para>
The clock framework defines programming interfaces to support
software management of the system clock tree.
This framework is widely used with System-On-Chip (SOC) platforms
to support power management and various devices which may need
custom clock rates.
Note that these "clocks" don't relate to timekeeping or real
time clocks (RTCs), each of which has its own framework.
These <structname>struct clk</structname> instances may be used
to manage for example a 96 MHz signal that is used to shift bits
into and out of peripherals or busses, or otherwise trigger
synchronous state machine transitions in system hardware.
</para>

<para>
Power management is supported by explicit software clock gating:
unused clocks are disabled, so the system doesn't waste power
changing the state of transistors that aren't in active use.
On some systems this may be backed by hardware clock gating,
where clocks are gated without being disabled in software.
Sections of chips that are powered but not clocked may be able
to retain their last state.
This low power state is often called a <emphasis>retention
mode</emphasis>.
This mode still incurs leakage currents, especially with finer
circuit geometries, but for CMOS circuits power is mostly used
by clocked state changes.
</para>

<para>
Power-aware drivers only enable their clocks when the device
they manage is in active use. Also, system sleep states often
differ according to which clock domains are active: while a
"standby" state may allow wakeup from several active domains, a
"mem" (suspend-to-RAM) state may require a more wholesale shutdown
of clocks derived from higher speed PLLs and oscillators, limiting
the number of possible wakeup event sources. A driver's suspend
method may need to be aware of system-specific clock constraints
on the target sleep state.
</para>

<para>
Some platforms support programmable clock generators. These
can be used by external chips of various kinds, such as other
CPUs, multimedia codecs, and devices with strict requirements
for interface clocking.
</para>
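
    <para>
      As a rough consumer-side sketch against
      <filename>include/linux/clk.h</filename> (an illustration, not part of
      the original chapter text; the "fck" clock name is made up), a driver
      might gate its functional clock around active use like this:
    </para>

    <programlisting>
	#include &lt;linux/clk.h&gt;
	#include &lt;linux/err.h&gt;

	struct clk *fck;

	fck = clk_get(dev, "fck");	/* "fck" is a hypothetical clock name */
	if (IS_ERR(fck))
		return PTR_ERR(fck);

	clk_enable(fck);		/* about to touch the hardware */
	/* ... use the device ... */
	clk_disable(fck);		/* idle again: let the clock be gated */

	clk_put(fck);
    </programlisting>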

!Iinclude/linux/clk.h
</chapter>

</book>
3 changes: 2 additions & 1 deletion trunk/Documentation/cgroups.txt
@@ -500,7 +500,8 @@ post-attachment activity that requires memory allocations or blocking.

void fork(struct cgroup_subsys *ss, struct task_struct *task)

Called when a task is forked into a cgroup.
Called when a task is forked into a cgroup. Also called during
registration for all existing tasks.

void exit(struct cgroup_subsys *ss, struct task_struct *task)

48 changes: 0 additions & 48 deletions trunk/Documentation/controllers/devices.txt

This file was deleted.
