Commit
---
r: 94993
b: refs/heads/master
c: 42173f6
h: refs/heads/master
i:
  94991: d8c2759
v: v3
Christoph Hellwig authored and Lachlan McIlroy committed Apr 29, 2008
1 parent 13a2367 commit ef6b9ee
Showing 1,020 changed files with 11,134 additions and 29,971 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: a94a630a4c69430bb4562ab8252104449bba9a67
refs/heads/master: 42173f6860af7e016a950a9a19a66679cfc46d98
69 changes: 2 additions & 67 deletions trunk/Documentation/DMA-API.txt
@@ -145,7 +145,7 @@ Part Ic - DMA addressing limitations
int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)
pci_dma_supported(struct device *dev, u64 mask)

Checks to see if the device can support DMA to the memory described by
mask.
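
For reference, a minimal sketch (hypothetical driver code, not part of this patch) of how a PCI driver of this era typically exercises this check indirectly, by probing masks with pci_set_dma_mask() rather than calling pci_dma_supported() itself:

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int foo_setup_dma_masks(struct pci_dev *pdev)
{
        /* Prefer 64-bit (DAC) addressing, fall back to 32-bit (SAC).
         * DMA_64BIT_MASK/DMA_32BIT_MASK are the mask constants of this era. */
        if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
                pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK);
        } else if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
                pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
        } else {
                dev_err(&pdev->dev, "no usable DMA configuration\n");
                return -EIO;
        }
        return 0;
}
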
@@ -189,7 +189,7 @@ dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
pci_map_single(struct device *dev, void *cpu_addr, size_t size,
int direction)

Maps a piece of processor virtual memory so it can be accessed by the
@@ -395,71 +395,6 @@ Notes: You must do this:

See also dma_map_single().
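
A minimal sketch (hypothetical driver code, not part of this patch) of the generic dma_map_single()/dma_unmap_single() pair described above:

#include <linux/types.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int foo_send_buffer(struct device *dev, void *buf, size_t len)
{
        dma_addr_t handle;

        handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(handle))  /* single-argument form of this era */
                return -ENOMEM;

        /* ... hand "handle" to the hardware and wait for completion ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
        return 0;
}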

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction dir,
struct dma_attrs *attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
struct dma_attrs *attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir,
struct dma_attrs *attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
struct dma_attrs*.

struct dma_attrs encapsulates a set of "dma attributes". For the
definition of struct dma_attrs see linux/dma-attrs.h.

The interpretation of dma attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If struct dma_attrs* is NULL, the semantics of each of these
functions is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-attrs.h>
/* DMA_ATTR_FOO should be defined in linux/dma-attrs.h and
* documented in Documentation/DMA-attributes.txt */
...

DEFINE_DMA_ATTRS(attrs);
dma_set_attr(DMA_ATTR_FOO, &attrs);
....
n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
....
int foo = dma_get_attr(DMA_ATTR_FOO, attrs);
....
if (foo)
/* twizzle the frobnozzle */
....
}
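
A self-contained sketch (hypothetical driver code, not part of this patch) of the *_attrs calls described above; DMA_ATTR_WRITE_BARRIER is used in place of the made-up DMA_ATTR_FOO, since it was the only attribute actually defined at the time:

#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

static int foo_map_with_barrier(struct device *dev, struct scatterlist *sg,
                                int nents)
{
        DEFINE_DMA_ATTRS(attrs);        /* all attributes initially clear */
        int count;

        dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);

        count = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
        if (count == 0)
                return -ENOMEM;

        /* ... perform the DMA ... */

        dma_unmap_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
        return 0;
}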


Part II - Advanced dma_ usage
-----------------------------
24 changes: 0 additions & 24 deletions trunk/Documentation/DMA-attributes.txt

This file was deleted.

38 changes: 19 additions & 19 deletions trunk/Documentation/DMA-mapping.txt
@@ -315,11 +315,11 @@ you should do:

dma_addr_t dma_handle;

cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle);
cpu_addr = pci_alloc_consistent(dev, size, &dma_handle);

where pdev is a struct pci_dev *. This may be called in interrupt context.
You should use dma_alloc_coherent (see DMA-API.txt) for buses
where devices don't have struct pci_dev (like ISA, EISA).
where dev is a struct pci_dev *. You should pass NULL for PCI like buses
where devices don't have struct pci_dev (like ISA, EISA). This may be
called in interrupt context.

This argument is needed because the DMA translations may be bus
specific (and often is private to the bus which the device is attached
@@ -332,7 +332,7 @@ __get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the pci_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL pdev, will by
The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is SAC (Single Address Cycle)
addressable. Even if the device indicates (via PCI dma mask) that it
may address the upper 32-bits and thus perform DAC cycles, consistent
@@ -354,9 +354,9 @@ buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

pci_free_consistent(pdev, size, cpu_addr, dma_handle);
pci_free_consistent(dev, size, cpu_addr, dma_handle);

where pdev, size are the same as in the above call and cpu_addr and
where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values pci_alloc_consistent returned to you.
This function may not be called in interrupt context.
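
A minimal sketch (hypothetical driver code, not part of this patch) of the consistent-allocation pair discussed above, allocating a descriptor ring at probe time and freeing it at remove time; struct foo_desc and the ring size are made up:

#include <linux/pci.h>
#include <linux/types.h>

struct foo_desc {
        u32 addr;
        u32 len;
};

#define FOO_RING_BYTES  (256 * sizeof(struct foo_desc))

static struct foo_desc *foo_alloc_ring(struct pci_dev *pdev,
                                       dma_addr_t *ring_dma)
{
        /* Returns the CPU-visible address (or NULL); *ring_dma is the
         * address to program into the device.  Safe in interrupt context. */
        return pci_alloc_consistent(pdev, FOO_RING_BYTES, ring_dma);
}

static void foo_free_ring(struct pci_dev *pdev, struct foo_desc *ring,
                          dma_addr_t ring_dma)
{
        /* Must not be called from interrupt context. */
        pci_free_consistent(pdev, FOO_RING_BYTES, ring, ring_dma);
}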

@@ -371,9 +371,9 @@ Create a pci_pool like this:

struct pci_pool *pool;

pool = pci_pool_create(name, pdev, size, align, alloc);
pool = pci_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); pdev and size
The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
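
For reference, a short sketch (hypothetical driver code, not part of this patch) of the pci_pool calls described in the hunk above; the object size, alignment, and names are made up:

#include <linux/pci.h>
#include <linux/gfp.h>
#include <linux/errno.h>

static struct pci_pool *foo_pool;

static int foo_create_pool(struct pci_dev *pdev)
{
        /* 64-byte command blocks, 16-byte aligned, no boundary restriction. */
        foo_pool = pci_pool_create("foo_cmds", pdev, 64, 16, 0);
        return foo_pool ? 0 : -ENOMEM;
}

static void *foo_get_cmd(dma_addr_t *dma)
{
        return pci_pool_alloc(foo_pool, GFP_ATOMIC, dma);
}

static void foo_put_cmd(void *cmd, dma_addr_t dma)
{
        pci_pool_free(foo_pool, cmd, dma);
}

static void foo_destroy_pool(void)
{
        pci_pool_destroy(foo_pool);
}
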
@@ -472,11 +472,11 @@ To map a single region, you do:
void *addr = buffer->ptr;
size_t size = buffer->len;

dma_handle = pci_map_single(pdev, addr, size, direction);
dma_handle = pci_map_single(dev, addr, size, direction);

and to unmap it:

pci_unmap_single(pdev, dma_handle, size, direction);
pci_unmap_single(dev, dma_handle, size, direction);

You should call pci_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.
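
A minimal sketch (hypothetical driver code, not part of this patch) of that pattern: the transmit path maps the buffer, the completion interrupt unmaps it:

#include <linux/pci.h>

static dma_addr_t foo_map_tx(struct pci_dev *pdev, void *buf, size_t len)
{
        return pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);
}

static void foo_tx_done(struct pci_dev *pdev, dma_addr_t handle, size_t len)
{
        /* The DMA is finished; give the buffer back to the CPU. */
        pci_unmap_single(pdev, handle, len, PCI_DMA_TODEVICE);
}
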
@@ -493,17 +493,17 @@ Specifically:
unsigned long offset = buffer->offset;
size_t size = buffer->len;

dma_handle = pci_map_page(pdev, page, offset, size, direction);
dma_handle = pci_map_page(dev, page, offset, size, direction);

...

pci_unmap_page(pdev, dma_handle, size, direction);
pci_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.
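
A corresponding sketch (hypothetical driver code, not part of this patch) for the page/offset variant, e.g. for receive data placed in a page obtained from the page allocator:

#include <linux/pci.h>
#include <linux/mm.h>

static dma_addr_t foo_map_rx_page(struct pci_dev *pdev, struct page *page,
                                  unsigned long offset, size_t len)
{
        return pci_map_page(pdev, page, offset, len, PCI_DMA_FROMDEVICE);
}

static void foo_rx_done(struct pci_dev *pdev, dma_addr_t handle, size_t len)
{
        pci_unmap_page(pdev, handle, len, PCI_DMA_FROMDEVICE);
}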

With scatterlists, you map a region gathered from several regions by:

int i, count = pci_map_sg(pdev, sglist, nents, direction);
int i, count = pci_map_sg(dev, sglist, nents, direction);
struct scatterlist *sg;

for_each_sg(sglist, sg, count, i) {
@@ -527,7 +527,7 @@ accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

pci_unmap_sg(pdev, sglist, nents, direction);
pci_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.
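
A sketch (hypothetical driver code, not part of this patch) that completes the scatterlist fragment above: map, walk the mapped entries with for_each_sg(), then unmap once the DMA is known to be finished:

#include <linux/pci.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

static int foo_map_and_show(struct pci_dev *pdev, struct scatterlist *sglist,
                            int nents)
{
        struct scatterlist *sg;
        int i, count;

        count = pci_map_sg(pdev, sglist, nents, PCI_DMA_TODEVICE);
        if (count == 0)
                return -ENOMEM;

        for_each_sg(sglist, sg, count, i)
                dev_info(&pdev->dev, "segment %d: %#llx + %u\n", i,
                         (unsigned long long)sg_dma_address(sg),
                         sg_dma_len(sg));

        /* ... later, once the hardware is done with all segments ...
         * Note: unmap with the original nents, not the returned count. */
        pci_unmap_sg(pdev, sglist, nents, PCI_DMA_TODEVICE);
        return 0;
}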

@@ -550,19 +550,19 @@ correct copy of the DMA buffer.
So, firstly, just map it with pci_map_{single,sg}, and after each DMA
transfer call either:

pci_dma_sync_single_for_cpu(pdev, dma_handle, size, direction);
pci_dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

pci_dma_sync_sg_for_cpu(pdev, sglist, nents, direction);
pci_dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

pci_dma_sync_single_for_device(pdev, dma_handle, size, direction);
pci_dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

@@ -739,7 +739,7 @@ failure can be determined by:

dma_addr_t dma_handle;

dma_handle = pci_map_single(pdev, addr, size, direction);
dma_handle = pci_map_single(dev, addr, size, direction);
if (pci_dma_mapping_error(dma_handle)) {
/*
* reduce current DMA mapping usage,
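
Two small sketches (hypothetical driver code, not part of this patch) tying the last two hunks together: checking a fresh mapping for failure, and bouncing ownership of a reused, already-mapped receive buffer between CPU and device:

#include <linux/pci.h>
#include <linux/errno.h>

static int foo_map_rx(struct pci_dev *pdev, void *buf, size_t len,
                      dma_addr_t *handle)
{
        *handle = pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);
        if (pci_dma_mapping_error(*handle))  /* single-argument form of this era */
                return -ENOMEM;              /* reduce usage and retry later */
        return 0;
}

static void foo_rx_recycle(struct pci_dev *pdev, dma_addr_t handle, size_t len)
{
        /* The device has written the buffer; claim it for the CPU ... */
        pci_dma_sync_single_for_cpu(pdev, handle, len, PCI_DMA_FROMDEVICE);

        /* ... inspect the data here, then hand the same mapping back. */
        pci_dma_sync_single_for_device(pdev, handle, len, PCI_DMA_FROMDEVICE);
}
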
3 changes: 2 additions & 1 deletion trunk/Documentation/cgroups.txt
@@ -500,7 +500,8 @@ post-attachment activity that requires memory allocations or blocking.

void fork(struct cgroup_subsys *ss, struct task_struct *task)

Called when a task is forked into a cgroup.
Called when a task is forked into a cgroup. Also called during
registration for all existing tasks.

void exit(struct cgroup_subsys *ss, struct task_struct *task)
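
A minimal sketch (a hypothetical subsystem, not part of this patch, and not buildable without the usual cgroup_subsys.h registration) of a controller wiring up the fork() and exit() callbacks described above:

#include <linux/cgroup.h>
#include <linux/sched.h>

static void foo_cgroup_fork(struct cgroup_subsys *ss, struct task_struct *task)
{
        /* Runs for each newly forked task, and also once per existing
         * task when the subsystem is registered (see the text above). */
}

static void foo_cgroup_exit(struct cgroup_subsys *ss, struct task_struct *task)
{
        /* Tear down whatever per-task state fork() set up. */
}

struct cgroup_subsys foo_subsys = {
        .name = "foo",
        .fork = foo_cgroup_fork,
        .exit = foo_cgroup_exit,
};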

48 changes: 0 additions & 48 deletions trunk/Documentation/controllers/devices.txt

This file was deleted.
