---
r: 59225
b: refs/heads/master
c: cdf95c7
h: refs/heads/master
i:
  59223: d022c69
v: v3
Andrew Victor authored and Russell King committed Jul 12, 2007
1 parent 3458b45 commit a441d9f
Showing 1,844 changed files with 117,158 additions and 72,682 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: bb50cbbd4beacd5ceda76c32fcb116c67fe8c66c
+refs/heads/master: cdf95c73694e464cf9877cb5aa51df77f42815bc
16 changes: 0 additions & 16 deletions trunk/Documentation/ABI/removed/raw1394_legacy_isochronous

This file was deleted.

103 changes: 103 additions & 0 deletions trunk/Documentation/DMA-mapping.txt
@@ -664,6 +664,109 @@ It is that simple.
Well, not for some odd devices. See the next section for information
about that.

DAC Addressing for Address Space Hungry Devices

There exists a class of devices which do not mesh well with the PCI
DMA mapping API. By definition, these "mappings" are a finite
resource: the total number of mappings available per bus is
platform-specific, but there will always be a reasonable amount.

What is "reasonable"? Reasonable means that networking and block I/O
devices need not worry about using too many mappings.

As an example of a problematic device, consider compute cluster cards.
They can potentially need to access gigabytes of memory at once via
DMA. Dynamic mappings are unsuitable for this kind of access pattern.

To this end we've provided a small API by which a device driver
may use DAC cycles to directly address all of physical memory.
Not all platforms support this, but most do. It is easy to determine
whether the platform will work properly at probe time.

First, understand that there may be a SEVERE performance penalty for
using these interfaces on some platforms. Therefore, you MUST only
use these interfaces if it is absolutely required. 99% of devices can
use the normal APIs without any problems.

Note that for streaming type mappings you must either use these
interfaces, or the dynamic mapping interfaces above. You may not mix
usage of both for the same device. Such an act is illegal and is
guaranteed to put a banana in your tailpipe.

However, consistent mappings may in fact be used in conjunction with
these interfaces. Remember that, as defined, consistent mappings are
always going to be SAC addressable.

The first thing your driver needs to do is query the PCI platform
layer to find out whether it can handle your device's DAC addressing
capabilities:

int pci_dac_dma_supported(struct pci_dev *hwdev, u64 mask);

You may not use the following interfaces if this routine fails.
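
For illustration, a probe routine might gate its use of these
interfaces on that check. This is only a sketch; mydev_probe and
MYDEV_DAC_MASK are made-up names, not part of the API described here:

static int mydev_probe(struct pci_dev *pdev,
                       const struct pci_device_id *id)
{
        /* MYDEV_DAC_MASK is a hypothetical 64-bit mask for this sketch. */
        if (!pci_dac_dma_supported(pdev, MYDEV_DAC_MASK))
                return -EIO;    /* or fall back to the streaming API */

        /* ... continue with normal device initialization ... */
        return 0;
}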

Next, DMA addresses obtained through this API are tracked using the
dma64_addr_t type. It is guaranteed to be big enough to hold any
DAC address the platform layer will give you from the following
routines. If you have consistent mappings as well, you still
use plain dma_addr_t to keep track of those.

All mappings obtained here will be direct. The mappings are not
translated, and this is the purpose of this dialect of the DMA API.

All routines work with page/offset pairs. This is the _ONLY_ way to
portably refer to any piece of memory. If you have a cpu pointer
(which may be validly DMA'd too) you may easily obtain the page
and offset using something like this:

struct page *page = virt_to_page(ptr);
unsigned long offset = offset_in_page(ptr);

Here are the interfaces:

dma64_addr_t pci_dac_page_to_dma(struct pci_dev *pdev,
                                 struct page *page,
                                 unsigned long offset,
                                 int direction);

The DAC address for the tuple PAGE/OFFSET is returned. The direction
argument is the same as for pci_{map,unmap}_single(). The same rules
for cpu/device access apply here as for the streaming mapping
interfaces. To reiterate:

The cpu may touch the buffer before the call to pci_dac_page_to_dma.
The device may touch the buffer after the pci_dac_page_to_dma call
is made, but the cpu may NOT.
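
As a hedged illustration (buf, pdev, len and the DMA direction are
placeholder names for this sketch, not part of the interface
description above):

struct page *page = virt_to_page(buf);
unsigned long offset = offset_in_page(buf);
dma64_addr_t dma_addr;

dma_addr = pci_dac_page_to_dma(pdev, page, offset, PCI_DMA_FROMDEVICE);
/* Program dma_addr into the hardware.  The CPU must not touch buf
 * again until the transfer completes and the sync call below runs. */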

When the DMA transfer is complete, invoke:

void pci_dac_dma_sync_single_for_cpu(struct pci_dev *pdev,
                                     dma64_addr_t dma_addr,
                                     size_t len, int direction);

This must be done before the CPU looks at the buffer again.
This interface behaves identically to pci_dma_sync_{single,sg}_for_cpu().

And likewise, if you wish to let the device get back at the buffer after
the cpu has read/written it, invoke:

void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev,
                                        dma64_addr_t dma_addr,
                                        size_t len, int direction);

before letting the device access the DMA area again.
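
A minimal sketch of that hand-off, reusing dma_addr, len and the
direction from the mapping example above (process_buffer is a made-up
helper, not a kernel function):

/* Device finished writing: hand the buffer back to the CPU. */
pci_dac_dma_sync_single_for_cpu(pdev, dma_addr, len, PCI_DMA_FROMDEVICE);

process_buffer(buf);    /* the CPU may now read/write the buffer */

/* Hand the buffer back to the device for the next transfer. */
pci_dac_dma_sync_single_for_device(pdev, dma_addr, len, PCI_DMA_FROMDEVICE);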

If you need to get back to the PAGE/OFFSET tuple from a dma64_addr_t
the following interfaces are provided:

struct page *pci_dac_dma_to_page(struct pci_dev *pdev,
                                 dma64_addr_t dma_addr);
unsigned long pci_dac_dma_to_offset(struct pci_dev *pdev,
                                    dma64_addr_t dma_addr);

This is possible with the DAC interfaces purely because they are
not translated in any way.
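
For instance, to get back at the data from the CPU side (a sketch
only; page_address() works directly only for lowmem pages):

struct page *page = pci_dac_dma_to_page(pdev, dma_addr);
unsigned long offset = pci_dac_dma_to_offset(pdev, dma_addr);
void *cpu_addr = page_address(page) + offset;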

Optimizing Unmap State Space Consumption

On many platforms, pci_unmap_{single,page}() is simply a nop.
66 changes: 0 additions & 66 deletions trunk/Documentation/DocBook/kernel-api.tmpl
@@ -643,70 +643,4 @@ X!Idrivers/video/console/fonts.c
!Edrivers/spi/spi.c
</chapter>

<chapter id="i2c">
<title>I<superscript>2</superscript>C and SMBus Subsystem</title>

<para>
I<superscript>2</superscript>C (or without fancy typography, "I2C")
is an acronym for the "Inter-IC" bus, a simple bus protocol which is
widely used where low data rate communications suffice.
Since it's also a licensed trademark, some vendors use another
name (such as "Two-Wire Interface", TWI) for the same bus.
I2C only needs two signals (SCL for clock, SDA for data), conserving
board real estate and minimizing signal quality issues.
Most I2C devices use seven bit addresses, and bus speeds of up
to 400 kHz; there's a high speed extension (3.4 MHz) that's not yet
found wide use.
I2C is a multi-master bus; open drain signaling is used to
arbitrate between masters, as well as to handshake and to
synchronize clocks from slower clients.
</para>

<para>
The Linux I2C programming interfaces support only the master
side of bus interactions, not the slave side.
The programming interface is structured around two kinds of driver,
and two kinds of device.
An I2C "Adapter Driver" abstracts the controller hardware; it binds
to a physical device (perhaps a PCI device or platform_device) and
exposes a <structname>struct i2c_adapter</structname> representing
each I2C bus segment it manages.
On each I2C bus segment will be I2C devices represented by a
<structname>struct i2c_client</structname>. Those devices will
be bound to a <structname>struct i2c_driver</structname>,
which should follow the standard Linux driver model.
(At this writing, a legacy model is more widely used.)
There are functions to perform various I2C protocol operations; at
this writing all such functions are usable only from task context.
</para>

<para>
The System Management Bus (SMBus) is a sibling protocol. Most SMBus
systems are also I2C conformant. The electrical constraints are
tighter for SMBus, and it standardizes particular protocol messages
and idioms. Controllers that support I2C can also support most
SMBus operations, but SMBus controllers don't support all the protocol
options that an I2C controller will.
There are functions to perform various SMBus protocol operations,
either using I2C primitives or by issuing SMBus commands to
i2c_adapter devices which don't support those I2C operations.
</para>

!Iinclude/linux/i2c.h
!Fdrivers/i2c/i2c-boardinfo.c i2c_register_board_info
!Edrivers/i2c/i2c-core.c
</chapter>

<chapter id="splice">
<title>splice API</title>
<para>
splice is a method for moving blocks of data around inside the
kernel, without continually transferring them between the kernel
and user space.
</para>
!Iinclude/linux/splice.h
!Ffs/splice.c
</chapter>


</book>
155 changes: 0 additions & 155 deletions trunk/Documentation/blackfin/kgdb.txt

This file was deleted.

16 changes: 13 additions & 3 deletions trunk/Documentation/block/barrier.txt
@@ -82,23 +82,33 @@ including draining and flushing.
typedef void (prepare_flush_fn)(request_queue_t *q, struct request *rq);

int blk_queue_ordered(request_queue_t *q, unsigned ordered,
prepare_flush_fn *prepare_flush_fn);
prepare_flush_fn *prepare_flush_fn,
unsigned gfp_mask);

int blk_queue_ordered_locked(request_queue_t *q, unsigned ordered,
prepare_flush_fn *prepare_flush_fn,
unsigned gfp_mask);

The only difference between the two functions is whether or not the
caller is holding q->queue_lock on entry. The latter expects the
caller to be holding the lock.

@q : the queue in question
@ordered : the ordered mode the driver/device supports
@prepare_flush_fn : this function should prepare @rq such that it
flushes cache to physical medium when executed
@gfp_mask : gfp_mask used when allocating data structures
for ordered processing

For example, SCSI disk driver's prepare_flush_fn looks like the
following.

static void sd_prepare_flush(request_queue_t *q, struct request *rq)
{
        memset(rq->cmd, 0, sizeof(rq->cmd));
        rq->cmd_type = REQ_TYPE_BLOCK_PC;
        rq->flags |= REQ_BLOCK_PC;
        rq->timeout = SD_TIMEOUT;
        rq->cmd[0] = SYNCHRONIZE_CACHE;
        rq->cmd_len = 10;
}
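
For illustration, the driver would then advertise this callback when
setting up its queue; the ordered mode constant and gfp_mask below are
assumptions chosen for the sketch, not mandated by this document:

/* Sketch: enable ordered (flush-based) writes on queue q. */
if (blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH,
                      sd_prepare_flush, GFP_KERNEL))
        printk(KERN_ERR "sd: failed to enable ordered writes\n");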

The following seven ordered modes are supported. The following table
