diff --git a/[refs] b/[refs] index 0d023d345c9f..472505468904 100644 --- a/[refs] +++ b/[refs] @@ -1,2 +1,2 @@ --- -refs/heads/master: 553ee5dc1a7a1fb04a6286b0c779481f7035bbd1 +refs/heads/master: 86dca4f8e6ab1fd8a3fb5838163fc9d7990f416e diff --git a/trunk/Documentation/feature-removal-schedule.txt b/trunk/Documentation/feature-removal-schedule.txt index 495858b236b6..59d0c74c79c9 100644 --- a/trunk/Documentation/feature-removal-schedule.txt +++ b/trunk/Documentation/feature-removal-schedule.txt @@ -127,13 +127,6 @@ Who: Christoph Hellwig --------------------------- -What: EXPORT_SYMBOL(lookup_hash) -When: January 2006 -Why: Too low-level interface. Use lookup_one_len or lookup_create instead. -Who: Christoph Hellwig - ---------------------------- - What: CONFIG_FORCED_INLINING When: June 2006 Why: Config option is there to see if gcc is good enough. (in january @@ -241,3 +234,15 @@ Why: The USB subsystem has changed a lot over time, and it has been Who: Greg Kroah-Hartman --------------------------- + +What: find_trylock_page +When: January 2007 +Why: The interface no longer has any callers left in the kernel. It + is an odd interface (compared with other find_*_page functions), in + that it does not take a refcount to the page, only the page lock. + It should be replaced with find_get_page or find_lock_page if possible. + This feature removal can be reevaluated if users of the interface + cannot cleanly use something else. +Who: Nick Piggin + +--------------------------- diff --git a/trunk/Documentation/input/joystick-parport.txt b/trunk/Documentation/input/joystick-parport.txt index 88a011c9f985..d537c48cc6d0 100644 --- a/trunk/Documentation/input/joystick-parport.txt +++ b/trunk/Documentation/input/joystick-parport.txt @@ -36,12 +36,12 @@ with them. All NES and SNES use the same synchronous serial protocol, clocked from the computer's side (and thus timing insensitive). To allow up to 5 NES -and/or SNES gamepads connected to the parallel port at once, the output -lines of the parallel port are shared, while one of 5 available input lines -is assigned to each gamepad. +and/or SNES gamepads and/or SNES mice connected to the parallel port at once, +the output lines of the parallel port are shared, while one of 5 available +input lines is assigned to each gamepad. This protocol is handled by the gamecon.c driver, so that's the one -you'll use for NES and SNES gamepads. +you'll use for NES, SNES gamepads and SNES mice. The main problem with PC parallel ports is that they don't have +5V power source on any of their pins. So, if you want a reliable source of power @@ -106,7 +106,7 @@ A, Turbo B, Select and Start, and is connected through 5 wires, then it is either a NES or NES clone and will work with this connection. SNES gamepads also use 5 wires, but have more buttons. They will work as well, of course. -Pinout for NES gamepads Pinout for SNES gamepads +Pinout for NES gamepads Pinout for SNES gamepads and mice +----> Power +-----------------------\ | 7 | o o o o | x x o | 1 @@ -454,6 +454,7 @@ uses the following kernel/module command line: 6 | N64 pad 7 | Sony PSX controller 8 | Sony PSX DDR controller + 9 | SNES mouse The exact type of the PSX controller type is autoprobed when used so hot swapping should work (but is not recomended). 
diff --git a/trunk/Documentation/leds-class.txt b/trunk/Documentation/leds-class.txt new file mode 100644 index 000000000000..8c35c0426110 --- /dev/null +++ b/trunk/Documentation/leds-class.txt @@ -0,0 +1,71 @@ +LED handling under Linux +======================== + +If you're reading this and thinking about keyboard leds, these are +handled by the input subsystem and the led class is *not* needed. + +In its simplest form, the LED class just allows control of LEDs from +userspace. LEDs appear in /sys/class/leds/. The brightness file will +set the brightness of the LED (taking a value 0-255). Most LEDs don't +have hardware brightness support so will just be turned on for non-zero +brightness settings. + +The class also introduces the optional concept of an LED trigger. A trigger +is a kernel based source of led events. Triggers can either be simple or +complex. A simple trigger isn't configurable and is designed to slot into +existing subsystems with minimal additional code. Examples are the ide-disk, +nand-disk and sharpsl-charge triggers. With led triggers disabled, the code +optimises away. + +Complex triggers whilst available to all LEDs have LED specific +parameters and work on a per LED basis. The timer trigger is an example. + +You can change triggers in a similar manner to the way an IO scheduler +is chosen (via /sys/class/leds//trigger). Trigger specific +parameters can appear in /sys/class/leds/ once a given trigger is +selected. + + +Design Philosophy +================= + +The underlying design philosophy is simplicity. LEDs are simple devices +and the aim is to keep a small amount of code giving as much functionality +as possible. Please keep this in mind when suggesting enhancements. + + +LED Device Naming +================= + +Is currently of the form: + +"devicename:colour" + +There have been calls for LED properties such as colour to be exported as +individual led class attributes. As a solution which doesn't incur as much +overhead, I suggest these become part of the device name. The naming scheme +above leaves scope for further attributes should they be needed. + + +Known Issues +============ + +The LED Trigger core cannot be a module as the simple trigger functions +would cause nightmare dependency issues. I see this as a minor issue +compared to the benefits the simple trigger functionality brings. The +rest of the LED subsystem can be modular. + +Some leds can be programmed to flash in hardware. As this isn't a generic +LED device property, this should be exported as a device specific sysfs +attribute rather than part of the class if this functionality is required. + + +Future Development +================== + +At the moment, a trigger can't be created specifically for a single LED. +There are a number of cases where a trigger might only be mappable to a +particular LED (ACPI?). The addition of triggers provided by the LED driver +should cover this option and be possible to add without breaking the +current interface. + diff --git a/trunk/Documentation/memory-barriers.txt b/trunk/Documentation/memory-barriers.txt new file mode 100644 index 000000000000..f8550310a6d5 --- /dev/null +++ b/trunk/Documentation/memory-barriers.txt @@ -0,0 +1,1913 @@ + ============================ + LINUX KERNEL MEMORY BARRIERS + ============================ + +By: David Howells + +Contents: + + (*) Abstract memory access model. + + - Device operations. + - Guarantees. + + (*) What are memory barriers? + + - Varieties of memory barrier. + - What may not be assumed about memory barriers? 
+ - Data dependency barriers. + - Control dependencies. + - SMP barrier pairing. + - Examples of memory barrier sequences. + + (*) Explicit kernel barriers. + + - Compiler barrier. + - The CPU memory barriers. + - MMIO write barrier. + + (*) Implicit kernel memory barriers. + + - Locking functions. + - Interrupt disabling functions. + - Miscellaneous functions. + + (*) Inter-CPU locking barrier effects. + + - Locks vs memory accesses. + - Locks vs I/O accesses. + + (*) Where are memory barriers needed? + + - Interprocessor interaction. + - Atomic operations. + - Accessing devices. + - Interrupts. + + (*) Kernel I/O barrier effects. + + (*) Assumed minimum execution ordering model. + + (*) The effects of the cpu cache. + + - Cache coherency. + - Cache coherency vs DMA. + - Cache coherency vs MMIO. + + (*) The things CPUs get up to. + + - And then there's the Alpha. + + (*) References. + + +============================ +ABSTRACT MEMORY ACCESS MODEL +============================ + +Consider the following abstract model of the system: + + : : + : : + : : + +-------+ : +--------+ : +-------+ + | | : | | : | | + | | : | | : | | + | CPU 1 |<----->| Memory |<----->| CPU 2 | + | | : | | : | | + | | : | | : | | + +-------+ : +--------+ : +-------+ + ^ : ^ : ^ + | : | : | + | : | : | + | : v : | + | : +--------+ : | + | : | | : | + | : | | : | + +---------->| Device |<----------+ + : | | : + : | | : + : +--------+ : + : : + +Each CPU executes a program that generates memory access operations. In the +abstract CPU, memory operation ordering is very relaxed, and a CPU may actually +perform the memory operations in any order it likes, provided program causality +appears to be maintained. Similarly, the compiler may also arrange the +instructions it emits in any order it likes, provided it doesn't affect the +apparent operation of the program. + +So in the above diagram, the effects of the memory operations performed by a +CPU are perceived by the rest of the system as the operations cross the +interface between the CPU and rest of the system (the dotted lines). + + +For example, consider the following sequence of events: + + CPU 1 CPU 2 + =============== =============== + { A == 1; B == 2 } + A = 3; x = A; + B = 4; y = B; + +The set of accesses as seen by the memory system in the middle can be arranged +in 24 different combinations: + + STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4 + STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3 + STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4 + STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4 + STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3 + STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4 + STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4 + STORE B=4, ... + ... + +and can thus result in four different combinations of values: + + x == 1, y == 2 + x == 1, y == 4 + x == 3, y == 2 + x == 3, y == 4 + + +Furthermore, the stores committed by a CPU to the memory system may not be +perceived by the loads made by another CPU in the same order as the stores were +committed. + + +As a further example, consider this sequence of events: + + CPU 1 CPU 2 + =============== =============== + { A == 1, B == 2, C = 3, P == &A, Q == &C } + B = 4; Q = P; + P = &B D = *Q; + +There is an obvious data dependency here, as the value loaded into D depends on +the address retrieved from P by CPU 2. 
At the end of the sequence, any of the +following results are possible: + + (Q == &A) and (D == 1) + (Q == &B) and (D == 2) + (Q == &B) and (D == 4) + +Note that CPU 2 will never try and load C into D because the CPU will load P +into Q before issuing the load of *Q. + + +DEVICE OPERATIONS +----------------- + +Some devices present their control interfaces as collections of memory +locations, but the order in which the control registers are accessed is very +important. For instance, imagine an ethernet card with a set of internal +registers that are accessed through an address port register (A) and a data +port register (D). To read internal register 5, the following code might then +be used: + + *A = 5; + x = *D; + +but this might show up as either of the following two sequences: + + STORE *A = 5, x = LOAD *D + x = LOAD *D, STORE *A = 5 + +the second of which will almost certainly result in a malfunction, since it set +the address _after_ attempting to read the register. + + +GUARANTEES +---------- + +There are some minimal guarantees that may be expected of a CPU: + + (*) On any given CPU, dependent memory accesses will be issued in order, with + respect to itself. This means that for: + + Q = P; D = *Q; + + the CPU will issue the following memory operations: + + Q = LOAD P, D = LOAD *Q + + and always in that order. + + (*) Overlapping loads and stores within a particular CPU will appear to be + ordered within that CPU. This means that for: + + a = *X; *X = b; + + the CPU will only issue the following sequence of memory operations: + + a = LOAD *X, STORE *X = b + + And for: + + *X = c; d = *X; + + the CPU will only issue: + + STORE *X = c, d = LOAD *X + + (Loads and stores overlap if they are targetted at overlapping pieces of + memory). + +And there are a number of things that _must_ or _must_not_ be assumed: + + (*) It _must_not_ be assumed that independent loads and stores will be issued + in the order given. This means that for: + + X = *A; Y = *B; *D = Z; + + we may get any of the following sequences: + + X = LOAD *A, Y = LOAD *B, STORE *D = Z + X = LOAD *A, STORE *D = Z, Y = LOAD *B + Y = LOAD *B, X = LOAD *A, STORE *D = Z + Y = LOAD *B, STORE *D = Z, X = LOAD *A + STORE *D = Z, X = LOAD *A, Y = LOAD *B + STORE *D = Z, Y = LOAD *B, X = LOAD *A + + (*) It _must_ be assumed that overlapping memory accesses may be merged or + discarded. This means that for: + + X = *A; Y = *(A + 4); + + we may get any one of the following sequences: + + X = LOAD *A; Y = LOAD *(A + 4); + Y = LOAD *(A + 4); X = LOAD *A; + {X, Y} = LOAD {*A, *(A + 4) }; + + And for: + + *A = X; Y = *A; + + we may get either of: + + STORE *A = X; Y = LOAD *A; + STORE *A = Y; + + +========================= +WHAT ARE MEMORY BARRIERS? +========================= + +As can be seen above, independent memory operations are effectively performed +in random order, but this can be a problem for CPU-CPU interaction and for I/O. +What is required is some way of intervening to instruct the compiler and the +CPU to restrict the order. + +Memory barriers are such interventions. They impose a perceived partial +ordering between the memory operations specified on either side of the barrier. +They request that the sequence of memory events generated appears to other +parts of the system as if the barrier is effective on that CPU. + + +VARIETIES OF MEMORY BARRIER +--------------------------- + +Memory barriers come in four basic varieties: + + (1) Write (or store) memory barriers. 
+ + A write memory barrier gives a guarantee that all the STORE operations + specified before the barrier will appear to happen before all the STORE + operations specified after the barrier with respect to the other + components of the system. + + A write barrier is a partial ordering on stores only; it is not required + to have any effect on loads. + + A CPU can be viewed as as commiting a sequence of store operations to the + memory system as time progresses. All stores before a write barrier will + occur in the sequence _before_ all the stores after the write barrier. + + [!] Note that write barriers should normally be paired with read or data + dependency barriers; see the "SMP barrier pairing" subsection. + + + (2) Data dependency barriers. + + A data dependency barrier is a weaker form of read barrier. In the case + where two loads are performed such that the second depends on the result + of the first (eg: the first load retrieves the address to which the second + load will be directed), a data dependency barrier would be required to + make sure that the target of the second load is updated before the address + obtained by the first load is accessed. + + A data dependency barrier is a partial ordering on interdependent loads + only; it is not required to have any effect on stores, independent loads + or overlapping loads. + + As mentioned in (1), the other CPUs in the system can be viewed as + committing sequences of stores to the memory system that the CPU being + considered can then perceive. A data dependency barrier issued by the CPU + under consideration guarantees that for any load preceding it, if that + load touches one of a sequence of stores from another CPU, then by the + time the barrier completes, the effects of all the stores prior to that + touched by the load will be perceptible to any loads issued after the data + dependency barrier. + + See the "Examples of memory barrier sequences" subsection for diagrams + showing the ordering constraints. + + [!] Note that the first load really has to have a _data_ dependency and + not a control dependency. If the address for the second load is dependent + on the first load, but the dependency is through a conditional rather than + actually loading the address itself, then it's a _control_ dependency and + a full read barrier or better is required. See the "Control dependencies" + subsection for more information. + + [!] Note that data dependency barriers should normally be paired with + write barriers; see the "SMP barrier pairing" subsection. + + + (3) Read (or load) memory barriers. + + A read barrier is a data dependency barrier plus a guarantee that all the + LOAD operations specified before the barrier will appear to happen before + all the LOAD operations specified after the barrier with respect to the + other components of the system. + + A read barrier is a partial ordering on loads only; it is not required to + have any effect on stores. + + Read memory barriers imply data dependency barriers, and so can substitute + for them. + + [!] Note that read barriers should normally be paired with write barriers; + see the "SMP barrier pairing" subsection. + + + (4) General memory barriers. + + A general memory barrier is a combination of both a read memory barrier + and a write memory barrier. It is a partial ordering over both loads and + stores. + + General memory barriers imply both read and write memory barriers, and so + can substitute for either. + + +And a couple of implicit varieties: + + (5) LOCK operations. 
+ + This acts as a one-way permeable barrier. It guarantees that all memory + operations after the LOCK operation will appear to happen after the LOCK + operation with respect to the other components of the system. + + Memory operations that occur before a LOCK operation may appear to happen + after it completes. + + A LOCK operation should almost always be paired with an UNLOCK operation. + + + (6) UNLOCK operations. + + This also acts as a one-way permeable barrier. It guarantees that all + memory operations before the UNLOCK operation will appear to happen before + the UNLOCK operation with respect to the other components of the system. + + Memory operations that occur after an UNLOCK operation may appear to + happen before it completes. + + LOCK and UNLOCK operations are guaranteed to appear with respect to each + other strictly in the order specified. + + The use of LOCK and UNLOCK operations generally precludes the need for + other sorts of memory barrier (but note the exceptions mentioned in the + subsection "MMIO write barrier"). + + +Memory barriers are only required where there's a possibility of interaction +between two CPUs or between a CPU and a device. If it can be guaranteed that +there won't be any such interaction in any particular piece of code, then +memory barriers are unnecessary in that piece of code. + + +Note that these are the _minimum_ guarantees. Different architectures may give +more substantial guarantees, but they may _not_ be relied upon outside of arch +specific code. + + +WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS? +---------------------------------------------- + +There are certain things that the Linux kernel memory barriers do not guarantee: + + (*) There is no guarantee that any of the memory accesses specified before a + memory barrier will be _complete_ by the completion of a memory barrier + instruction; the barrier can be considered to draw a line in that CPU's + access queue that accesses of the appropriate type may not cross. + + (*) There is no guarantee that issuing a memory barrier on one CPU will have + any direct effect on another CPU or any other hardware in the system. The + indirect effect will be the order in which the second CPU sees the effects + of the first CPU's accesses occur, but see the next point: + + (*) There is no guarantee that the a CPU will see the correct order of effects + from a second CPU's accesses, even _if_ the second CPU uses a memory + barrier, unless the first CPU _also_ uses a matching memory barrier (see + the subsection on "SMP Barrier Pairing"). + + (*) There is no guarantee that some intervening piece of off-the-CPU + hardware[*] will not reorder the memory accesses. CPU cache coherency + mechanisms should propagate the indirect effects of a memory barrier + between CPUs, but might not do so in order. + + [*] For information on bus mastering DMA and coherency please read: + + Documentation/pci.txt + Documentation/DMA-mapping.txt + Documentation/DMA-API.txt + + +DATA DEPENDENCY BARRIERS +------------------------ + +The usage requirements of data dependency barriers are a little subtle, and +it's not always obvious that they're needed. 
To illustrate, consider the +following sequence of events: + + CPU 1 CPU 2 + =============== =============== + { A == 1, B == 2, C = 3, P == &A, Q == &C } + B = 4; + <write barrier> + P = &B + Q = P; + D = *Q; + +There's a clear data dependency here, and it would seem that by the end of the +sequence, Q must be either &A or &B, and that: + + (Q == &A) implies (D == 1) + (Q == &B) implies (D == 4) + +But! CPU 2's perception of P may be updated _before_ its perception of B, thus +leading to the following situation: + + (Q == &B) and (D == 2) ???? + +Whilst this may seem like a failure of coherency or causality maintenance, it +isn't, and this behaviour can be observed on certain real CPUs (such as the DEC +Alpha). + +To deal with this, a data dependency barrier must be inserted between the +address load and the data load: + + CPU 1 CPU 2 + =============== =============== + { A == 1, B == 2, C = 3, P == &A, Q == &C } + B = 4; + <write barrier> + P = &B + Q = P; + <data dependency barrier> + D = *Q; + +This enforces the occurrence of one of the two implications, and prevents the +third possibility from arising. + +[!] Note that this extremely counterintuitive situation arises most easily on +machines with split caches, so that, for example, one cache bank processes +even-numbered cache lines and the other bank processes odd-numbered cache +lines. The pointer P might be stored in an odd-numbered cache line, and the +variable B might be stored in an even-numbered cache line. Then, if the +even-numbered bank of the reading CPU's cache is extremely busy while the +odd-numbered bank is idle, one can see the new value of the pointer P (&B), +but the old value of the variable B (1). + + +Another example of where data dependency barriers might be required is where a +number is read from memory and then used to calculate the index for an array +access: + + CPU 1 CPU 2 + =============== =============== + { M[0] == 1, M[1] == 2, M[3] = 3, P == 0, Q == 3 } + M[1] = 4; + <write barrier> + P = 1 + Q = P; + <data dependency barrier> + D = M[Q]; + + +The data dependency barrier is very important to the RCU system, for example. +See rcu_dereference() in include/linux/rcupdate.h. This permits the current +target of an RCU'd pointer to be replaced with a new modified target, without +the replacement target appearing to be incompletely initialised. + +See also the subsection on "Cache Coherency" for a more thorough example. + + +CONTROL DEPENDENCIES +-------------------- + +A control dependency requires a full read memory barrier, not simply a data +dependency barrier to make it work correctly. Consider the following bit of +code: + + q = &a; + if (p) + q = &b; + <data dependency barrier> + x = *q; + +This will not have the desired effect because there is no actual data +dependency, but rather a control dependency that the CPU may short-circuit by +attempting to predict the outcome in advance. In such a case what's actually +required is: + + q = &a; + if (p) + q = &b; + <read barrier> + x = *q; + + +SMP BARRIER PAIRING +------------------- + +When dealing with CPU-CPU interactions, certain types of memory barrier should +always be paired. A lack of appropriate pairing is almost certainly an error. + +A write barrier should always be paired with a data dependency barrier or read +barrier, though a general barrier would also be viable.
Similarly a read +barrier or a data dependency barrier should always be paired with at least an +write barrier, though, again, a general barrier is viable: + + CPU 1 CPU 2 + =============== =============== + a = 1; + + b = 2; x = a; + + y = b; + +Or: + + CPU 1 CPU 2 + =============== =============================== + a = 1; + + b = &a; x = b; + + y = *x; + +Basically, the read barrier always has to be there, even though it can be of +the "weaker" type. + + +EXAMPLES OF MEMORY BARRIER SEQUENCES +------------------------------------ + +Firstly, write barriers act as a partial orderings on store operations. +Consider the following sequence of events: + + CPU 1 + ======================= + STORE A = 1 + STORE B = 2 + STORE C = 3 + + STORE D = 4 + STORE E = 5 + +This sequence of events is committed to the memory coherence system in an order +that the rest of the system might perceive as the unordered set of { STORE A, +STORE B, STORE C } all occuring before the unordered set of { STORE D, STORE E +}: + + +-------+ : : + | | +------+ + | |------>| C=3 | } /\ + | | : +------+ }----- \ -----> Events perceptible + | | : | A=1 | } \/ to rest of system + | | : +------+ } + | CPU 1 | : | B=2 | } + | | +------+ } + | | wwwwwwwwwwwwwwww } <--- At this point the write barrier + | | +------+ } requires all stores prior to the + | | : | E=5 | } barrier to be committed before + | | : +------+ } further stores may be take place. + | |------>| D=4 | } + | | +------+ + +-------+ : : + | + | Sequence in which stores committed to memory system + | by CPU 1 + V + + +Secondly, data dependency barriers act as a partial orderings on data-dependent +loads. Consider the following sequence of events: + + CPU 1 CPU 2 + ======================= ======================= + STORE A = 1 + STORE B = 2 + + STORE C = &B LOAD X + STORE D = 4 LOAD C (gets &B) + LOAD *C (reads B) + +Without intervention, CPU 2 may perceive the events on CPU 1 in some +effectively random order, despite the write barrier issued by CPU 1: + + +-------+ : : : : + | | +------+ +-------+ | Sequence of update + | |------>| B=2 |----- --->| Y->8 | | of perception on + | | : +------+ \ +-------+ | CPU 2 + | CPU 1 | : | A=1 | \ --->| C->&Y | V + | | +------+ | +-------+ + | | wwwwwwwwwwwwwwww | : : + | | +------+ | : : + | | : | C=&B |--- | : : +-------+ + | | : +------+ \ | +-------+ | | + | |------>| D=4 | ----------->| C->&B |------>| | + | | +------+ | +-------+ | | + +-------+ : : | : : | | + | : : | | + | : : | CPU 2 | + | +-------+ | | + Apparently incorrect ---> | | B->7 |------>| | + perception of B (!) | +-------+ | | + | : : | | + | +-------+ | | + The load of X holds ---> \ | X->9 |------>| | + up the maintenance \ +-------+ | | + of coherence of B ----->| B->2 | +-------+ + +-------+ + : : + + +In the above example, CPU 2 perceives that B is 7, despite the load of *C +(which would be B) coming after the the LOAD of C. 
+ +If, however, a data dependency barrier were to be placed between the load of C +and the load of *C (ie: B) on CPU 2, then the following will occur: + + +-------+ : : : : + | | +------+ +-------+ + | |------>| B=2 |----- --->| Y->8 | + | | : +------+ \ +-------+ + | CPU 1 | : | A=1 | \ --->| C->&Y | + | | +------+ | +-------+ + | | wwwwwwwwwwwwwwww | : : + | | +------+ | : : + | | : | C=&B |--- | : : +-------+ + | | : +------+ \ | +-------+ | | + | |------>| D=4 | ----------->| C->&B |------>| | + | | +------+ | +-------+ | | + +-------+ : : | : : | | + | : : | | + | : : | CPU 2 | + | +-------+ | | + \ | X->9 |------>| | + \ +-------+ | | + ----->| B->2 | | | + +-------+ | | + Makes sure all effects ---> ddddddddddddddddd | | + prior to the store of C +-------+ | | + are perceptible to | B->2 |------>| | + successive loads +-------+ | | + : : +-------+ + + +And thirdly, a read barrier acts as a partial order on loads. Consider the +following sequence of events: + + CPU 1 CPU 2 + ======================= ======================= + STORE A=1 + STORE B=2 + STORE C=3 + + STORE D=4 + STORE E=5 + LOAD A + LOAD B + LOAD C + LOAD D + LOAD E + +Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in +some effectively random order, despite the write barrier issued by CPU 1: + + +-------+ : : + | | +------+ + | |------>| C=3 | } + | | : +------+ } + | | : | A=1 | } + | | : +------+ } + | CPU 1 | : | B=2 | }--- + | | +------+ } \ + | | wwwwwwwwwwwww} \ + | | +------+ } \ : : +-------+ + | | : | E=5 | } \ +-------+ | | + | | : +------+ } \ { | C->3 |------>| | + | |------>| D=4 | } \ { +-------+ : | | + | | +------+ \ { | E->5 | : | | + +-------+ : : \ { +-------+ : | | + Transfer -->{ | A->1 | : | CPU 2 | + from CPU 1 { +-------+ : | | + to CPU 2 { | D->4 | : | | + { +-------+ : | | + { | B->2 |------>| | + +-------+ | | + : : +-------+ + + +If, however, a read barrier were to be placed between the load of C and the +load of D on CPU 2, then the partial ordering imposed by CPU 1 will be +perceived correctly by CPU 2. + + +-------+ : : + | | +------+ + | |------>| C=3 | } + | | : +------+ } + | | : | A=1 | }--- + | | : +------+ } \ + | CPU 1 | : | B=2 | } \ + | | +------+ \ + | | wwwwwwwwwwwwwwww \ + | | +------+ \ : : +-------+ + | | : | E=5 | } \ +-------+ | | + | | : +------+ }--- \ { | C->3 |------>| | + | |------>| D=4 | } \ \ { +-------+ : | | + | | +------+ \ -->{ | B->2 | : | | + +-------+ : : \ { +-------+ : | | + \ { | A->1 | : | CPU 2 | + \ +-------+ | | + At this point the read ----> \ rrrrrrrrrrrrrrrrr | | + barrier causes all effects \ +-------+ | | + prior to the storage of C \ { | E->5 | : | | + to be perceptible to CPU 2 -->{ +-------+ : | | + { | D->4 |------>| | + +-------+ | | + : : +-------+ + + +======================== +EXPLICIT KERNEL BARRIERS +======================== + +The Linux kernel has a variety of different barriers that act at different +levels: + + (*) Compiler barrier. + + (*) CPU memory barriers. + + (*) MMIO write barrier. + + +COMPILER BARRIER +---------------- + +The Linux kernel has an explicit compiler barrier function that prevents the +compiler from moving the memory accesses either side of it to the other side: + + barrier(); + +This a general barrier - lesser varieties of compiler barrier do not exist. + +The compiler barrier has no direct effect on the CPU, which may then reorder +things however it wishes. 
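+
+As a purely illustrative sketch (the flag variable and wait_for_flag() are
+invented here, not taken from kernel code), consider a loop polling a flag
+that some other context will set. Without the compiler barrier, the compiler
+would be entitled to load the flag once and spin on a cached copy in a
+register forever:
+
+	static int flag;
+
+	static void wait_for_flag(void)
+	{
+		while (!flag)
+			barrier();	/* force the flag to be reloaded */
+	}
+
+Note that this constrains only the compiler; if the flag is set by another
+CPU, one of the CPU memory barriers described below may also be needed to
+order the accesses around it.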
+ + +CPU MEMORY BARRIERS +------------------- + +The Linux kernel has eight basic CPU memory barriers: + + TYPE MANDATORY SMP CONDITIONAL + =============== ======================= =========================== + GENERAL mb() smp_mb() + WRITE wmb() smp_wmb() + READ rmb() smp_rmb() + DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends() + + +All CPU memory barriers unconditionally imply compiler barriers. + +SMP memory barriers are reduced to compiler barriers on uniprocessor compiled +systems because it is assumed that a CPU will be appear to be self-consistent, +and will order overlapping accesses correctly with respect to itself. + +[!] Note that SMP memory barriers _must_ be used to control the ordering of +references to shared memory on SMP systems, though the use of locking instead +is sufficient. + +Mandatory barriers should not be used to control SMP effects, since mandatory +barriers unnecessarily impose overhead on UP systems. They may, however, be +used to control MMIO effects on accesses through relaxed memory I/O windows. +These are required even on non-SMP systems as they affect the order in which +memory operations appear to a device by prohibiting both the compiler and the +CPU from reordering them. + + +There are some more advanced barrier functions: + + (*) set_mb(var, value) + (*) set_wmb(var, value) + + These assign the value to the variable and then insert at least a write + barrier after it, depending on the function. They aren't guaranteed to + insert anything more than a compiler barrier in a UP compilation. + + + (*) smp_mb__before_atomic_dec(); + (*) smp_mb__after_atomic_dec(); + (*) smp_mb__before_atomic_inc(); + (*) smp_mb__after_atomic_inc(); + + These are for use with atomic add, subtract, increment and decrement + functions, especially when used for reference counting. These functions + do not imply memory barriers. + + As an example, consider a piece of code that marks an object as being dead + and then decrements the object's reference count: + + obj->dead = 1; + smp_mb__before_atomic_dec(); + atomic_dec(&obj->ref_count); + + This makes sure that the death mark on the object is perceived to be set + *before* the reference counter is decremented. + + See Documentation/atomic_ops.txt for more information. See the "Atomic + operations" subsection for information on where to use these. + + + (*) smp_mb__before_clear_bit(void); + (*) smp_mb__after_clear_bit(void); + + These are for use similar to the atomic inc/dec barriers. These are + typically used for bitwise unlocking operations, so care must be taken as + there are no implicit memory barriers here either. + + Consider implementing an unlock operation of some nature by clearing a + locking bit. The clear_bit() would then need to be barriered like this: + + smp_mb__before_clear_bit(); + clear_bit( ... ); + + This prevents memory operations before the clear leaking to after it. See + the subsection on "Locking Functions" with reference to UNLOCK operation + implications. + + See Documentation/atomic_ops.txt for more information. See the "Atomic + operations" subsection for information on where to use these. + + +MMIO WRITE BARRIER +------------------ + +The Linux kernel also has a special barrier for use with memory-mapped I/O +writes: + + mmiowb(); + +This is a variation on the mandatory write barrier that causes writes to weakly +ordered I/O regions to be partially ordered. Its effects may go beyond the +CPU->Hardware interface and actually affect the hardware at some level. 
+ +See the subsection "Locks vs I/O accesses" for more information. + + +=============================== +IMPLICIT KERNEL MEMORY BARRIERS +=============================== + +Some of the other functions in the linux kernel imply memory barriers, amongst +which are locking, scheduling and memory allocation functions. + +This specification is a _minimum_ guarantee; any particular architecture may +provide more substantial guarantees, but these may not be relied upon outside +of arch specific code. + + +LOCKING FUNCTIONS +----------------- + +The Linux kernel has a number of locking constructs: + + (*) spin locks + (*) R/W spin locks + (*) mutexes + (*) semaphores + (*) R/W semaphores + (*) RCU + +In all cases there are variants on "LOCK" operations and "UNLOCK" operations +for each construct. These operations all imply certain barriers: + + (1) LOCK operation implication: + + Memory operations issued after the LOCK will be completed after the LOCK + operation has completed. + + Memory operations issued before the LOCK may be completed after the LOCK + operation has completed. + + (2) UNLOCK operation implication: + + Memory operations issued before the UNLOCK will be completed before the + UNLOCK operation has completed. + + Memory operations issued after the UNLOCK may be completed before the + UNLOCK operation has completed. + + (3) LOCK vs LOCK implication: + + All LOCK operations issued before another LOCK operation will be completed + before that LOCK operation. + + (4) LOCK vs UNLOCK implication: + + All LOCK operations issued before an UNLOCK operation will be completed + before the UNLOCK operation. + + All UNLOCK operations issued before a LOCK operation will be completed + before the LOCK operation. + + (5) Failed conditional LOCK implication: + + Certain variants of the LOCK operation may fail, either due to being + unable to get the lock immediately, or due to receiving an unblocked + signal whilst asleep waiting for the lock to become available. Failed + locks do not imply any sort of barrier. + +Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is +equivalent to a full barrier, but a LOCK followed by an UNLOCK is not. + +[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way + barriers is that the effects of instructions outside of a critical section may + seep into the inside of the critical section. + +Locks and semaphores may not provide any guarantee of ordering on UP compiled +systems, and so cannot be counted on in such a situation to actually achieve +anything at all - especially with respect to I/O accesses - unless combined +with interrupt disabling operations. + +See also the section on "Inter-CPU locking barrier effects". + + +As an example, consider the following: + + *A = a; + *B = b; + LOCK + *C = c; + *D = d; + UNLOCK + *E = e; + *F = f; + +The following sequence of events is acceptable: + + LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK + + [+] Note that {*F,*A} indicates a combined access. + +But none of the following are: + + {*F,*A}, *B, LOCK, *C, *D, UNLOCK, *E + *A, *B, *C, LOCK, *D, UNLOCK, *E, *F + *A, *B, LOCK, *C, UNLOCK, *D, *E, *F + *B, LOCK, *C, *D, UNLOCK, {*F,*A}, *E + + + +INTERRUPT DISABLING FUNCTIONS +----------------------------- + +Functions that disable interrupts (LOCK equivalent) and enable interrupts +(UNLOCK equivalent) will act as compiler barriers only. So if memory or I/O +barriers are required in such a situation, they must be provided by some +other means.
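+
+For instance (a hypothetical sketch - the names shared, flags and the fields
+are invented for illustration), disabling interrupts around a pair of stores
+does nothing to order those stores as seen from another CPU; any SMP barrier
+still has to be supplied explicitly:
+
+	local_irq_save(flags);
+	shared->data = 1;
+	smp_wmb();		/* not implied by the interrupt disable */
+	shared->ready = 1;
+	local_irq_restore(flags);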
+ + +MISCELLANEOUS FUNCTIONS +----------------------- + +Other functions that imply barriers: + + (*) schedule() and similar imply full memory barriers. + + (*) Memory allocation and release functions imply full memory barriers. + + +================================= +INTER-CPU LOCKING BARRIER EFFECTS +================================= + +On SMP systems locking primitives give a more substantial form of barrier: one +that does affect memory access ordering on other CPUs, within the context of +conflict on any particular lock. + + +LOCKS VS MEMORY ACCESSES +------------------------ + +Consider the following: the system has a pair of spinlocks (N) and (Q), and +three CPUs; then should the following sequence of events occur: + + CPU 1 CPU 2 + =============================== =============================== + *A = a; *E = e; + LOCK M LOCK Q + *B = b; *F = f; + *C = c; *G = g; + UNLOCK M UNLOCK Q + *D = d; *H = h; + +Then there is no guarantee as to what order CPU #3 will see the accesses to *A +through *H occur in, other than the constraints imposed by the separate locks +on the separate CPUs. It might, for example, see: + + *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M + +But it won't see any of: + + *B, *C or *D preceding LOCK M + *A, *B or *C following UNLOCK M + *F, *G or *H preceding LOCK Q + *E, *F or *G following UNLOCK Q + + +However, if the following occurs: + + CPU 1 CPU 2 + =============================== =============================== + *A = a; + LOCK M [1] + *B = b; + *C = c; + UNLOCK M [1] + *D = d; *E = e; + LOCK M [2] + *F = f; + *G = g; + UNLOCK M [2] + *H = h; + +CPU #3 might see: + + *E, LOCK M [1], *C, *B, *A, UNLOCK M [1], + LOCK M [2], *H, *F, *G, UNLOCK M [2], *D + +But assuming CPU #1 gets the lock first, it won't see any of: + + *B, *C, *D, *F, *G or *H preceding LOCK M [1] + *A, *B or *C following UNLOCK M [1] + *F, *G or *H preceding LOCK M [2] + *A, *B, *C, *E, *F or *G following UNLOCK M [2] + + +LOCKS VS I/O ACCESSES +--------------------- + +Under certain circumstances (especially involving NUMA), I/O accesses within +two spinlocked sections on two different CPUs may be seen as interleaved by the +PCI bridge, because the PCI bridge does not necessarily participate in the +cache-coherence protocol, and is therefore incapable of issuing the required +read memory barriers. + +For example: + + CPU 1 CPU 2 + =============================== =============================== + spin_lock(Q) + writel(0, ADDR) + writel(1, DATA); + spin_unlock(Q); + spin_lock(Q); + writel(4, ADDR); + writel(5, DATA); + spin_unlock(Q); + +may be seen by the PCI bridge as follows: + + STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5 + +which would probably cause the hardware to malfunction. + + +What is necessary here is to intervene with an mmiowb() before dropping the +spinlock, for example: + + CPU 1 CPU 2 + =============================== =============================== + spin_lock(Q) + writel(0, ADDR) + writel(1, DATA); + mmiowb(); + spin_unlock(Q); + spin_lock(Q); + writel(4, ADDR); + writel(5, DATA); + mmiowb(); + spin_unlock(Q); + +this will ensure that the two stores issued on CPU #1 appear at the PCI bridge +before either of the stores issued on CPU #2. 
+ +Furthermore, following a store by a load to the same device obviates the need +for an mmiowb(), because the load forces the store to complete before the load +is performed: + + CPU 1 CPU 2 + =============================== =============================== + spin_lock(Q) + writel(0, ADDR) + a = readl(DATA); + spin_unlock(Q); + spin_lock(Q); + writel(4, ADDR); + b = readl(DATA); + spin_unlock(Q); + + +See Documentation/DocBook/deviceiobook.tmpl for more information. + + +================================= +WHERE ARE MEMORY BARRIERS NEEDED? +================================= + +Under normal operation, memory operation reordering is generally not going to +be a problem as a single-threaded linear piece of code will still appear to +work correctly, even if it's in an SMP kernel. There are, however, four +circumstances in which reordering definitely _could_ be a problem: + + (*) Interprocessor interaction. + + (*) Atomic operations. + + (*) Accessing devices (I/O). + + (*) Interrupts. + + +INTERPROCESSOR INTERACTION +-------------------------- + +When there's a system with more than one processor, more than one CPU in the +system may be working on the same data set at the same time. This can cause +synchronisation problems, and the usual way of dealing with them is to use +locks. Locks, however, are quite expensive, and so it may be preferable to +operate without the use of a lock if at all possible. In such a case +operations that affect both CPUs may have to be carefully ordered to prevent +a malfunction. + +Consider, for example, the R/W semaphore slow path. Here a waiting process is +queued on the semaphore, by virtue of it having a piece of its stack linked to +the semaphore's list of waiting processes: + + struct rw_semaphore { + ... + spinlock_t lock; + struct list_head waiters; + }; + + struct rwsem_waiter { + struct list_head list; + struct task_struct *task; + }; + +To wake up a particular waiter, the up_read() or up_write() functions have to: + + (1) read the next pointer from this waiter's record to know where the + next waiter record is; + + (2) read the pointer to the waiter's task structure; + + (3) clear the task pointer to tell the waiter it has been given the semaphore; + + (4) call wake_up_process() on the task; and + + (5) release the reference held on the waiter's task struct. + +In other words, it has to perform this sequence of events: + + LOAD waiter->list.next; + LOAD waiter->task; + STORE waiter->task; + CALL wakeup + RELEASE task + +and if any of these steps occur out of order, then the whole thing may +malfunction. + +Once it has queued itself and dropped the semaphore lock, the waiter does not +get the lock again; it instead just waits for its task pointer to be cleared +before proceeding. Since the record is on the waiter's stack, this means that +if the task pointer is cleared _before_ the next pointer in the list is read, +another CPU might start processing the waiter and might clobber the waiter's +stack before the up*() function has a chance to read the next pointer.
+ +Consider then what might happen to the above sequence of events: + + CPU 1 CPU 2 + =============================== =============================== + down_xxx() + Queue waiter + Sleep + up_yyy() + LOAD waiter->task; + STORE waiter->task; + Woken up by other event + + Resume processing + down_xxx() returns + call foo() + foo() clobbers *waiter + + LOAD waiter->list.next; + --- OOPS --- + +This could be dealt with using the semaphore lock, but then the down_xxx() +function has to needlessly get the spinlock again after being woken up. + +The way to deal with this is to insert a general SMP memory barrier: + + LOAD waiter->list.next; + LOAD waiter->task; + smp_mb(); + STORE waiter->task; + CALL wakeup + RELEASE task + +In this case, the barrier makes a guarantee that all memory accesses before the +barrier will appear to happen before all the memory accesses after the barrier +with respect to the other CPUs on the system. It does _not_ guarantee that all +the memory accesses before the barrier will be complete by the time the barrier +instruction itself is complete. + +On a UP system - where this wouldn't be a problem - the smp_mb() is just a +compiler barrier, thus making sure the compiler emits the instructions in the +right order without actually intervening in the CPU. Since there there's only +one CPU, that CPU's dependency ordering logic will take care of everything +else. + + +ATOMIC OPERATIONS +----------------- + +Though they are technically interprocessor interaction considerations, atomic +operations are noted specially as they do _not_ generally imply memory +barriers. The possible offenders include: + + xchg(); + cmpxchg(); + test_and_set_bit(); + test_and_clear_bit(); + test_and_change_bit(); + atomic_cmpxchg(); + atomic_inc_return(); + atomic_dec_return(); + atomic_add_return(); + atomic_sub_return(); + atomic_inc_and_test(); + atomic_dec_and_test(); + atomic_sub_and_test(); + atomic_add_negative(); + atomic_add_unless(); + +These may be used for such things as implementing LOCK operations or controlling +the lifetime of objects by decreasing their reference counts. In such cases +they need preceding memory barriers. + +The following may also be possible offenders as they may be used as UNLOCK +operations. + + set_bit(); + clear_bit(); + change_bit(); + atomic_set(); + + +The following are a little tricky: + + atomic_add(); + atomic_sub(); + atomic_inc(); + atomic_dec(); + +If they're used for statistics generation, then they probably don't need memory +barriers, unless there's a coupling between statistical data. + +If they're used for reference counting on an object to control its lifetime, +they probably don't need memory barriers because either the reference count +will be adjusted inside a locked section, or the caller will already hold +sufficient references to make the lock, and thus a memory barrier unnecessary. + +If they're used for constructing a lock of some description, then they probably +do need memory barriers as a lock primitive generally has to do things in a +specific order. + + +Basically, each usage case has to be carefully considered as to whether memory +barriers are needed or not. The simplest rule is probably: if the atomic +operation is protected by a lock, then it does not require a barrier unless +there's another operation within the critical section with respect to which an +ordering must be maintained. + +See Documentation/atomic_ops.txt for more information. 
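+
+As a hypothetical sketch (lock_word is an invented unsigned long; this is not
+a recommended locking primitive), a hand-rolled bit lock built on
+test_and_set_bit() and clear_bit() would, following the rules above, need
+explicit barriers so that the critical section cannot leak outside of it:
+
+	while (test_and_set_bit(0, &lock_word))		/* LOCK */
+		cpu_relax();
+	smp_mb();
+
+	/* ... critical section ... */
+
+	smp_mb__before_clear_bit();
+	clear_bit(0, &lock_word);			/* UNLOCK */
+
+On architectures where test_and_set_bit() already implies a barrier the
+explicit smp_mb() is merely redundant, so the sketch errs on the side of
+caution.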
+ +ACCESSING DEVICES +----------------- + +Many devices can be memory mapped, and so appear to the CPU as if they're just +a set of memory locations. To control such a device, the driver usually has to +make the right memory accesses in exactly the right order. + +However, having a clever CPU or a clever compiler creates a potential problem +in that the carefully sequenced accesses in the driver code won't reach the +device in the requisite order if the CPU or the compiler thinks it is more +efficient to reorder, combine or merge accesses - something that would cause +the device to malfunction. + +Inside of the Linux kernel, I/O should be done through the appropriate accessor +routines - such as inb() or writel() - which know how to make such accesses +appropriately sequential. Whilst this, for the most part, renders the explicit +use of memory barriers unnecessary, there are a couple of situations where they +might be needed: + + (1) On some systems, I/O stores are not strongly ordered across all CPUs, and + so for _all_ general drivers locks should be used and mmiowb() must be + issued prior to unlocking the critical section. + + (2) If the accessor functions are used to refer to an I/O memory window with + relaxed memory access properties, then _mandatory_ memory barriers are + required to enforce ordering. + +See Documentation/DocBook/deviceiobook.tmpl for more information. + + +INTERRUPTS +---------- + +A driver may be interrupted by its own interrupt service routine, and thus the +two parts of the driver may interfere with each other's attempts to control or +access the device. + +This may be alleviated - at least in part - by disabling local interrupts (a +form of locking), such that the critical operations are all contained within +the interrupt-disabled section in the driver. Whilst the driver's interrupt +routine is executing, the driver's core may not run on the same CPU, and its +interrupt is not permitted to happen again until the current interrupt has been +handled, thus the interrupt handler does not need to lock against that. + +However, consider a driver that is talking to an ethernet card that sports an +address register and a data register. If that driver's core talks to the card +under interrupt-disablement and then the driver's interrupt handler is invoked: + + LOCAL IRQ DISABLE + writew(3, ADDR); + writew(y, DATA); + LOCAL IRQ ENABLE + <interrupt> + writew(4, ADDR); + q = readw(DATA); + </interrupt> + +The store to the data register might happen after the second store to the +address register if ordering rules are sufficiently relaxed: + + STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA + + +If ordering rules are relaxed, it must be assumed that accesses done inside an +interrupt disabled section may leak outside of it and may interleave with +accesses performed in an interrupt - and vice versa - unless implicit or +explicit barriers are used. + +Normally this won't be a problem because the I/O accesses done inside such +sections will include synchronous load operations on strictly ordered I/O +registers that form implicit I/O barriers. If this isn't sufficient then an +mmiowb() may need to be used explicitly. + + +A similar situation may occur between an interrupt routine and two routines +running on separate CPUs that communicate with each other. If such a case is +likely, then interrupt-disabling locks should be used to guarantee ordering.
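+
+As an illustration (the lock, the device structure and its register fields
+are invented names), such a lock would be taken with spin_lock_irqsave() in
+the driver's core and with plain spin_lock() in the interrupt handler, so
+that accesses to the address/data pair from either path appear in order to
+one another whichever CPU the handler runs on:
+
+	spin_lock_irqsave(&dev->lock, flags);
+	writew(3, dev->addr_reg);
+	writew(y, dev->data_reg);
+	mmiowb();	/* may also be needed; see point (1) above */
+	spin_unlock_irqrestore(&dev->lock, flags);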
+ +========================== +KERNEL I/O BARRIER EFFECTS +========================== + +When accessing I/O memory, drivers should use the appropriate accessor +functions: + + (*) inX(), outX(): + + These are intended to talk to I/O space rather than memory space, but + that's primarily a CPU-specific concept. The i386 and x86_64 processors do + indeed have special I/O space access cycles and instructions, but many + CPUs don't have such a concept. + + The PCI bus, amongst others, defines an I/O space concept - which on such + CPUs as i386 and x86_64 readily maps to the CPU's concept of I/O + space. However, it may also be mapped as a virtual I/O space in the CPU's + memory map, particularly on those CPUs that don't support alternate + I/O spaces. + + Accesses to this space may be fully synchronous (as on i386), but + intermediary bridges (such as the PCI host bridge) may not fully honour + that. + + They are guaranteed to be fully ordered with respect to each other. + + They are not guaranteed to be fully ordered with respect to other types of + memory and I/O operation. + + (*) readX(), writeX(): + + Whether these are guaranteed to be fully ordered and uncombined with + respect to each other on the issuing CPU depends on the characteristics + defined for the memory window through which they're accessing. On later + i386 architecture machines, for example, this is controlled by way of the + MTRR registers. + + Ordinarily, these will be guaranteed to be fully ordered and uncombined, + provided they're not accessing a prefetchable device. + + However, intermediary hardware (such as a PCI bridge) may indulge in + deferral if it so wishes; to flush a store, a load from the same location + is preferred[*], but a load from the same device or from configuration + space should suffice for PCI. + + [*] NOTE! attempting to load from the same location as was written to may + cause a malfunction - consider the 16550 Rx/Tx serial registers for + example. + + Used with prefetchable I/O memory, an mmiowb() barrier may be required to + force stores to be ordered. + + Please refer to the PCI specification for more information on interactions + between PCI transactions. + + (*) readX_relaxed() + + These are similar to readX(), but are not guaranteed to be ordered in any + way. Be aware that there is no I/O read barrier available. + + (*) ioreadX(), iowriteX() + + These will perform as appropriate for the type of access they're actually + doing, be it inX()/outX() or readX()/writeX(). + + +======================================== +ASSUMED MINIMUM EXECUTION ORDERING MODEL +======================================== + +It has to be assumed that the conceptual CPU is weakly-ordered but that it will +maintain the appearance of program causality with respect to itself. Some CPUs +(such as i386 or x86_64) are more constrained than others (such as powerpc or +frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside +of arch-specific code. + +This means that it must be considered that the CPU will execute its instruction +stream in any order it feels like - or even in parallel - provided that if an +instruction in the stream depends on an earlier instruction, then that +earlier instruction must be sufficiently complete[*] before the later +instruction may proceed; in other words: provided that the appearance of +causality is maintained.
+ + [*] Some instructions have more than one effect - such as changing the + condition codes, changing registers or changing memory - and different + instructions may depend on different effects. + +A CPU may also discard any instruction sequence that winds up having no +ultimate effect. For example, if two adjacent instructions both load an +immediate value into the same register, the first may be discarded. + + +Similarly, it has to be assumed that compiler might reorder the instruction +stream in any way it sees fit, again provided the appearance of causality is +maintained. + + +============================ +THE EFFECTS OF THE CPU CACHE +============================ + +The way cached memory operations are perceived across the system is affected to +a certain extent by the caches that lie between CPUs and memory, and by the +memory coherence system that maintains the consistency of state in the system. + +As far as the way a CPU interacts with another part of the system through the +caches goes, the memory system has to include the CPU's caches, and memory +barriers for the most part act at the interface between the CPU and its cache +(memory barriers logically act on the dotted line in the following diagram): + + <--- CPU ---> : <----------- Memory -----------> + : + +--------+ +--------+ : +--------+ +-----------+ + | | | | : | | | | +--------+ + | CPU | | Memory | : | CPU | | | | | + | Core |--->| Access |----->| Cache |<-->| | | | + | | | Queue | : | | | |--->| Memory | + | | | | : | | | | | | + +--------+ +--------+ : +--------+ | | | | + : | Cache | +--------+ + : | Coherency | + : | Mechanism | +--------+ + +--------+ +--------+ : +--------+ | | | | + | | | | : | | | | | | + | CPU | | Memory | : | CPU | | |--->| Device | + | Core |--->| Access |----->| Cache |<-->| | | | + | | | Queue | : | | | | | | + | | | | : | | | | +--------+ + +--------+ +--------+ : +--------+ +-----------+ + : + : + +Although any particular load or store may not actually appear outside of the +CPU that issued it since it may have been satisfied within the CPU's own cache, +it will still appear as if the full memory access had taken place as far as the +other CPUs are concerned since the cache coherency mechanisms will migrate the +cacheline over to the accessing CPU and propagate the effects upon conflict. + +The CPU core may execute instructions in any order it deems fit, provided the +expected program causality appears to be maintained. Some of the instructions +generate load and store operations which then go into the queue of memory +accesses to be performed. The core may place these in the queue in any order +it wishes, and continue execution until it is forced to wait for an instruction +to complete. + +What memory barriers are concerned with is controlling the order in which +accesses cross from the CPU side of things to the memory side of things, and +the order in which the effects are perceived to happen by the other observers +in the system. + +[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see +their own loads and stores as if they had happened in program order. + +[!] MMIO or other device accesses may bypass the cache system. This depends on +the properties of the memory window through which devices are accessed and/or +the use of any special device communication instructions the CPU may have. 
+ + +CACHE COHERENCY +--------------- + +Life isn't quite as simple as it may appear above, however: for while the +caches are expected to be coherent, there's no guarantee that that coherency +will be ordered. This means that whilst changes made on one CPU will +eventually become visible on all CPUs, there's no guarantee that they will +become apparent in the same order on those other CPUs. + + +Consider dealing with a system that has pair of CPUs (1 & 2), each of which has +a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): + + : + : +--------+ + : +---------+ | | + +--------+ : +--->| Cache A |<------->| | + | | : | +---------+ | | + | CPU 1 |<---+ | | + | | : | +---------+ | | + +--------+ : +--->| Cache B |<------->| | + : +---------+ | | + : | Memory | + : +---------+ | System | + +--------+ : +--->| Cache C |<------->| | + | | : | +---------+ | | + | CPU 2 |<---+ | | + | | : | +---------+ | | + +--------+ : +--->| Cache D |<------->| | + : +---------+ | | + : +--------+ + : + +Imagine the system has the following properties: + + (*) an odd-numbered cache line may be in cache A, cache C or it may still be + resident in memory; + + (*) an even-numbered cache line may be in cache B, cache D or it may still be + resident in memory; + + (*) whilst the CPU core is interrogating one cache, the other cache may be + making use of the bus to access the rest of the system - perhaps to + displace a dirty cacheline or to do a speculative load; + + (*) each cache has a queue of operations that need to be applied to that cache + to maintain coherency with the rest of the system; + + (*) the coherency queue is not flushed by normal loads to lines already + present in the cache, even though the contents of the queue may + potentially effect those loads. + +Imagine, then, that two writes are made on the first CPU, with a write barrier +between them to guarantee that they will appear to reach that CPU's caches in +the requisite order: + + CPU 1 CPU 2 COMMENT + =============== =============== ======================================= + u == 0, v == 1 and p == &u, q == &u + v = 2; + smp_wmb(); Make sure change to v visible before + change to p + v is now in cache A exclusively + p = &v; + p is now in cache B exclusively + +The write memory barrier forces the other CPUs in the system to perceive that +the local CPU's caches have apparently been updated in the correct order. But +now imagine that the second CPU that wants to read those values: + + CPU 1 CPU 2 COMMENT + =============== =============== ======================================= + ... + q = p; + x = *q; + +The above pair of reads may then fail to happen in expected order, as the +cacheline holding p may get updated in one of the second CPU's caches whilst +the update to the cacheline holding v is delayed in the other of the second +CPU's caches by some other cache event: + + CPU 1 CPU 2 COMMENT + =============== =============== ======================================= + u == 0, v == 1 and p == &u, q == &u + v = 2; + smp_wmb(); + + + p = &b; q = p; + + + + x = *q; + Reads from v before v updated in cache + + + +Basically, whilst both cachelines will be updated on CPU 2 eventually, there's +no guarantee that, without intervention, the order of update will be the same +as that committed on CPU 1. + + +To intervene, we need to interpolate a data dependency barrier or a read +barrier between the loads. 
This will force the cache to commit its coherency +queue before processing any further requests: + + CPU 1 CPU 2 COMMENT + =============== =============== ======================================= + u == 0, v == 1 and p == &u, q == &u + v = 2; + smp_wmb(); + + + p = &b; q = p; + + + + smp_read_barrier_depends() + + + x = *q; + Reads from v after v updated in cache + + +This sort of problem can be encountered on DEC Alpha processors as they have a +split cache that improves performance by making better use of the data bus. +Whilst most CPUs do imply a data dependency barrier on the read when a memory +access depends on a read, not all do, so it may not be relied on. + +Other CPUs may also have split caches, but must coordinate between the various +cachelets for normal memory accesss. The semantics of the Alpha removes the +need for coordination in absence of memory barriers. + + +CACHE COHERENCY VS DMA +---------------------- + +Not all systems maintain cache coherency with respect to devices doing DMA. In +such cases, a device attempting DMA may obtain stale data from RAM because +dirty cache lines may be resident in the caches of various CPUs, and may not +have been written back to RAM yet. To deal with this, the appropriate part of +the kernel must flush the overlapping bits of cache on each CPU (and maybe +invalidate them as well). + +In addition, the data DMA'd to RAM by a device may be overwritten by dirty +cache lines being written back to RAM from a CPU's cache after the device has +installed its own data, or cache lines simply present in a CPUs cache may +simply obscure the fact that RAM has been updated, until at such time as the +cacheline is discarded from the CPU's cache and reloaded. To deal with this, +the appropriate part of the kernel must invalidate the overlapping bits of the +cache on each CPU. + +See Documentation/cachetlb.txt for more information on cache management. + + +CACHE COHERENCY VS MMIO +----------------------- + +Memory mapped I/O usually takes place through memory locations that are part of +a window in the CPU's memory space that have different properties assigned than +the usual RAM directed window. + +Amongst these properties is usually the fact that such accesses bypass the +caching entirely and go directly to the device buses. This means MMIO accesses +may, in effect, overtake accesses to cached memory that were emitted earlier. +A memory barrier isn't sufficient in such a case, but rather the cache must be +flushed between the cached memory write and the MMIO access if the two are in +any way dependent. + + +========================= +THE THINGS CPUS GET UP TO +========================= + +A programmer might take it for granted that the CPU will perform memory +operations in exactly the order specified, so that if a CPU is, for example, +given the following piece of code to execute: + + a = *A; + *B = b; + c = *C; + d = *D; + *E = e; + +They would then expect that the CPU will complete the memory operation for each +instruction before moving on to the next one, leading to a definite sequence of +operations as seen by external observers in the system: + + LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. + + +Reality is, of course, much messier. 
With many CPUs and compilers, the above
+assumption doesn't hold because:
+
+ (*) loads are more likely to need to be completed immediately to permit
+     execution progress, whereas stores can often be deferred without a
+     problem;
+
+ (*) loads may be done speculatively, and the result discarded should it prove
+     to have been unnecessary;
+
+ (*) loads may be done speculatively, leading to the result having been
+     fetched at the wrong time in the expected sequence of events;
+
+ (*) the order of the memory accesses may be rearranged to promote better use
+     of the CPU buses and caches;
+
+ (*) loads and stores may be combined to improve performance when talking to
+     memory or I/O hardware that can do batched accesses of adjacent locations,
+     thus cutting down on transaction setup costs (memory and PCI devices may
+     both be able to do this); and
+
+ (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
+     mechanisms may alleviate this - once the store has actually hit the cache
+     - there's no guarantee that the coherency management will be propagated in
+     order to other CPUs.
+
+So what another CPU, say, might actually observe from the above piece of code
+is:
+
+	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
+
+	(Where "LOAD {*C,*D}" is a combined load)
+
+
+However, it is guaranteed that a CPU will be self-consistent: it will see its
+_own_ accesses appear to be correctly ordered, without the need for a memory
+barrier. For instance with the following code:
+
+	U = *A;
+	*A = V;
+	*A = W;
+	X = *A;
+	*A = Y;
+	Z = *A;
+
+and assuming no intervention by an external influence, it can be assumed that
+the final result will appear to be:
+
+	U == the original value of *A
+	X == W
+	Z == Y
+	*A == Y
+
+The code above may cause the CPU to generate the full sequence of memory
+accesses:
+
+	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
+
+in that order, but, without intervention, the sequence may have almost any
+combination of elements combined or discarded, provided the program's view of
+the world remains consistent.
+
+The compiler may also combine, discard or defer elements of the sequence before
+the CPU even sees them.
+
+For instance:
+
+	*A = V;
+	*A = W;
+
+may be reduced to:
+
+	*A = W;
+
+since, without a write barrier, it can be assumed that the effect of the
+storage of V to *A is lost. Similarly:
+
+	*A = Y;
+	Z = *A;
+
+may, without a memory barrier, be reduced to:
+
+	*A = Y;
+	Z = Y;
+
+and the LOAD operation need never appear outside of the CPU.
+
+
+AND THEN THERE'S THE ALPHA
+--------------------------
+
+The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
+some versions of the Alpha CPU have a split data cache, permitting them to have
+two semantically related cache lines updated at separate times. This is where
+the data dependency barrier really becomes necessary, as this synchronises both
+caches with the memory coherence system, thus making it seem like pointer
+changes vs new data occur in the right order.
+
+The Alpha defines the Linux kernel's memory barrier model.
+
+See the subsection on "Cache Coherency" above.
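Pulling the earlier examples together, the pattern that the Alpha's split cache
makes mandatory is, roughly sketched, a write barrier on the CPU publishing a
pointer paired with a data dependency barrier on the CPU following it (using
the variables from the "Cache Coherency" example, where u == 0, v == 1 and
p == &u initially):

	CPU 1				CPU 2
	===============			===============
	v = 2;
	smp_wmb();
	p = &v;				q = p;
					smp_read_barrier_depends();
					x = *q;

Without the smp_read_barrier_depends(), CPU 2 may see the new value of p whilst
still seeing the old value of v; with it, the coherency queue is processed
before the dependent load, so if q is seen to be &v then x reads 2.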
+ + +========== +REFERENCES +========== + +Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, +Digital Press) + Chapter 5.2: Physical Address Space Characteristics + Chapter 5.4: Caches and Write Buffers + Chapter 5.5: Data Sharing + Chapter 5.6: Read/Write Ordering + +AMD64 Architecture Programmer's Manual Volume 2: System Programming + Chapter 7.1: Memory-Access Ordering + Chapter 7.4: Buffering and Combining Memory Writes + +IA-32 Intel Architecture Software Developer's Manual, Volume 3: +System Programming Guide + Chapter 7.1: Locked Atomic Operations + Chapter 7.2: Memory Ordering + Chapter 7.4: Serializing Instructions + +The SPARC Architecture Manual, Version 9 + Chapter 8: Memory Models + Appendix D: Formal Specification of the Memory Models + Appendix J: Programming with the Memory Models + +UltraSPARC Programmer Reference Manual + Chapter 5: Memory Accesses and Cacheability + Chapter 15: Sparc-V9 Memory Models + +UltraSPARC III Cu User's Manual + Chapter 9: Memory Models + +UltraSPARC IIIi Processor User's Manual + Chapter 8: Memory Models + +UltraSPARC Architecture 2005 + Chapter 9: Memory + Appendix D: Formal Specifications of the Memory Models + +UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 + Chapter 8: Memory Models + Appendix F: Caches and Cache Coherency + +Solaris Internals, Core Kernel Architecture, p63-68: + Chapter 3.3: Hardware Considerations for Locks and + Synchronization + +Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching +for Kernel Programmers: + Chapter 13: Other Memory Models + +Intel Itanium Architecture Software Developer's Manual: Volume 1: + Section 2.6: Speculation + Section 4.4: Memory Access diff --git a/trunk/arch/alpha/kernel/alpha_ksyms.c b/trunk/arch/alpha/kernel/alpha_ksyms.c index 1898ea79d0e2..9d6186d50245 100644 --- a/trunk/arch/alpha/kernel/alpha_ksyms.c +++ b/trunk/arch/alpha/kernel/alpha_ksyms.c @@ -216,8 +216,6 @@ EXPORT_SYMBOL(memcpy); EXPORT_SYMBOL(memset); EXPORT_SYMBOL(memchr); -EXPORT_SYMBOL(get_wchan); - #ifdef CONFIG_ALPHA_IRONGATE EXPORT_SYMBOL(irongate_ioremap); EXPORT_SYMBOL(irongate_iounmap); diff --git a/trunk/arch/alpha/kernel/core_marvel.c b/trunk/arch/alpha/kernel/core_marvel.c index 44866cb26a80..7f6a98455e74 100644 --- a/trunk/arch/alpha/kernel/core_marvel.c +++ b/trunk/arch/alpha/kernel/core_marvel.c @@ -435,7 +435,7 @@ marvel_specify_io7(char *str) str = pchar; } while(*str); - return 0; + return 1; } __setup("io7=", marvel_specify_io7); diff --git a/trunk/arch/alpha/kernel/setup.c b/trunk/arch/alpha/kernel/setup.c index dd8769670596..a15e18a00258 100644 --- a/trunk/arch/alpha/kernel/setup.c +++ b/trunk/arch/alpha/kernel/setup.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -1478,3 +1479,20 @@ alpha_panic_event(struct notifier_block *this, unsigned long event, void *ptr) #endif return NOTIFY_DONE; } + +static __init int add_pcspkr(void) +{ + struct platform_device *pd; + int ret; + + pd = platform_device_alloc("pcspkr", -1); + if (!pd) + return -ENOMEM; + + ret = platform_device_add(pd); + if (ret) + platform_device_put(pd); + + return ret; +} +device_initcall(add_pcspkr); diff --git a/trunk/arch/arm/Kconfig b/trunk/arch/arm/Kconfig index ba46d779ede7..dc5a9332c915 100644 --- a/trunk/arch/arm/Kconfig +++ b/trunk/arch/arm/Kconfig @@ -77,6 +77,14 @@ config FIQ config ARCH_MTD_XIP bool +config VECTORS_BASE + hex + default 0xffff0000 if MMU + default DRAM_BASE if REMAP_VECTORS_TO_RAM + default 0x00000000 + help + The base 
address of exception vectors. + source "init/Kconfig" menu "System Type" @@ -839,6 +847,8 @@ source "drivers/misc/Kconfig" source "drivers/mfd/Kconfig" +source "drivers/leds/Kconfig" + source "drivers/media/Kconfig" source "drivers/video/Kconfig" diff --git a/trunk/arch/arm/Kconfig-nommu b/trunk/arch/arm/Kconfig-nommu new file mode 100644 index 000000000000..e1574be2ded6 --- /dev/null +++ b/trunk/arch/arm/Kconfig-nommu @@ -0,0 +1,44 @@ +# +# Kconfig for uClinux(non-paged MM) depend configurations +# Hyok S. Choi +# + +config SET_MEM_PARAM + bool "Set flash/sdram size and base addr" + help + Say Y to manually set the base addresses and sizes. + otherwise, the default values are assigned. + +config DRAM_BASE + hex '(S)DRAM Base Address' if SET_MEM_PARAM + default 0x00800000 + +config DRAM_SIZE + hex '(S)DRAM SIZE' if SET_MEM_PARAM + default 0x00800000 + +config FLASH_MEM_BASE + hex 'FLASH Base Address' if SET_MEM_PARAM + default 0x00400000 + +config FLASH_SIZE + hex 'FLASH Size' if SET_MEM_PARAM + default 0x00400000 + +config REMAP_VECTORS_TO_RAM + bool 'Install vectors to the begining of RAM' if DRAM_BASE + depends on DRAM_BASE + help + The kernel needs to change the hardware exception vectors. + In nommu mode, the hardware exception vectors are normally + placed at address 0x00000000. However, this region may be + occupied by read-only memory depending on H/W design. + + If the region contains read-write memory, say 'n' here. + + If your CPU provides a remap facility which allows the exception + vectors to be mapped to writable memory, say 'n' here. + + Otherwise, say 'y' here. In this case, the kernel will require + external support to redirect the hardware exception vectors to + the writable versions located at DRAM_BASE. diff --git a/trunk/arch/arm/Makefile b/trunk/arch/arm/Makefile index ce3e804ea0f3..95a96275f88a 100644 --- a/trunk/arch/arm/Makefile +++ b/trunk/arch/arm/Makefile @@ -20,6 +20,11 @@ GZFLAGS :=-9 # Select a platform tht is kept up-to-date KBUILD_DEFCONFIG := versatile_defconfig +# defines filename extension depending memory manement type. +ifeq ($(CONFIG_MMU),) +MMUEXT := -nommu +endif + ifeq ($(CONFIG_FRAME_POINTER),y) CFLAGS +=-fno-omit-frame-pointer -mapcs -mno-sched-prolog endif @@ -73,7 +78,7 @@ AFLAGS +=$(CFLAGS_ABI) $(arch-y) $(tune-y) -msoft-float CHECKFLAGS += -D__arm__ #Default value -head-y := arch/arm/kernel/head.o arch/arm/kernel/init_task.o +head-y := arch/arm/kernel/head$(MMUEXT).o arch/arm/kernel/init_task.o textofs-y := 0x00008000 machine-$(CONFIG_ARCH_RPC) := rpc @@ -133,7 +138,7 @@ else MACHINE := endif -export TEXT_OFFSET GZFLAGS +export TEXT_OFFSET GZFLAGS MMUEXT # Do we have FASTFPE? FASTFPE :=arch/arm/fastfpe diff --git a/trunk/arch/arm/boot/compressed/head.S b/trunk/arch/arm/boot/compressed/head.S index 491c7e4c9ac6..b56f5e691d65 100644 --- a/trunk/arch/arm/boot/compressed/head.S +++ b/trunk/arch/arm/boot/compressed/head.S @@ -2,6 +2,7 @@ * linux/arch/arm/boot/compressed/head.S * * Copyright (C) 1996-2002 Russell King + * Copyright (C) 2004 Hyok S. Choi (MPU support) * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as @@ -320,6 +321,62 @@ params: ldr r0, =params_phys cache_on: mov r3, #8 @ cache_on function b call_cache_fn +/* + * Initialize the highest priority protection region, PR7 + * to cover all 32bit address and cacheable and bufferable. 
+ */ +__armv4_mpu_cache_on: + mov r0, #0x3f @ 4G, the whole + mcr p15, 0, r0, c6, c7, 0 @ PR7 Area Setting + mcr p15, 0, r0, c6, c7, 1 + + mov r0, #0x80 @ PR7 + mcr p15, 0, r0, c2, c0, 0 @ D-cache on + mcr p15, 0, r0, c2, c0, 1 @ I-cache on + mcr p15, 0, r0, c3, c0, 0 @ write-buffer on + + mov r0, #0xc000 + mcr p15, 0, r0, c5, c0, 1 @ I-access permission + mcr p15, 0, r0, c5, c0, 0 @ D-access permission + + mov r0, #0 + mcr p15, 0, r0, c7, c10, 4 @ drain write buffer + mcr p15, 0, r0, c7, c5, 0 @ flush(inval) I-Cache + mcr p15, 0, r0, c7, c6, 0 @ flush(inval) D-Cache + mrc p15, 0, r0, c1, c0, 0 @ read control reg + @ ...I .... ..D. WC.M + orr r0, r0, #0x002d @ .... .... ..1. 11.1 + orr r0, r0, #0x1000 @ ...1 .... .... .... + + mcr p15, 0, r0, c1, c0, 0 @ write control reg + + mov r0, #0 + mcr p15, 0, r0, c7, c5, 0 @ flush(inval) I-Cache + mcr p15, 0, r0, c7, c6, 0 @ flush(inval) D-Cache + mov pc, lr + +__armv3_mpu_cache_on: + mov r0, #0x3f @ 4G, the whole + mcr p15, 0, r0, c6, c7, 0 @ PR7 Area Setting + + mov r0, #0x80 @ PR7 + mcr p15, 0, r0, c2, c0, 0 @ cache on + mcr p15, 0, r0, c3, c0, 0 @ write-buffer on + + mov r0, #0xc000 + mcr p15, 0, r0, c5, c0, 0 @ access permission + + mov r0, #0 + mcr p15, 0, r0, c7, c0, 0 @ invalidate whole cache v3 + mrc p15, 0, r0, c1, c0, 0 @ read control reg + @ .... .... .... WC.M + orr r0, r0, #0x000d @ .... .... .... 11.1 + mov r0, #0 + mcr p15, 0, r0, c1, c0, 0 @ write control reg + + mcr p15, 0, r0, c7, c0, 0 @ invalidate whole cache v3 + mov pc, lr + __setup_mmu: sub r3, r4, #16384 @ Page directory size bic r3, r3, #0xff @ Align the pointer bic r3, r3, #0x3f00 @@ -496,6 +553,18 @@ proc_types: b __armv4_mmu_cache_off mov pc, lr + .word 0x41007400 @ ARM74x + .word 0xff00ff00 + b __armv3_mpu_cache_on + b __armv3_mpu_cache_off + b __armv3_mpu_cache_flush + + .word 0x41009400 @ ARM94x + .word 0xff00ff00 + b __armv4_mpu_cache_on + b __armv4_mpu_cache_off + b __armv4_mpu_cache_flush + .word 0x00007000 @ ARM7 IDs .word 0x0000f000 mov pc, lr @@ -562,6 +631,24 @@ proc_types: cache_off: mov r3, #12 @ cache_off function b call_cache_fn +__armv4_mpu_cache_off: + mrc p15, 0, r0, c1, c0 + bic r0, r0, #0x000d + mcr p15, 0, r0, c1, c0 @ turn MPU and cache off + mov r0, #0 + mcr p15, 0, r0, c7, c10, 4 @ drain write buffer + mcr p15, 0, r0, c7, c6, 0 @ flush D-Cache + mcr p15, 0, r0, c7, c5, 0 @ flush I-Cache + mov pc, lr + +__armv3_mpu_cache_off: + mrc p15, 0, r0, c1, c0 + bic r0, r0, #0x000d + mcr p15, 0, r0, c1, c0, 0 @ turn MPU and cache off + mov r0, #0 + mcr p15, 0, r0, c7, c0, 0 @ invalidate whole cache v3 + mov pc, lr + __armv4_mmu_cache_off: mrc p15, 0, r0, c1, c0 bic r0, r0, #0x000d @@ -601,6 +688,24 @@ cache_clean_flush: mov r3, #16 b call_cache_fn +__armv4_mpu_cache_flush: + mov r2, #1 + mov r3, #0 + mcr p15, 0, ip, c7, c6, 0 @ invalidate D cache + mov r1, #7 << 5 @ 8 segments +1: orr r3, r1, #63 << 26 @ 64 entries +2: mcr p15, 0, r3, c7, c14, 2 @ clean & invalidate D index + subs r3, r3, #1 << 26 + bcs 2b @ entries 63 to 0 + subs r1, r1, #1 << 5 + bcs 1b @ segments 7 to 0 + + teq r2, #0 + mcrne p15, 0, ip, c7, c5, 0 @ invalidate I cache + mcr p15, 0, ip, c7, c10, 4 @ drain WB + mov pc, lr + + __armv6_mmu_cache_flush: mov r1, #0 mcr p15, 0, r1, c7, c14, 0 @ clean+invalidate D @@ -638,6 +743,7 @@ no_cache_id: mov pc, lr __armv3_mmu_cache_flush: +__armv3_mpu_cache_flush: mov r1, #0 mcr p15, 0, r0, c7, c0, 0 @ invalidate whole cache v3 mov pc, lr diff --git a/trunk/arch/arm/common/sharpsl_pm.c b/trunk/arch/arm/common/sharpsl_pm.c index 
978d32e82d39..3cd8c9ee4510 100644 --- a/trunk/arch/arm/common/sharpsl_pm.c +++ b/trunk/arch/arm/common/sharpsl_pm.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include @@ -75,6 +76,7 @@ static void sharpsl_battery_thread(void *private_); struct sharpsl_pm_status sharpsl_pm; DECLARE_WORK(toggle_charger, sharpsl_charge_toggle, NULL); DECLARE_WORK(sharpsl_bat, sharpsl_battery_thread, NULL); +DEFINE_LED_TRIGGER(sharpsl_charge_led_trigger); static int get_percentage(int voltage) @@ -190,10 +192,10 @@ void sharpsl_pm_led(int val) dev_err(sharpsl_pm.dev, "Charging Error!\n"); } else if (val == SHARPSL_LED_ON) { dev_dbg(sharpsl_pm.dev, "Charge LED On\n"); - + led_trigger_event(sharpsl_charge_led_trigger, LED_FULL); } else { dev_dbg(sharpsl_pm.dev, "Charge LED Off\n"); - + led_trigger_event(sharpsl_charge_led_trigger, LED_OFF); } } @@ -786,6 +788,8 @@ static int __init sharpsl_pm_probe(struct platform_device *pdev) init_timer(&sharpsl_pm.chrg_full_timer); sharpsl_pm.chrg_full_timer.function = sharpsl_chrg_full_timer; + led_trigger_register_simple("sharpsl-charge", &sharpsl_charge_led_trigger); + sharpsl_pm.machinfo->init(); device_create_file(&pdev->dev, &dev_attr_battery_percentage); @@ -807,6 +811,8 @@ static int sharpsl_pm_remove(struct platform_device *pdev) device_remove_file(&pdev->dev, &dev_attr_battery_percentage); device_remove_file(&pdev->dev, &dev_attr_battery_voltage); + led_trigger_unregister_simple(sharpsl_charge_led_trigger); + sharpsl_pm.machinfo->exit(); del_timer_sync(&sharpsl_pm.chrg_full_timer); diff --git a/trunk/arch/arm/kernel/entry-armv.S b/trunk/arch/arm/kernel/entry-armv.S index 355914ffb192..ab8e600c18c8 100644 --- a/trunk/arch/arm/kernel/entry-armv.S +++ b/trunk/arch/arm/kernel/entry-armv.S @@ -666,7 +666,7 @@ __kuser_helper_start: * * #define __kernel_dmb() \ * asm volatile ( "mov r0, #0xffff0fff; mov lr, pc; sub pc, r0, #95" \ - * : : : "lr","cc" ) + * : : : "r0", "lr","cc" ) */ __kuser_memory_barrier: @ 0xffff0fa0 diff --git a/trunk/arch/arm/kernel/head-common.S b/trunk/arch/arm/kernel/head-common.S new file mode 100644 index 000000000000..a52da0ddb43d --- /dev/null +++ b/trunk/arch/arm/kernel/head-common.S @@ -0,0 +1,217 @@ +/* + * linux/arch/arm/kernel/head-common.S + * + * Copyright (C) 1994-2002 Russell King + * Copyright (c) 2003 ARM Limited + * All Rights Reserved + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + + .type __switch_data, %object +__switch_data: + .long __mmap_switched + .long __data_loc @ r4 + .long __data_start @ r5 + .long __bss_start @ r6 + .long _end @ r7 + .long processor_id @ r4 + .long __machine_arch_type @ r5 + .long cr_alignment @ r6 + .long init_thread_union + THREAD_START_SP @ sp + +/* + * The following fragment of code is executed with the MMU on in MMU mode, + * and uses absolute addresses; this is not position independent. 
+ * + * r0 = cp#15 control register + * r1 = machine ID + * r9 = processor ID + */ + .type __mmap_switched, %function +__mmap_switched: + adr r3, __switch_data + 4 + + ldmia r3!, {r4, r5, r6, r7} + cmp r4, r5 @ Copy data segment if needed +1: cmpne r5, r6 + ldrne fp, [r4], #4 + strne fp, [r5], #4 + bne 1b + + mov fp, #0 @ Clear BSS (and zero fp) +1: cmp r6, r7 + strcc fp, [r6],#4 + bcc 1b + + ldmia r3, {r4, r5, r6, sp} + str r9, [r4] @ Save processor ID + str r1, [r5] @ Save machine type + bic r4, r0, #CR_A @ Clear 'A' bit + stmia r6, {r0, r4} @ Save control register values + b start_kernel + +/* + * Exception handling. Something went wrong and we can't proceed. We + * ought to tell the user, but since we don't have any guarantee that + * we're even running on the right architecture, we do virtually nothing. + * + * If CONFIG_DEBUG_LL is set we try to print out something about the error + * and hope for the best (useful if bootloader fails to pass a proper + * machine ID for example). + */ + + .type __error_p, %function +__error_p: +#ifdef CONFIG_DEBUG_LL + adr r0, str_p1 + bl printascii + b __error +str_p1: .asciz "\nError: unrecognized/unsupported processor variant.\n" + .align +#endif + + .type __error_a, %function +__error_a: +#ifdef CONFIG_DEBUG_LL + mov r4, r1 @ preserve machine ID + adr r0, str_a1 + bl printascii + mov r0, r4 + bl printhex8 + adr r0, str_a2 + bl printascii + adr r3, 3f + ldmia r3, {r4, r5, r6} @ get machine desc list + sub r4, r3, r4 @ get offset between virt&phys + add r5, r5, r4 @ convert virt addresses to + add r6, r6, r4 @ physical address space +1: ldr r0, [r5, #MACHINFO_TYPE] @ get machine type + bl printhex8 + mov r0, #'\t' + bl printch + ldr r0, [r5, #MACHINFO_NAME] @ get machine name + add r0, r0, r4 + bl printascii + mov r0, #'\n' + bl printch + add r5, r5, #SIZEOF_MACHINE_DESC @ next machine_desc + cmp r5, r6 + blo 1b + adr r0, str_a3 + bl printascii + b __error +str_a1: .asciz "\nError: unrecognized/unsupported machine ID (r1 = 0x" +str_a2: .asciz ").\n\nAvailable machine support:\n\nID (hex)\tNAME\n" +str_a3: .asciz "\nPlease check your kernel config and/or bootloader.\n" + .align +#endif + + .type __error, %function +__error: +#ifdef CONFIG_ARCH_RPC +/* + * Turn the screen red on a error - RiscPC only. + */ + mov r0, #0x02000000 + mov r3, #0x11 + orr r3, r3, r3, lsl #8 + orr r3, r3, r3, lsl #16 + str r3, [r0], #4 + str r3, [r0], #4 + str r3, [r0], #4 + str r3, [r0], #4 +#endif +1: mov r0, r0 + b 1b + + +/* + * Read processor ID register (CP#15, CR0), and look up in the linker-built + * supported processor list. Note that we can't use the absolute addresses + * for the __proc_info lists since we aren't running with the MMU on + * (and therefore, we are not in the correct address space). We have to + * calculate the offset. + * + * r9 = cpuid + * Returns: + * r3, r4, r6 corrupted + * r5 = proc_info pointer in physical address space + * r9 = cpuid (preserved) + */ + .type __lookup_processor_type, %function +__lookup_processor_type: + adr r3, 3f + ldmda r3, {r5 - r7} + sub r3, r3, r7 @ get offset between virt&phys + add r5, r5, r3 @ convert virt addresses to + add r6, r6, r3 @ physical address space +1: ldmia r5, {r3, r4} @ value, mask + and r4, r4, r9 @ mask wanted bits + teq r3, r4 + beq 2f + add r5, r5, #PROC_INFO_SZ @ sizeof(proc_info_list) + cmp r5, r6 + blo 1b + mov r5, #0 @ unknown processor +2: mov pc, lr + +/* + * This provides a C-API version of the above function. 
+ */ +ENTRY(lookup_processor_type) + stmfd sp!, {r4 - r7, r9, lr} + mov r9, r0 + bl __lookup_processor_type + mov r0, r5 + ldmfd sp!, {r4 - r7, r9, pc} + +/* + * Look in include/asm-arm/procinfo.h and arch/arm/kernel/arch.[ch] for + * more information about the __proc_info and __arch_info structures. + */ + .long __proc_info_begin + .long __proc_info_end +3: .long . + .long __arch_info_begin + .long __arch_info_end + +/* + * Lookup machine architecture in the linker-build list of architectures. + * Note that we can't use the absolute addresses for the __arch_info + * lists since we aren't running with the MMU on (and therefore, we are + * not in the correct address space). We have to calculate the offset. + * + * r1 = machine architecture number + * Returns: + * r3, r4, r6 corrupted + * r5 = mach_info pointer in physical address space + */ + .type __lookup_machine_type, %function +__lookup_machine_type: + adr r3, 3b + ldmia r3, {r4, r5, r6} + sub r3, r3, r4 @ get offset between virt&phys + add r5, r5, r3 @ convert virt addresses to + add r6, r6, r3 @ physical address space +1: ldr r3, [r5, #MACHINFO_TYPE] @ get machine type + teq r3, r1 @ matches loader number? + beq 2f @ found + add r5, r5, #SIZEOF_MACHINE_DESC @ next machine_desc + cmp r5, r6 + blo 1b + mov r5, #0 @ unknown machine +2: mov pc, lr + +/* + * This provides a C-API version of the above function. + */ +ENTRY(lookup_machine_type) + stmfd sp!, {r4 - r6, lr} + mov r1, r0 + bl __lookup_machine_type + mov r0, r5 + ldmfd sp!, {r4 - r6, pc} diff --git a/trunk/arch/arm/kernel/head-nommu.S b/trunk/arch/arm/kernel/head-nommu.S new file mode 100644 index 000000000000..b093ab8738b5 --- /dev/null +++ b/trunk/arch/arm/kernel/head-nommu.S @@ -0,0 +1,83 @@ +/* + * linux/arch/arm/kernel/head-nommu.S + * + * Copyright (C) 1994-2002 Russell King + * Copyright (C) 2003-2006 Hyok S. Choi + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * Common kernel startup code (non-paged MM) + * for 32-bit CPUs which has a process ID register(CP15). + * + */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#define PROCINFO_INITFUNC 12 + +/* + * Kernel startup entry point. + * --------------------------- + * + * This is normally called from the decompressor code. The requirements + * are: MMU = off, D-cache = off, I-cache = dont care, r0 = 0, + * r1 = machine nr. + * + * See linux/arch/arm/tools/mach-types for the complete list of machine + * numbers for r1. + * + */ + __INIT + .type stext, %function +ENTRY(stext) + msr cpsr_c, #PSR_F_BIT | PSR_I_BIT | MODE_SVC @ ensure svc mode + @ and irqs disabled + mrc p15, 0, r9, c0, c0 @ get processor id + bl __lookup_processor_type @ r5=procinfo r9=cpuid + movs r10, r5 @ invalid processor (r5=0)? + beq __error_p @ yes, error 'p' + bl __lookup_machine_type @ r5=machinfo + movs r8, r5 @ invalid machine (r5=0)? + beq __error_a @ yes, error 'a' + + ldr r13, __switch_data @ address to jump to after + @ the initialization is done + adr lr, __after_proc_init @ return (PIC) address + add pc, r10, #PROCINFO_INITFUNC + +/* + * Set the Control Register and Read the process ID. 
+ */ + .type __after_proc_init, %function +__after_proc_init: + mrc p15, 0, r0, c1, c0, 0 @ read control reg +#ifdef CONFIG_ALIGNMENT_TRAP + orr r0, r0, #CR_A +#else + bic r0, r0, #CR_A +#endif +#ifdef CONFIG_CPU_DCACHE_DISABLE + bic r0, r0, #CR_C +#endif +#ifdef CONFIG_CPU_BPREDICT_DISABLE + bic r0, r0, #CR_Z +#endif +#ifdef CONFIG_CPU_ICACHE_DISABLE + bic r0, r0, #CR_I +#endif + mcr p15, 0, r0, c1, c0, 0 @ write control reg + + mov pc, r13 @ clear the BSS and jump + @ to start_kernel + +#include "head-common.S" diff --git a/trunk/arch/arm/kernel/head.S b/trunk/arch/arm/kernel/head.S index 53b6901f70a6..04b66a9328ef 100644 --- a/trunk/arch/arm/kernel/head.S +++ b/trunk/arch/arm/kernel/head.S @@ -102,49 +102,6 @@ ENTRY(stext) adr lr, __enable_mmu @ return (PIC) address add pc, r10, #PROCINFO_INITFUNC - .type __switch_data, %object -__switch_data: - .long __mmap_switched - .long __data_loc @ r4 - .long __data_start @ r5 - .long __bss_start @ r6 - .long _end @ r7 - .long processor_id @ r4 - .long __machine_arch_type @ r5 - .long cr_alignment @ r6 - .long init_thread_union + THREAD_START_SP @ sp - -/* - * The following fragment of code is executed with the MMU on, and uses - * absolute addresses; this is not position independent. - * - * r0 = cp#15 control register - * r1 = machine ID - * r9 = processor ID - */ - .type __mmap_switched, %function -__mmap_switched: - adr r3, __switch_data + 4 - - ldmia r3!, {r4, r5, r6, r7} - cmp r4, r5 @ Copy data segment if needed -1: cmpne r5, r6 - ldrne fp, [r4], #4 - strne fp, [r5], #4 - bne 1b - - mov fp, #0 @ Clear BSS (and zero fp) -1: cmp r6, r7 - strcc fp, [r6],#4 - bcc 1b - - ldmia r3, {r4, r5, r6, sp} - str r9, [r4] @ Save processor ID - str r1, [r5] @ Save machine type - bic r4, r0, #CR_A @ Clear 'A' bit - stmia r6, {r0, r4} @ Save control register values - b start_kernel - #if defined(CONFIG_SMP) .type secondary_startup, #function ENTRY(secondary_startup) @@ -367,166 +324,4 @@ __create_page_tables: mov pc, lr .ltorg - - -/* - * Exception handling. Something went wrong and we can't proceed. We - * ought to tell the user, but since we don't have any guarantee that - * we're even running on the right architecture, we do virtually nothing. - * - * If CONFIG_DEBUG_LL is set we try to print out something about the error - * and hope for the best (useful if bootloader fails to pass a proper - * machine ID for example). 
- */ - - .type __error_p, %function -__error_p: -#ifdef CONFIG_DEBUG_LL - adr r0, str_p1 - bl printascii - b __error -str_p1: .asciz "\nError: unrecognized/unsupported processor variant.\n" - .align -#endif - - .type __error_a, %function -__error_a: -#ifdef CONFIG_DEBUG_LL - mov r4, r1 @ preserve machine ID - adr r0, str_a1 - bl printascii - mov r0, r4 - bl printhex8 - adr r0, str_a2 - bl printascii - adr r3, 3f - ldmia r3, {r4, r5, r6} @ get machine desc list - sub r4, r3, r4 @ get offset between virt&phys - add r5, r5, r4 @ convert virt addresses to - add r6, r6, r4 @ physical address space -1: ldr r0, [r5, #MACHINFO_TYPE] @ get machine type - bl printhex8 - mov r0, #'\t' - bl printch - ldr r0, [r5, #MACHINFO_NAME] @ get machine name - add r0, r0, r4 - bl printascii - mov r0, #'\n' - bl printch - add r5, r5, #SIZEOF_MACHINE_DESC @ next machine_desc - cmp r5, r6 - blo 1b - adr r0, str_a3 - bl printascii - b __error -str_a1: .asciz "\nError: unrecognized/unsupported machine ID (r1 = 0x" -str_a2: .asciz ").\n\nAvailable machine support:\n\nID (hex)\tNAME\n" -str_a3: .asciz "\nPlease check your kernel config and/or bootloader.\n" - .align -#endif - - .type __error, %function -__error: -#ifdef CONFIG_ARCH_RPC -/* - * Turn the screen red on a error - RiscPC only. - */ - mov r0, #0x02000000 - mov r3, #0x11 - orr r3, r3, r3, lsl #8 - orr r3, r3, r3, lsl #16 - str r3, [r0], #4 - str r3, [r0], #4 - str r3, [r0], #4 - str r3, [r0], #4 -#endif -1: mov r0, r0 - b 1b - - -/* - * Read processor ID register (CP#15, CR0), and look up in the linker-built - * supported processor list. Note that we can't use the absolute addresses - * for the __proc_info lists since we aren't running with the MMU on - * (and therefore, we are not in the correct address space). We have to - * calculate the offset. - * - * r9 = cpuid - * Returns: - * r3, r4, r6 corrupted - * r5 = proc_info pointer in physical address space - * r9 = cpuid (preserved) - */ - .type __lookup_processor_type, %function -__lookup_processor_type: - adr r3, 3f - ldmda r3, {r5 - r7} - sub r3, r3, r7 @ get offset between virt&phys - add r5, r5, r3 @ convert virt addresses to - add r6, r6, r3 @ physical address space -1: ldmia r5, {r3, r4} @ value, mask - and r4, r4, r9 @ mask wanted bits - teq r3, r4 - beq 2f - add r5, r5, #PROC_INFO_SZ @ sizeof(proc_info_list) - cmp r5, r6 - blo 1b - mov r5, #0 @ unknown processor -2: mov pc, lr - -/* - * This provides a C-API version of the above function. - */ -ENTRY(lookup_processor_type) - stmfd sp!, {r4 - r7, r9, lr} - mov r9, r0 - bl __lookup_processor_type - mov r0, r5 - ldmfd sp!, {r4 - r7, r9, pc} - -/* - * Look in include/asm-arm/procinfo.h and arch/arm/kernel/arch.[ch] for - * more information about the __proc_info and __arch_info structures. - */ - .long __proc_info_begin - .long __proc_info_end -3: .long . - .long __arch_info_begin - .long __arch_info_end - -/* - * Lookup machine architecture in the linker-build list of architectures. - * Note that we can't use the absolute addresses for the __arch_info - * lists since we aren't running with the MMU on (and therefore, we are - * not in the correct address space). We have to calculate the offset. 
- * - * r1 = machine architecture number - * Returns: - * r3, r4, r6 corrupted - * r5 = mach_info pointer in physical address space - */ - .type __lookup_machine_type, %function -__lookup_machine_type: - adr r3, 3b - ldmia r3, {r4, r5, r6} - sub r3, r3, r4 @ get offset between virt&phys - add r5, r5, r3 @ convert virt addresses to - add r6, r6, r3 @ physical address space -1: ldr r3, [r5, #MACHINFO_TYPE] @ get machine type - teq r3, r1 @ matches loader number? - beq 2f @ found - add r5, r5, #SIZEOF_MACHINE_DESC @ next machine_desc - cmp r5, r6 - blo 1b - mov r5, #0 @ unknown machine -2: mov pc, lr - -/* - * This provides a C-API version of the above function. - */ -ENTRY(lookup_machine_type) - stmfd sp!, {r4 - r6, lr} - mov r1, r0 - bl __lookup_machine_type - mov r0, r5 - ldmfd sp!, {r4 - r6, pc} +#include "head-common.S" diff --git a/trunk/arch/arm/kernel/process.c b/trunk/arch/arm/kernel/process.c index 489c069e5c3e..1ff75cee4b0d 100644 --- a/trunk/arch/arm/kernel/process.c +++ b/trunk/arch/arm/kernel/process.c @@ -474,4 +474,3 @@ unsigned long get_wchan(struct task_struct *p) } while (count ++ < 16); return 0; } -EXPORT_SYMBOL(get_wchan); diff --git a/trunk/arch/arm/kernel/signal.h b/trunk/arch/arm/kernel/signal.h index 9991049c522d..27beece15502 100644 --- a/trunk/arch/arm/kernel/signal.h +++ b/trunk/arch/arm/kernel/signal.h @@ -7,6 +7,6 @@ * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. */ -#define KERN_SIGRETURN_CODE 0xffff0500 +#define KERN_SIGRETURN_CODE (CONFIG_VECTORS_BASE + 0x00000500) extern const unsigned long sigreturn_codes[7]; diff --git a/trunk/arch/arm/kernel/traps.c b/trunk/arch/arm/kernel/traps.c index d566d5f4574d..35230a060108 100644 --- a/trunk/arch/arm/kernel/traps.c +++ b/trunk/arch/arm/kernel/traps.c @@ -688,6 +688,7 @@ EXPORT_SYMBOL(abort); void __init trap_init(void) { + unsigned long vectors = CONFIG_VECTORS_BASE; extern char __stubs_start[], __stubs_end[]; extern char __vectors_start[], __vectors_end[]; extern char __kuser_helper_start[], __kuser_helper_end[]; @@ -698,9 +699,9 @@ void __init trap_init(void) * into the vector page, mapped at 0xffff0000, and ensure these * are visible to the instruction stream. 
*/ - memcpy((void *)0xffff0000, __vectors_start, __vectors_end - __vectors_start); - memcpy((void *)0xffff0200, __stubs_start, __stubs_end - __stubs_start); - memcpy((void *)0xffff1000 - kuser_sz, __kuser_helper_start, kuser_sz); + memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start); + memcpy((void *)vectors + 0x200, __stubs_start, __stubs_end - __stubs_start); + memcpy((void *)vectors + 0x1000 - kuser_sz, __kuser_helper_start, kuser_sz); /* * Copy signal return handlers into the vector page, and @@ -709,6 +710,6 @@ void __init trap_init(void) memcpy((void *)KERN_SIGRETURN_CODE, sigreturn_codes, sizeof(sigreturn_codes)); - flush_icache_range(0xffff0000, 0xffff0000 + PAGE_SIZE); + flush_icache_range(vectors, vectors + PAGE_SIZE); modify_domain(DOMAIN_USER, DOMAIN_CLIENT); } diff --git a/trunk/arch/arm/mach-pxa/corgi.c b/trunk/arch/arm/mach-pxa/corgi.c index 68923b1d2b62..d6d726036361 100644 --- a/trunk/arch/arm/mach-pxa/corgi.c +++ b/trunk/arch/arm/mach-pxa/corgi.c @@ -141,6 +141,8 @@ struct corgissp_machinfo corgi_ssp_machinfo = { */ static struct corgibl_machinfo corgi_bl_machinfo = { .max_intensity = 0x2f, + .default_intensity = 0x1f, + .limit_mask = 0x0b, .set_bl_intensity = corgi_bl_set_intensity, }; @@ -163,6 +165,14 @@ static struct platform_device corgikbd_device = { }; +/* + * Corgi LEDs + */ +static struct platform_device corgiled_device = { + .name = "corgi-led", + .id = -1, +}; + /* * Corgi Touch Screen Device */ @@ -297,6 +307,7 @@ static struct platform_device *devices[] __initdata = { &corgikbd_device, &corgibl_device, &corgits_device, + &corgiled_device, }; static void __init corgi_init(void) diff --git a/trunk/arch/arm/mach-pxa/spitz.c b/trunk/arch/arm/mach-pxa/spitz.c index 0dbb079ecd25..19b372df544a 100644 --- a/trunk/arch/arm/mach-pxa/spitz.c +++ b/trunk/arch/arm/mach-pxa/spitz.c @@ -220,6 +220,8 @@ struct corgissp_machinfo spitz_ssp_machinfo = { * Spitz Backlight Device */ static struct corgibl_machinfo spitz_bl_machinfo = { + .default_intensity = 0x1f, + .limit_mask = 0x0b, .max_intensity = 0x2f, }; @@ -241,6 +243,14 @@ static struct platform_device spitzkbd_device = { }; +/* + * Spitz LEDs + */ +static struct platform_device spitzled_device = { + .name = "spitz-led", + .id = -1, +}; + /* * Spitz Touch Screen Device */ @@ -418,6 +428,7 @@ static struct platform_device *devices[] __initdata = { &spitzkbd_device, &spitzts_device, &spitzbl_device, + &spitzled_device, }; static void __init common_init(void) diff --git a/trunk/arch/arm/mach-pxa/tosa.c b/trunk/arch/arm/mach-pxa/tosa.c index 66ec71756d0f..76c0e7f0a219 100644 --- a/trunk/arch/arm/mach-pxa/tosa.c +++ b/trunk/arch/arm/mach-pxa/tosa.c @@ -251,10 +251,19 @@ static struct platform_device tosakbd_device = { .id = -1, }; +/* + * Tosa LEDs + */ +static struct platform_device tosaled_device = { + .name = "tosa-led", + .id = -1, +}; + static struct platform_device *devices[] __initdata = { &tosascoop_device, &tosascoop_jc_device, &tosakbd_device, + &tosaled_device, }; static void __init tosa_init(void) diff --git a/trunk/arch/arm/mm/proc-xsc3.S b/trunk/arch/arm/mm/proc-xsc3.S index f90513e9af0c..b9dfce57c272 100644 --- a/trunk/arch/arm/mm/proc-xsc3.S +++ b/trunk/arch/arm/mm/proc-xsc3.S @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include "proc-macros.S" diff --git a/trunk/arch/arm26/kernel/armksyms.c b/trunk/arch/arm26/kernel/armksyms.c index 811a6376c624..a6a1b3373444 100644 --- a/trunk/arch/arm26/kernel/armksyms.c +++ b/trunk/arch/arm26/kernel/armksyms.c @@ -212,8 
+212,6 @@ EXPORT_SYMBOL(sys_open); EXPORT_SYMBOL(sys_exit); EXPORT_SYMBOL(sys_wait4); -EXPORT_SYMBOL(get_wchan); - #ifdef CONFIG_PREEMPT EXPORT_SYMBOL(kernel_flag); #endif diff --git a/trunk/arch/frv/kernel/frv_ksyms.c b/trunk/arch/frv/kernel/frv_ksyms.c index aa6b7d0a2109..07c8ffa0dd39 100644 --- a/trunk/arch/frv/kernel/frv_ksyms.c +++ b/trunk/arch/frv/kernel/frv_ksyms.c @@ -79,8 +79,6 @@ EXPORT_SYMBOL(memmove); EXPORT_SYMBOL(__outsl_ns); EXPORT_SYMBOL(__insl_ns); -EXPORT_SYMBOL(get_wchan); - #ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS EXPORT_SYMBOL(atomic_test_and_ANDNOT_mask); EXPORT_SYMBOL(atomic_test_and_OR_mask); diff --git a/trunk/arch/h8300/kernel/h8300_ksyms.c b/trunk/arch/h8300/kernel/h8300_ksyms.c index 69d6ad32d56c..b6cd78c972bb 100644 --- a/trunk/arch/h8300/kernel/h8300_ksyms.c +++ b/trunk/arch/h8300/kernel/h8300_ksyms.c @@ -55,8 +55,6 @@ EXPORT_SYMBOL(memcmp); EXPORT_SYMBOL(memscan); EXPORT_SYMBOL(memmove); -EXPORT_SYMBOL(get_wchan); - /* * libgcc functions - functions that are used internally by the * compiler... (prototypes are not correct though, but that diff --git a/trunk/arch/i386/kernel/apic.c b/trunk/arch/i386/kernel/apic.c index eb5279d23b7f..6273bf74c203 100644 --- a/trunk/arch/i386/kernel/apic.c +++ b/trunk/arch/i386/kernel/apic.c @@ -415,6 +415,7 @@ void __init init_bsp_APIC(void) void __devinit setup_local_APIC(void) { unsigned long oldvalue, value, ver, maxlvt; + int i, j; /* Pound the ESR really hard over the head with a big hammer - mbligh */ if (esr_disable) { @@ -451,6 +452,25 @@ void __devinit setup_local_APIC(void) value &= ~APIC_TPRI_MASK; apic_write_around(APIC_TASKPRI, value); + /* + * After a crash, we no longer service the interrupts and a pending + * interrupt from previous kernel might still have ISR bit set. + * + * Most probably by now CPU has serviced that pending interrupt and + * it might not have done the ack_APIC_irq() because it thought, + * interrupt came from i8259 as ExtInt. LAPIC did not get EOI so it + * does not clear the ISR bit and cpu thinks it has already serivced + * the interrupt. Hence a vector might get locked. It was noticed + * for timer irq (vector 0x31). Issue an extra EOI to clear ISR. 
+ */ + for (i = APIC_ISR_NR - 1; i >= 0; i--) { + value = apic_read(APIC_ISR + i*0x10); + for (j = 31; j >= 0; j--) { + if (value & (1< #include #include +#include #include #include #include @@ -1547,6 +1548,23 @@ void __init setup_arch(char **cmdline_p) #endif } +static __init int add_pcspkr(void) +{ + struct platform_device *pd; + int ret; + + pd = platform_device_alloc("pcspkr", -1); + if (!pd) + return -ENOMEM; + + ret = platform_device_add(pd); + if (ret) + platform_device_put(pd); + + return ret; +} +device_initcall(add_pcspkr); + #include "setup_arch_post.h" /* * Local Variables: diff --git a/trunk/arch/i386/kernel/syscall_table.S b/trunk/arch/i386/kernel/syscall_table.S index ce3ef4fa0551..4f58b9c0efe3 100644 --- a/trunk/arch/i386/kernel/syscall_table.S +++ b/trunk/arch/i386/kernel/syscall_table.S @@ -313,3 +313,4 @@ ENTRY(sys_call_table) .long sys_set_robust_list .long sys_get_robust_list .long sys_splice + .long sys_sync_file_range diff --git a/trunk/arch/i386/kernel/traps.c b/trunk/arch/i386/kernel/traps.c index 6b63a5aa1e46..e38527994590 100644 --- a/trunk/arch/i386/kernel/traps.c +++ b/trunk/arch/i386/kernel/traps.c @@ -1193,6 +1193,6 @@ void __init trap_init(void) static int __init kstack_setup(char *s) { kstack_depth_to_print = simple_strtoul(s, NULL, 0); - return 0; + return 1; } __setup("kstack=", kstack_setup); diff --git a/trunk/arch/i386/kernel/vsyscall-sigreturn.S b/trunk/arch/i386/kernel/vsyscall-sigreturn.S index fadb5bc3c374..a92262f41659 100644 --- a/trunk/arch/i386/kernel/vsyscall-sigreturn.S +++ b/trunk/arch/i386/kernel/vsyscall-sigreturn.S @@ -44,7 +44,7 @@ __kernel_rt_sigreturn: .LSTARTCIEDLSI1: .long 0 /* CIE ID */ .byte 1 /* Version number */ - .string "zR" /* NUL-terminated augmentation string */ + .string "zRS" /* NUL-terminated augmentation string */ .uleb128 1 /* Code alignment factor */ .sleb128 -4 /* Data alignment factor */ .byte 8 /* Return address register column */ diff --git a/trunk/arch/ia64/kernel/palinfo.c b/trunk/arch/ia64/kernel/palinfo.c index 89faa603c6be..6386f63c413e 100644 --- a/trunk/arch/ia64/kernel/palinfo.c +++ b/trunk/arch/ia64/kernel/palinfo.c @@ -240,7 +240,7 @@ cache_info(char *page) } p += sprintf(p, "%s Cache level %lu:\n" - "\tSize : %lu bytes\n" + "\tSize : %u bytes\n" "\tAttributes : ", cache_types[j+cci.pcci_unified], i+1, cci.pcci_cache_size); @@ -648,9 +648,9 @@ frequency_info(char *page) if (ia64_pal_freq_ratios(&proc, &bus, &itc) != 0) return 0; p += sprintf(p, - "Processor/Clock ratio : %ld/%ld\n" - "Bus/Clock ratio : %ld/%ld\n" - "ITC/Clock ratio : %ld/%ld\n", + "Processor/Clock ratio : %d/%d\n" + "Bus/Clock ratio : %d/%d\n" + "ITC/Clock ratio : %d/%d\n", proc.num, proc.den, bus.num, bus.den, itc.num, itc.den); return p - page; diff --git a/trunk/arch/ia64/kernel/time.c b/trunk/arch/ia64/kernel/time.c index ac167436e936..49958904045b 100644 --- a/trunk/arch/ia64/kernel/time.c +++ b/trunk/arch/ia64/kernel/time.c @@ -188,7 +188,7 @@ ia64_init_itm (void) itc_freq = (platform_base_freq*itc_ratio.num)/itc_ratio.den; local_cpu_data->itm_delta = (itc_freq + HZ/2) / HZ; - printk(KERN_DEBUG "CPU %d: base freq=%lu.%03luMHz, ITC ratio=%lu/%lu, " + printk(KERN_DEBUG "CPU %d: base freq=%lu.%03luMHz, ITC ratio=%u/%u, " "ITC freq=%lu.%03luMHz", smp_processor_id(), platform_base_freq / 1000000, (platform_base_freq / 1000) % 1000, itc_ratio.num, itc_ratio.den, itc_freq / 1000000, (itc_freq / 1000) % 1000); diff --git a/trunk/arch/ia64/kernel/topology.c b/trunk/arch/ia64/kernel/topology.c index 3b6fd798c4d6..b47476d655f1 100644 --- 
a/trunk/arch/ia64/kernel/topology.c +++ b/trunk/arch/ia64/kernel/topology.c @@ -9,6 +9,8 @@ * 2002/08/07 Erich Focht * Populate cpu entries in sysfs for non-numa systems as well * Intel Corporation - Ashok Raj + * 02/27/2006 Zhang, Yanmin + * Populate cpu cache entries in sysfs for cpu cache info */ #include @@ -19,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -101,3 +104,367 @@ static int __init topology_init(void) } subsys_initcall(topology_init); + + +/* + * Export cpu cache information through sysfs + */ + +/* + * A bunch of string array to get pretty printing + */ +static const char *cache_types[] = { + "", /* not used */ + "Instruction", + "Data", + "Unified" /* unified */ +}; + +static const char *cache_mattrib[]={ + "WriteThrough", + "WriteBack", + "", /* reserved */ + "" /* reserved */ +}; + +struct cache_info { + pal_cache_config_info_t cci; + cpumask_t shared_cpu_map; + int level; + int type; + struct kobject kobj; +}; + +struct cpu_cache_info { + struct cache_info *cache_leaves; + int num_cache_leaves; + struct kobject kobj; +}; + +static struct cpu_cache_info all_cpu_cache_info[NR_CPUS]; +#define LEAF_KOBJECT_PTR(x,y) (&all_cpu_cache_info[x].cache_leaves[y]) + +#ifdef CONFIG_SMP +static void cache_shared_cpu_map_setup( unsigned int cpu, + struct cache_info * this_leaf) +{ + pal_cache_shared_info_t csi; + int num_shared, i = 0; + unsigned int j; + + if (cpu_data(cpu)->threads_per_core <= 1 && + cpu_data(cpu)->cores_per_socket <= 1) { + cpu_set(cpu, this_leaf->shared_cpu_map); + return; + } + + if (ia64_pal_cache_shared_info(this_leaf->level, + this_leaf->type, + 0, + &csi) != PAL_STATUS_SUCCESS) + return; + + num_shared = (int) csi.num_shared; + do { + for_each_cpu(j) + if (cpu_data(cpu)->socket_id == cpu_data(j)->socket_id + && cpu_data(j)->core_id == csi.log1_cid + && cpu_data(j)->thread_id == csi.log1_tid) + cpu_set(j, this_leaf->shared_cpu_map); + + i++; + } while (i < num_shared && + ia64_pal_cache_shared_info(this_leaf->level, + this_leaf->type, + i, + &csi) == PAL_STATUS_SUCCESS); +} +#else +static void cache_shared_cpu_map_setup(unsigned int cpu, + struct cache_info * this_leaf) +{ + cpu_set(cpu, this_leaf->shared_cpu_map); + return; +} +#endif + +static ssize_t show_coherency_line_size(struct cache_info *this_leaf, + char *buf) +{ + return sprintf(buf, "%u\n", 1 << this_leaf->cci.pcci_line_size); +} + +static ssize_t show_ways_of_associativity(struct cache_info *this_leaf, + char *buf) +{ + return sprintf(buf, "%u\n", this_leaf->cci.pcci_assoc); +} + +static ssize_t show_attributes(struct cache_info *this_leaf, char *buf) +{ + return sprintf(buf, + "%s\n", + cache_mattrib[this_leaf->cci.pcci_cache_attr]); +} + +static ssize_t show_size(struct cache_info *this_leaf, char *buf) +{ + return sprintf(buf, "%uK\n", this_leaf->cci.pcci_cache_size / 1024); +} + +static ssize_t show_number_of_sets(struct cache_info *this_leaf, char *buf) +{ + unsigned number_of_sets = this_leaf->cci.pcci_cache_size; + number_of_sets /= this_leaf->cci.pcci_assoc; + number_of_sets /= 1 << this_leaf->cci.pcci_line_size; + + return sprintf(buf, "%u\n", number_of_sets); +} + +static ssize_t show_shared_cpu_map(struct cache_info *this_leaf, char *buf) +{ + ssize_t len; + cpumask_t shared_cpu_map; + + cpus_and(shared_cpu_map, this_leaf->shared_cpu_map, cpu_online_map); + len = cpumask_scnprintf(buf, NR_CPUS+1, shared_cpu_map); + len += sprintf(buf+len, "\n"); + return len; +} + +static ssize_t show_type(struct cache_info *this_leaf, char *buf) +{ + int type = 
this_leaf->type + this_leaf->cci.pcci_unified; + return sprintf(buf, "%s\n", cache_types[type]); +} + +static ssize_t show_level(struct cache_info *this_leaf, char *buf) +{ + return sprintf(buf, "%u\n", this_leaf->level); +} + +struct cache_attr { + struct attribute attr; + ssize_t (*show)(struct cache_info *, char *); + ssize_t (*store)(struct cache_info *, const char *, size_t count); +}; + +#ifdef define_one_ro + #undef define_one_ro +#endif +#define define_one_ro(_name) \ + static struct cache_attr _name = \ +__ATTR(_name, 0444, show_##_name, NULL) + +define_one_ro(level); +define_one_ro(type); +define_one_ro(coherency_line_size); +define_one_ro(ways_of_associativity); +define_one_ro(size); +define_one_ro(number_of_sets); +define_one_ro(shared_cpu_map); +define_one_ro(attributes); + +static struct attribute * cache_default_attrs[] = { + &type.attr, + &level.attr, + &coherency_line_size.attr, + &ways_of_associativity.attr, + &attributes.attr, + &size.attr, + &number_of_sets.attr, + &shared_cpu_map.attr, + NULL +}; + +#define to_object(k) container_of(k, struct cache_info, kobj) +#define to_attr(a) container_of(a, struct cache_attr, attr) + +static ssize_t cache_show(struct kobject * kobj, struct attribute * attr, char * buf) +{ + struct cache_attr *fattr = to_attr(attr); + struct cache_info *this_leaf = to_object(kobj); + ssize_t ret; + + ret = fattr->show ? fattr->show(this_leaf, buf) : 0; + return ret; +} + +static struct sysfs_ops cache_sysfs_ops = { + .show = cache_show +}; + +static struct kobj_type cache_ktype = { + .sysfs_ops = &cache_sysfs_ops, + .default_attrs = cache_default_attrs, +}; + +static struct kobj_type cache_ktype_percpu_entry = { + .sysfs_ops = &cache_sysfs_ops, +}; + +static void __cpuinit cpu_cache_sysfs_exit(unsigned int cpu) +{ + if (all_cpu_cache_info[cpu].cache_leaves) { + kfree(all_cpu_cache_info[cpu].cache_leaves); + all_cpu_cache_info[cpu].cache_leaves = NULL; + } + all_cpu_cache_info[cpu].num_cache_leaves = 0; + memset(&all_cpu_cache_info[cpu].kobj, 0, sizeof(struct kobject)); + + return; +} + +static int __cpuinit cpu_cache_sysfs_init(unsigned int cpu) +{ + u64 i, levels, unique_caches; + pal_cache_config_info_t cci; + int j; + s64 status; + struct cache_info *this_cache; + int num_cache_leaves = 0; + + if ((status = ia64_pal_cache_summary(&levels, &unique_caches)) != 0) { + printk(KERN_ERR "ia64_pal_cache_summary=%ld\n", status); + return -1; + } + + this_cache=kzalloc(sizeof(struct cache_info)*unique_caches, + GFP_KERNEL); + if (this_cache == NULL) + return -ENOMEM; + + for (i=0; i < levels; i++) { + for (j=2; j >0 ; j--) { + if ((status=ia64_pal_cache_config_info(i,j, &cci)) != + PAL_STATUS_SUCCESS) + continue; + + this_cache[num_cache_leaves].cci = cci; + this_cache[num_cache_leaves].level = i + 1; + this_cache[num_cache_leaves].type = j; + + cache_shared_cpu_map_setup(cpu, + &this_cache[num_cache_leaves]); + num_cache_leaves ++; + } + } + + all_cpu_cache_info[cpu].cache_leaves = this_cache; + all_cpu_cache_info[cpu].num_cache_leaves = num_cache_leaves; + + memset(&all_cpu_cache_info[cpu].kobj, 0, sizeof(struct kobject)); + + return 0; +} + +/* Add cache interface for CPU device */ +static int __cpuinit cache_add_dev(struct sys_device * sys_dev) +{ + unsigned int cpu = sys_dev->id; + unsigned long i, j; + struct cache_info *this_object; + int retval = 0; + cpumask_t oldmask; + + if (all_cpu_cache_info[cpu].kobj.parent) + return 0; + + oldmask = current->cpus_allowed; + retval = set_cpus_allowed(current, cpumask_of_cpu(cpu)); + if (unlikely(retval)) + 
return retval; + + retval = cpu_cache_sysfs_init(cpu); + set_cpus_allowed(current, oldmask); + if (unlikely(retval < 0)) + return retval; + + all_cpu_cache_info[cpu].kobj.parent = &sys_dev->kobj; + kobject_set_name(&all_cpu_cache_info[cpu].kobj, "%s", "cache"); + all_cpu_cache_info[cpu].kobj.ktype = &cache_ktype_percpu_entry; + retval = kobject_register(&all_cpu_cache_info[cpu].kobj); + + for (i = 0; i < all_cpu_cache_info[cpu].num_cache_leaves; i++) { + this_object = LEAF_KOBJECT_PTR(cpu,i); + this_object->kobj.parent = &all_cpu_cache_info[cpu].kobj; + kobject_set_name(&(this_object->kobj), "index%1lu", i); + this_object->kobj.ktype = &cache_ktype; + retval = kobject_register(&(this_object->kobj)); + if (unlikely(retval)) { + for (j = 0; j < i; j++) { + kobject_unregister( + &(LEAF_KOBJECT_PTR(cpu,j)->kobj)); + } + kobject_unregister(&all_cpu_cache_info[cpu].kobj); + cpu_cache_sysfs_exit(cpu); + break; + } + } + return retval; +} + +/* Remove cache interface for CPU device */ +static int __cpuinit cache_remove_dev(struct sys_device * sys_dev) +{ + unsigned int cpu = sys_dev->id; + unsigned long i; + + for (i = 0; i < all_cpu_cache_info[cpu].num_cache_leaves; i++) + kobject_unregister(&(LEAF_KOBJECT_PTR(cpu,i)->kobj)); + + if (all_cpu_cache_info[cpu].kobj.parent) { + kobject_unregister(&all_cpu_cache_info[cpu].kobj); + memset(&all_cpu_cache_info[cpu].kobj, + 0, + sizeof(struct kobject)); + } + + cpu_cache_sysfs_exit(cpu); + + return 0; +} + +/* + * When a cpu is hot-plugged, do a check and initiate + * cache kobject if necessary + */ +static int __cpuinit cache_cpu_callback(struct notifier_block *nfb, + unsigned long action, void *hcpu) +{ + unsigned int cpu = (unsigned long)hcpu; + struct sys_device *sys_dev; + + sys_dev = get_cpu_sysdev(cpu); + switch (action) { + case CPU_ONLINE: + cache_add_dev(sys_dev); + break; + case CPU_DEAD: + cache_remove_dev(sys_dev); + break; + } + return NOTIFY_OK; +} + +static struct notifier_block cache_cpu_notifier = +{ + .notifier_call = cache_cpu_callback +}; + +static int __cpuinit cache_sysfs_init(void) +{ + int i; + + for_each_online_cpu(i) { + cache_cpu_callback(&cache_cpu_notifier, CPU_ONLINE, + (void *)(long)i); + } + + register_cpu_notifier(&cache_cpu_notifier); + + return 0; +} + +device_initcall(cache_sysfs_init); + diff --git a/trunk/arch/m68k/kernel/m68k_ksyms.c b/trunk/arch/m68k/kernel/m68k_ksyms.c index 3d7f2000b714..c3319514a85e 100644 --- a/trunk/arch/m68k/kernel/m68k_ksyms.c +++ b/trunk/arch/m68k/kernel/m68k_ksyms.c @@ -79,4 +79,3 @@ EXPORT_SYMBOL(__down_failed_interruptible); EXPORT_SYMBOL(__down_failed_trylock); EXPORT_SYMBOL(__up_wakeup); -EXPORT_SYMBOL(get_wchan); diff --git a/trunk/arch/m68knommu/kernel/m68k_ksyms.c b/trunk/arch/m68knommu/kernel/m68k_ksyms.c index d844c755945a..f9b4ea16c099 100644 --- a/trunk/arch/m68knommu/kernel/m68k_ksyms.c +++ b/trunk/arch/m68knommu/kernel/m68k_ksyms.c @@ -57,8 +57,6 @@ EXPORT_SYMBOL(__down_failed_interruptible); EXPORT_SYMBOL(__down_failed_trylock); EXPORT_SYMBOL(__up_wakeup); -EXPORT_SYMBOL(get_wchan); - /* * libgcc functions - functions that are used internally by the * compiler... 
(prototypes are not correct though, but that diff --git a/trunk/arch/mips/Kconfig b/trunk/arch/mips/Kconfig index 5080ea1799a4..e15709ce8866 100644 --- a/trunk/arch/mips/Kconfig +++ b/trunk/arch/mips/Kconfig @@ -233,6 +233,7 @@ config MACH_JAZZ select ARC32 select ARCH_MAY_HAVE_PC_FDC select GENERIC_ISA_DMA + select I8253 select I8259 select ISA select SYS_HAS_CPU_R4X00 @@ -530,6 +531,7 @@ config QEMU select DMA_COHERENT select GENERIC_ISA_DMA select HAVE_STD_PC_SERIAL_PORT + select I8253 select I8259 select ISA select SWAP_IO_SPACE @@ -714,6 +716,7 @@ config SNI_RM200_PCI select HAVE_STD_PC_SERIAL_PORT select HW_HAS_EISA select HW_HAS_PCI + select I8253 select I8259 select ISA select SYS_HAS_CPU_R4X00 @@ -1721,6 +1724,9 @@ config MMU bool default y +config I8253 + bool + source "drivers/pcmcia/Kconfig" source "drivers/pci/hotplug/Kconfig" diff --git a/trunk/arch/mips/kernel/Makefile b/trunk/arch/mips/kernel/Makefile index f36c4f20ee8a..309d54cceda3 100644 --- a/trunk/arch/mips/kernel/Makefile +++ b/trunk/arch/mips/kernel/Makefile @@ -59,6 +59,8 @@ obj-$(CONFIG_PROC_FS) += proc.o obj-$(CONFIG_64BIT) += cpu-bugs64.o +obj-$(CONFIG_I8253) += i8253.o + CFLAGS_cpu-bugs64.o = $(shell if $(CC) $(CFLAGS) -Wa,-mdaddi -c -o /dev/null -xc /dev/null >/dev/null 2>&1; then echo "-DHAVE_AS_SET_DADDI"; fi) EXTRA_AFLAGS := $(CFLAGS) diff --git a/trunk/arch/mips/kernel/i8253.c b/trunk/arch/mips/kernel/i8253.c new file mode 100644 index 000000000000..475df6904219 --- /dev/null +++ b/trunk/arch/mips/kernel/i8253.c @@ -0,0 +1,28 @@ +/* + * Copyright (C) 2006 IBM Corporation + * + * Implements device information for i8253 timer chip + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License version + * 2 as published by the Free Software Foundation + */ + +#include + +static __init int add_pcspkr(void) +{ + struct platform_device *pd; + int ret; + + pd = platform_device_alloc("pcspkr", -1); + if (!pd) + return -ENOMEM; + + ret = platform_device_add(pd); + if (ret) + platform_device_put(pd); + + return ret; +} +device_initcall(add_pcspkr); diff --git a/trunk/arch/mips/kernel/process.c b/trunk/arch/mips/kernel/process.c index a8f435d82940..c66db5e5ab62 100644 --- a/trunk/arch/mips/kernel/process.c +++ b/trunk/arch/mips/kernel/process.c @@ -419,4 +419,3 @@ unsigned long get_wchan(struct task_struct *p) return pc; } -EXPORT_SYMBOL(get_wchan); diff --git a/trunk/arch/powerpc/kernel/crash_dump.c b/trunk/arch/powerpc/kernel/crash_dump.c index 211d72653ea6..764d07329716 100644 --- a/trunk/arch/powerpc/kernel/crash_dump.c +++ b/trunk/arch/powerpc/kernel/crash_dump.c @@ -61,7 +61,7 @@ static int __init parse_elfcorehdr(char *p) if (p) elfcorehdr_addr = memparse(p, &p); - return 0; + return 1; } __setup("elfcorehdr=", parse_elfcorehdr); #endif @@ -71,7 +71,7 @@ static int __init parse_savemaxmem(char *p) if (p) saved_max_pfn = (memparse(p, &p) >> PAGE_SHIFT) - 1; - return 0; + return 1; } __setup("savemaxmem=", parse_savemaxmem); diff --git a/trunk/arch/powerpc/kernel/lparcfg.c b/trunk/arch/powerpc/kernel/lparcfg.c index 1b73508ecb2b..2cbde865d4f5 100644 --- a/trunk/arch/powerpc/kernel/lparcfg.c +++ b/trunk/arch/powerpc/kernel/lparcfg.c @@ -37,7 +37,7 @@ #include #include -#define MODULE_VERS "1.6" +#define MODULE_VERS "1.7" #define MODULE_NAME "lparcfg" /* #define LPARCFG_DEBUG */ @@ -149,17 +149,17 @@ static void log_plpar_hcall_return(unsigned long rc, char *tag) if (rc == 0) /* success, return */ return; /* check for null tag ? 
*/ - if (rc == H_Hardware) + if (rc == H_HARDWARE) printk(KERN_INFO "plpar-hcall (%s) failed with hardware fault\n", tag); - else if (rc == H_Function) + else if (rc == H_FUNCTION) printk(KERN_INFO "plpar-hcall (%s) failed; function not allowed\n", tag); - else if (rc == H_Authority) + else if (rc == H_AUTHORITY) printk(KERN_INFO - "plpar-hcall (%s) failed; not authorized to this function\n", - tag); - else if (rc == H_Parameter) + "plpar-hcall (%s) failed; not authorized to this" + " function\n", tag); + else if (rc == H_PARAMETER) printk(KERN_INFO "plpar-hcall (%s) failed; Bad parameter(s)\n", tag); else @@ -209,7 +209,7 @@ static void h_pic(unsigned long *pool_idle_time, unsigned long *num_procs) unsigned long dummy; rc = plpar_hcall(H_PIC, 0, 0, 0, 0, pool_idle_time, num_procs, &dummy); - if (rc != H_Authority) + if (rc != H_AUTHORITY) log_plpar_hcall_return(rc, "H_PIC"); } @@ -242,7 +242,7 @@ static void parse_system_parameter_string(struct seq_file *m) { int call_status; - char *local_buffer = kmalloc(SPLPAR_MAXLENGTH, GFP_KERNEL); + unsigned char *local_buffer = kmalloc(SPLPAR_MAXLENGTH, GFP_KERNEL); if (!local_buffer) { printk(KERN_ERR "%s %s kmalloc failure at line %d \n", __FILE__, __FUNCTION__, __LINE__); @@ -254,7 +254,8 @@ static void parse_system_parameter_string(struct seq_file *m) call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1, NULL, SPLPAR_CHARACTERISTICS_TOKEN, - __pa(rtas_data_buf)); + __pa(rtas_data_buf), + RTAS_DATA_BUF_SIZE); memcpy(local_buffer, rtas_data_buf, SPLPAR_MAXLENGTH); spin_unlock(&rtas_data_buf_lock); @@ -275,7 +276,7 @@ static void parse_system_parameter_string(struct seq_file *m) #ifdef LPARCFG_DEBUG printk(KERN_INFO "success calling get-system-parameter \n"); #endif - splpar_strlen = local_buffer[0] * 16 + local_buffer[1]; + splpar_strlen = local_buffer[0] * 256 + local_buffer[1]; local_buffer += 2; /* step over strlen value */ memset(workbuffer, 0, SPLPAR_MAXLENGTH); @@ -529,13 +530,13 @@ static ssize_t lparcfg_write(struct file *file, const char __user * buf, retval = plpar_hcall_norets(H_SET_PPP, *new_entitled_ptr, *new_weight_ptr); - if (retval == H_Success || retval == H_Constrained) { + if (retval == H_SUCCESS || retval == H_CONSTRAINED) { retval = count; - } else if (retval == H_Busy) { + } else if (retval == H_BUSY) { retval = -EBUSY; - } else if (retval == H_Hardware) { + } else if (retval == H_HARDWARE) { retval = -EIO; - } else if (retval == H_Parameter) { + } else if (retval == H_PARAMETER) { retval = -EINVAL; } else { printk(KERN_WARNING "%s: received unknown hv return code %ld", diff --git a/trunk/arch/powerpc/kernel/process.c b/trunk/arch/powerpc/kernel/process.c index 706090c99f47..2dd47d2dd998 100644 --- a/trunk/arch/powerpc/kernel/process.c +++ b/trunk/arch/powerpc/kernel/process.c @@ -834,7 +834,6 @@ unsigned long get_wchan(struct task_struct *p) } while (count++ < 16); return 0; } -EXPORT_SYMBOL(get_wchan); static int kstack_depth_to_print = 64; diff --git a/trunk/arch/powerpc/kernel/rtas.c b/trunk/arch/powerpc/kernel/rtas.c index 06636c927a7e..0112318213ab 100644 --- a/trunk/arch/powerpc/kernel/rtas.c +++ b/trunk/arch/powerpc/kernel/rtas.c @@ -578,18 +578,18 @@ static void rtas_percpu_suspend_me(void *info) * We use "waiting" to indicate our state. As long * as it is >0, we are still trying to all join up. * If it goes to 0, we have successfully joined up and - * one thread got H_Continue. If any error happens, + * one thread got H_CONTINUE. If any error happens, * we set it to <0. 
*/ local_irq_save(flags); do { rc = plpar_hcall_norets(H_JOIN); smp_rmb(); - } while (rc == H_Success && data->waiting > 0); - if (rc == H_Success) + } while (rc == H_SUCCESS && data->waiting > 0); + if (rc == H_SUCCESS) goto out; - if (rc == H_Continue) { + if (rc == H_CONTINUE) { data->waiting = 0; data->args->args[data->args->nargs] = rtas_call(ibm_suspend_me_token, 0, 1, NULL); @@ -597,7 +597,7 @@ static void rtas_percpu_suspend_me(void *info) plpar_hcall_norets(H_PROD,i); } else { data->waiting = -EBUSY; - printk(KERN_ERR "Error on H_Join hypervisor call\n"); + printk(KERN_ERR "Error on H_JOIN hypervisor call\n"); } out: @@ -624,7 +624,7 @@ static int rtas_ibm_suspend_me(struct rtas_args *args) printk(KERN_ERR "Error doing global join\n"); /* Prod each CPU. This won't hurt, and will wake - * anyone we successfully put to sleep with H_Join + * anyone we successfully put to sleep with H_JOIN. */ for_each_possible_cpu(i) plpar_hcall_norets(H_PROD, i); diff --git a/trunk/arch/powerpc/kernel/setup-common.c b/trunk/arch/powerpc/kernel/setup-common.c index c607f3b9ca17..1d93e73a7003 100644 --- a/trunk/arch/powerpc/kernel/setup-common.c +++ b/trunk/arch/powerpc/kernel/setup-common.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include @@ -462,6 +463,29 @@ static int __init early_xmon(char *p) early_param("xmon", early_xmon); #endif +static __init int add_pcspkr(void) +{ + struct device_node *np; + struct platform_device *pd; + int ret; + + np = of_find_compatible_node(NULL, NULL, "pnpPNP,100"); + of_node_put(np); + if (!np) + return -ENODEV; + + pd = platform_device_alloc("pcspkr", -1); + if (!pd) + return -ENOMEM; + + ret = platform_device_add(pd); + if (ret) + platform_device_put(pd); + + return ret; +} +device_initcall(add_pcspkr); + void probe_machine(void) { extern struct machdep_calls __machine_desc_start; diff --git a/trunk/arch/powerpc/kernel/setup_32.c b/trunk/arch/powerpc/kernel/setup_32.c index a72bf5dceeee..69ac25701344 100644 --- a/trunk/arch/powerpc/kernel/setup_32.c +++ b/trunk/arch/powerpc/kernel/setup_32.c @@ -50,7 +50,6 @@ #include #endif -extern void platform_init(void); extern void bootx_init(unsigned long r4, unsigned long phys); boot_infos_t *boot_infos; @@ -138,12 +137,7 @@ void __init machine_init(unsigned long dt_ptr, unsigned long phys) strlcpy(cmd_line, CONFIG_CMDLINE, sizeof(cmd_line)); #endif /* CONFIG_CMDLINE */ -#ifdef CONFIG_PPC_MULTIPLATFORM probe_machine(); -#else - /* Base init based on machine type. Obsoloete, please kill ! 
*/ - platform_init(); -#endif #ifdef CONFIG_6xx if (cpu_has_feature(CPU_FTR_CAN_DOZE) || diff --git a/trunk/arch/powerpc/kernel/setup_64.c b/trunk/arch/powerpc/kernel/setup_64.c index 59aa92cd6fa4..13e91c4d70a8 100644 --- a/trunk/arch/powerpc/kernel/setup_64.c +++ b/trunk/arch/powerpc/kernel/setup_64.c @@ -215,12 +215,10 @@ void __init early_setup(unsigned long dt_ptr) /* * Initialize stab / SLB management except on iSeries */ - if (!firmware_has_feature(FW_FEATURE_ISERIES)) { - if (cpu_has_feature(CPU_FTR_SLB)) - slb_initialize(); - else - stab_initialize(get_paca()->stab_real); - } + if (cpu_has_feature(CPU_FTR_SLB)) + slb_initialize(); + else if (!firmware_has_feature(FW_FEATURE_ISERIES)) + stab_initialize(get_paca()->stab_real); DBG(" <- early_setup()\n"); } diff --git a/trunk/arch/powerpc/kernel/systbl.S b/trunk/arch/powerpc/kernel/systbl.S index 1ad55f0466fd..1424eab450ee 100644 --- a/trunk/arch/powerpc/kernel/systbl.S +++ b/trunk/arch/powerpc/kernel/systbl.S @@ -322,3 +322,4 @@ SYSCALL(spu_create) COMPAT_SYS(pselect6) COMPAT_SYS(ppoll) SYSCALL(unshare) +SYSCALL(splice) diff --git a/trunk/arch/powerpc/kernel/traps.c b/trunk/arch/powerpc/kernel/traps.c index 4cbde211eb69..064a52564692 100644 --- a/trunk/arch/powerpc/kernel/traps.c +++ b/trunk/arch/powerpc/kernel/traps.c @@ -228,7 +228,7 @@ void system_reset_exception(struct pt_regs *regs) */ static inline int check_io_access(struct pt_regs *regs) { -#ifdef CONFIG_PPC_PMAC +#if defined(CONFIG_PPC_PMAC) && defined(CONFIG_PPC32) unsigned long msr = regs->msr; const struct exception_table_entry *entry; unsigned int *nip = (unsigned int *)regs->nip; @@ -261,7 +261,7 @@ static inline int check_io_access(struct pt_regs *regs) return 1; } } -#endif /* CONFIG_PPC_PMAC */ +#endif /* CONFIG_PPC_PMAC && CONFIG_PPC32 */ return 0; } @@ -308,8 +308,8 @@ platform_machine_check(struct pt_regs *regs) void machine_check_exception(struct pt_regs *regs) { -#ifdef CONFIG_PPC64 int recover = 0; + unsigned long reason = get_mc_reason(regs); /* See if any machine dependent calls */ if (ppc_md.machine_check_exception) @@ -317,8 +317,6 @@ void machine_check_exception(struct pt_regs *regs) if (recover) return; -#else - unsigned long reason = get_mc_reason(regs); if (user_mode(regs)) { regs->msr |= MSR_RI; @@ -462,7 +460,6 @@ void machine_check_exception(struct pt_regs *regs) * additional info, e.g. bus error registers. 
*/ platform_machine_check(regs); -#endif /* CONFIG_PPC64 */ if (debugger_fault_handler(regs)) return; diff --git a/trunk/arch/powerpc/kernel/vdso32/sigtramp.S b/trunk/arch/powerpc/kernel/vdso32/sigtramp.S index e04642781917..0c6a37b29dde 100644 --- a/trunk/arch/powerpc/kernel/vdso32/sigtramp.S +++ b/trunk/arch/powerpc/kernel/vdso32/sigtramp.S @@ -261,7 +261,7 @@ V_FUNCTION_END(__kernel_sigtramp_rt32) .Lcie_start: .long 0 /* CIE ID */ .byte 1 /* Version number */ - .string "zR" /* NUL-terminated augmentation string */ + .string "zRS" /* NUL-terminated augmentation string */ .uleb128 4 /* Code alignment factor */ .sleb128 -4 /* Data alignment factor */ .byte 67 /* Return address register column, ap */ diff --git a/trunk/arch/powerpc/kernel/vdso64/sigtramp.S b/trunk/arch/powerpc/kernel/vdso64/sigtramp.S index 31b604ab56de..7479edb101b8 100644 --- a/trunk/arch/powerpc/kernel/vdso64/sigtramp.S +++ b/trunk/arch/powerpc/kernel/vdso64/sigtramp.S @@ -263,7 +263,7 @@ V_FUNCTION_END(__kernel_sigtramp_rt64) .Lcie_start: .long 0 /* CIE ID */ .byte 1 /* Version number */ - .string "zR" /* NUL-terminated augmentation string */ + .string "zRS" /* NUL-terminated augmentation string */ .uleb128 4 /* Code alignment factor */ .sleb128 -8 /* Data alignment factor */ .byte 67 /* Return address register column, ap */ diff --git a/trunk/arch/powerpc/mm/fault.c b/trunk/arch/powerpc/mm/fault.c index 5aea0909a5ec..fdbba4206d59 100644 --- a/trunk/arch/powerpc/mm/fault.c +++ b/trunk/arch/powerpc/mm/fault.c @@ -177,15 +177,15 @@ int __kprobes do_page_fault(struct pt_regs *regs, unsigned long address, /* When running in the kernel we expect faults to occur only to * addresses in user space. All other faults represent errors in the - * kernel and should generate an OOPS. Unfortunatly, in the case of an - * erroneous fault occuring in a code path which already holds mmap_sem + * kernel and should generate an OOPS. Unfortunately, in the case of an + * erroneous fault occurring in a code path which already holds mmap_sem * we will deadlock attempting to validate the fault against the * address space. Luckily the kernel only validly references user * space from well defined areas of code, which are listed in the * exceptions table. * * As the vast majority of faults will be valid we will only perform - * the source reference check when there is a possibilty of a deadlock. + * the source reference check when there is a possibility of a deadlock. * Attempt to lock the address space, if we cannot we then validate the * source. If this is invalid we can skip the address space check, * thus avoiding the deadlock. 
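The fault.c comment above describes the usual trylock-then-validate idiom: take mmap_sem opportunistically, and only when the trylock fails (the potential-deadlock case) pay for the exception-table lookup that proves a kernel-mode access is one of the well defined user-space references. A minimal sketch of that pattern, assuming the generic fault-handler names (regs->nip, bad_area_nosemaphore) for illustration rather than quoting this patch verbatim:

	if (!down_read_trylock(&mm->mmap_sem)) {
		/*
		 * mmap_sem is contended, possibly by this very task. Before
		 * blocking on it, check that a kernel-mode fault really is
		 * one of the whitelisted user accesses (i.e. the faulting
		 * instruction has an exception-table fixup); otherwise bail
		 * out instead of risking a deadlock.
		 */
		if (!user_mode(regs) && !search_exception_tables(regs->nip))
			goto bad_area_nosemaphore;
		down_read(&mm->mmap_sem);
	}

Deferring the search_exception_tables() call to the contended path keeps its cost off the common case, since the vast majority of faults acquire mmap_sem on the first try.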
diff --git a/trunk/arch/powerpc/platforms/83xx/mpc834x_sys.c b/trunk/arch/powerpc/platforms/83xx/mpc834x_sys.c index 7c18b4cd5db4..7e789d2420ba 100644 --- a/trunk/arch/powerpc/platforms/83xx/mpc834x_sys.c +++ b/trunk/arch/powerpc/platforms/83xx/mpc834x_sys.c @@ -158,25 +158,25 @@ static int __init mpc834x_rtc_hookup(void) late_initcall(mpc834x_rtc_hookup); #endif -void __init platform_init(void) +/* + * Called very early, MMU is off, device-tree isn't unflattened + */ +static int __init mpc834x_sys_probe(void) { - /* setup the PowerPC module struct */ - ppc_md.setup_arch = mpc834x_sys_setup_arch; - - ppc_md.init_IRQ = mpc834x_sys_init_IRQ; - ppc_md.get_irq = ipic_get_irq; - - ppc_md.restart = mpc83xx_restart; - - ppc_md.time_init = mpc83xx_time_init; - ppc_md.set_rtc_time = NULL; - ppc_md.get_rtc_time = NULL; - ppc_md.calibrate_decr = generic_calibrate_decr; - - ppc_md.progress = udbg_progress; - - if (ppc_md.progress) - ppc_md.progress("mpc834x_sys_init(): exit", 0); - - return; + /* We always match for now, eventually we should look at the flat + dev tree to ensure this is the board we are suppose to run on + */ + return 1; } + +define_machine(mpc834x_sys) { + .name = "MPC834x SYS", + .probe = mpc834x_sys_probe, + .setup_arch = mpc834x_sys_setup_arch, + .init_IRQ = mpc834x_sys_init_IRQ, + .get_irq = ipic_get_irq, + .restart = mpc83xx_restart, + .time_init = mpc83xx_time_init, + .calibrate_decr = generic_calibrate_decr, + .progress = udbg_progress, +}; diff --git a/trunk/arch/powerpc/platforms/85xx/mpc85xx_ads.c b/trunk/arch/powerpc/platforms/85xx/mpc85xx_ads.c index b7821dbae00d..5eeff370f5fc 100644 --- a/trunk/arch/powerpc/platforms/85xx/mpc85xx_ads.c +++ b/trunk/arch/powerpc/platforms/85xx/mpc85xx_ads.c @@ -220,25 +220,25 @@ void mpc85xx_ads_show_cpuinfo(struct seq_file *m) seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024)); } -void __init platform_init(void) +/* + * Called very early, device-tree isn't unflattened + */ +static int __init mpc85xx_ads_probe(void) { - ppc_md.setup_arch = mpc85xx_ads_setup_arch; - ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo; - - ppc_md.init_IRQ = mpc85xx_ads_pic_init; - ppc_md.get_irq = mpic_get_irq; - - ppc_md.restart = mpc85xx_restart; - ppc_md.power_off = NULL; - ppc_md.halt = NULL; - - ppc_md.time_init = NULL; - ppc_md.set_rtc_time = NULL; - ppc_md.get_rtc_time = NULL; - ppc_md.calibrate_decr = generic_calibrate_decr; - - ppc_md.progress = udbg_progress; - - if (ppc_md.progress) - ppc_md.progress("mpc85xx_ads platform_init(): exit", 0); + /* We always match for now, eventually we should look at the flat + dev tree to ensure this is the board we are suppose to run on + */ + return 1; } + +define_machine(mpc85xx_ads) { + .name = "MPC85xx ADS", + .probe = mpc85xx_ads_probe, + .setup_arch = mpc85xx_ads_setup_arch, + .init_IRQ = mpc85xx_ads_pic_init, + .show_cpuinfo = mpc85xx_ads_show_cpuinfo, + .get_irq = mpic_get_irq, + .restart = mpc85xx_restart, + .calibrate_decr = generic_calibrate_decr, + .progress = udbg_progress, +}; diff --git a/trunk/arch/powerpc/platforms/cell/spu_callbacks.c b/trunk/arch/powerpc/platforms/cell/spu_callbacks.c index 3a4245c926ad..6594bec73882 100644 --- a/trunk/arch/powerpc/platforms/cell/spu_callbacks.c +++ b/trunk/arch/powerpc/platforms/cell/spu_callbacks.c @@ -316,6 +316,7 @@ void *spu_syscall_table[] = { [__NR_pselect6] sys_ni_syscall, /* sys_pselect */ [__NR_ppoll] sys_ni_syscall, /* sys_ppoll */ [__NR_unshare] sys_unshare, + [__NR_splice] sys_splice, }; long spu_sys_callback(struct spu_syscall_block *s) diff 
--git a/trunk/arch/powerpc/platforms/cell/spufs/run.c b/trunk/arch/powerpc/platforms/cell/spufs/run.c index c04e078c0fe5..483c8b76232c 100644 --- a/trunk/arch/powerpc/platforms/cell/spufs/run.c +++ b/trunk/arch/powerpc/platforms/cell/spufs/run.c @@ -2,6 +2,7 @@ #include #include +#include #include "spufs.h" diff --git a/trunk/arch/powerpc/platforms/pseries/eeh.c b/trunk/arch/powerpc/platforms/pseries/eeh.c index 9b2b1cb117b3..780fb27a0099 100644 --- a/trunk/arch/powerpc/platforms/pseries/eeh.c +++ b/trunk/arch/powerpc/platforms/pseries/eeh.c @@ -865,7 +865,7 @@ void __init eeh_init(void) * on the CEC architecture, type of the device, on earlier boot * command-line arguments & etc. */ -void eeh_add_device_early(struct device_node *dn) +static void eeh_add_device_early(struct device_node *dn) { struct pci_controller *phb; struct eeh_early_enable_info info; @@ -882,7 +882,6 @@ void eeh_add_device_early(struct device_node *dn) info.buid_lo = BUID_LO(phb->buid); early_enable_eeh(dn, &info); } -EXPORT_SYMBOL_GPL(eeh_add_device_early); void eeh_add_device_tree_early(struct device_node *dn) { @@ -893,20 +892,6 @@ void eeh_add_device_tree_early(struct device_node *dn) } EXPORT_SYMBOL_GPL(eeh_add_device_tree_early); -void eeh_add_device_tree_late(struct pci_bus *bus) -{ - struct pci_dev *dev; - - list_for_each_entry(dev, &bus->devices, bus_list) { - eeh_add_device_late(dev); - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { - struct pci_bus *subbus = dev->subordinate; - if (subbus) - eeh_add_device_tree_late(subbus); - } - } -} - /** * eeh_add_device_late - perform EEH initialization for the indicated pci device * @dev: pci device for which to set up EEH @@ -914,7 +899,7 @@ void eeh_add_device_tree_late(struct pci_bus *bus) * This routine must be used to complete EEH initialization for PCI * devices that were added after system boot (e.g. hotplug, dlpar). */ -void eeh_add_device_late(struct pci_dev *dev) +static void eeh_add_device_late(struct pci_dev *dev) { struct device_node *dn; struct pci_dn *pdn; @@ -933,16 +918,33 @@ void eeh_add_device_late(struct pci_dev *dev) pci_addr_cache_insert_device (dev); } -EXPORT_SYMBOL_GPL(eeh_add_device_late); + +void eeh_add_device_tree_late(struct pci_bus *bus) +{ + struct pci_dev *dev; + + list_for_each_entry(dev, &bus->devices, bus_list) { + eeh_add_device_late(dev); + if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { + struct pci_bus *subbus = dev->subordinate; + if (subbus) + eeh_add_device_tree_late(subbus); + } + } +} +EXPORT_SYMBOL_GPL(eeh_add_device_tree_late); /** * eeh_remove_device - undo EEH setup for the indicated pci device * @dev: pci device to be removed * - * This routine should be when a device is removed from a running - * system (e.g. by hotplug or dlpar). + * This routine should be called when a device is removed from + * a running system (e.g. by hotplug or dlpar). It unregisters + * the PCI device from the EEH subsystem. I/O errors affecting + * this device will no longer be detected after this call; thus, + * i/o errors affecting this slot may leave this device unusable. 
*/ -void eeh_remove_device(struct pci_dev *dev) +static void eeh_remove_device(struct pci_dev *dev) { struct device_node *dn; if (!dev || !eeh_subsystem_enabled) @@ -958,21 +960,17 @@ void eeh_remove_device(struct pci_dev *dev) PCI_DN(dn)->pcidev = NULL; pci_dev_put (dev); } -EXPORT_SYMBOL_GPL(eeh_remove_device); void eeh_remove_bus_device(struct pci_dev *dev) { + struct pci_bus *bus = dev->subordinate; + struct pci_dev *child, *tmp; + eeh_remove_device(dev); - if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { - struct pci_bus *bus = dev->subordinate; - struct list_head *ln; - if (!bus) - return; - for (ln = bus->devices.next; ln != &bus->devices; ln = ln->next) { - struct pci_dev *pdev = pci_dev_b(ln); - if (pdev) - eeh_remove_bus_device(pdev); - } + + if (bus && dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) { + list_for_each_entry_safe(child, tmp, &bus->devices, bus_list) + eeh_remove_bus_device(child); } } EXPORT_SYMBOL_GPL(eeh_remove_bus_device); diff --git a/trunk/arch/powerpc/platforms/pseries/eeh_driver.c b/trunk/arch/powerpc/platforms/pseries/eeh_driver.c index cc2495a0cdd5..1fba695e32e8 100644 --- a/trunk/arch/powerpc/platforms/pseries/eeh_driver.c +++ b/trunk/arch/powerpc/platforms/pseries/eeh_driver.c @@ -293,15 +293,16 @@ void handle_eeh_events (struct eeh_event *event) frozen_pdn = PCI_DN(frozen_dn); frozen_pdn->eeh_freeze_count++; - pci_str = pci_name (frozen_pdn->pcidev); - drv_str = pcid_name (frozen_pdn->pcidev); - if (!pci_str) { + if (frozen_pdn->pcidev) { + pci_str = pci_name (frozen_pdn->pcidev); + drv_str = pcid_name (frozen_pdn->pcidev); + } else { pci_str = pci_name (event->dev); drv_str = pcid_name (event->dev); } if (frozen_pdn->eeh_freeze_count > EEH_MAX_ALLOWED_FREEZES) - goto hard_fail; + goto excess_failures; /* If the reset state is a '5' and the time to reset is 0 (infinity) * or is more then 15 seconds, then mark this as a permanent failure. @@ -356,7 +357,7 @@ void handle_eeh_events (struct eeh_event *event) return; -hard_fail: +excess_failures: /* * About 90% of all real-life EEH failures in the field * are due to poorly seated PCI cards. Only 10% or so are @@ -367,7 +368,15 @@ void handle_eeh_events (struct eeh_event *event) "and has been permanently disabled. Please try reseating\n" "this device or replacing it.\n", drv_str, pci_str, frozen_pdn->eeh_freeze_count); + goto perm_error; + +hard_fail: + printk(KERN_ERR + "EEH: Unable to recover from failure of PCI device %s - %s\n" + "Please try reseating this device or replacing it.\n", + drv_str, pci_str); +perm_error: eeh_slot_error_detail(frozen_pdn, 2 /* Permanent Error */); /* Notify all devices that they're about to go down. */ diff --git a/trunk/arch/powerpc/platforms/pseries/eeh_event.c b/trunk/arch/powerpc/platforms/pseries/eeh_event.c index 9a9961f27480..a1bda6f96fd1 100644 --- a/trunk/arch/powerpc/platforms/pseries/eeh_event.c +++ b/trunk/arch/powerpc/platforms/pseries/eeh_event.c @@ -19,7 +19,9 @@ */ #include +#include #include +#include #include #include @@ -37,14 +39,18 @@ LIST_HEAD(eeh_eventlist); static void eeh_thread_launcher(void *); DECLARE_WORK(eeh_event_wq, eeh_thread_launcher, NULL); +/* Serialize reset sequences for a given pci device */ +DEFINE_MUTEX(eeh_event_mutex); + /** - * eeh_event_handler - dispatch EEH events. The detection of a frozen - * slot can occur inside an interrupt, where it can be hard to do - * anything about it. 
The goal of this routine is to pull these - * detection events out of the context of the interrupt handler, and - * re-dispatch them for processing at a later time in a normal context. - * + * eeh_event_handler - dispatch EEH events. * @dummy - unused + * + * The detection of a frozen slot can occur inside an interrupt, + * where it can be hard to do anything about it. The goal of this + * routine is to pull these detection events out of the context + * of the interrupt handler, and re-dispatch them for processing + * at a later time in a normal context. */ static int eeh_event_handler(void * dummy) { @@ -64,23 +70,24 @@ static int eeh_event_handler(void * dummy) event = list_entry(eeh_eventlist.next, struct eeh_event, list); list_del(&event->list); } - - if (event) - eeh_mark_slot(event->dn, EEH_MODE_RECOVERING); - spin_unlock_irqrestore(&eeh_eventlist_lock, flags); + if (event == NULL) break; + /* Serialize processing of EEH events */ + mutex_lock(&eeh_event_mutex); + eeh_mark_slot(event->dn, EEH_MODE_RECOVERING); + printk(KERN_INFO "EEH: Detected PCI bus error on device %s\n", pci_name(event->dev)); handle_eeh_events(event); eeh_clear_slot(event->dn, EEH_MODE_RECOVERING); - pci_dev_put(event->dev); kfree(event); + mutex_unlock(&eeh_event_mutex); } return 0; @@ -88,7 +95,6 @@ static int eeh_event_handler(void * dummy) /** * eeh_thread_launcher - * * @dummy - unused */ static void eeh_thread_launcher(void *dummy) diff --git a/trunk/arch/powerpc/platforms/pseries/hvCall.S b/trunk/arch/powerpc/platforms/pseries/hvCall.S index db7c19fe9297..c9ff547f9d25 100644 --- a/trunk/arch/powerpc/platforms/pseries/hvCall.S +++ b/trunk/arch/powerpc/platforms/pseries/hvCall.S @@ -127,3 +127,103 @@ _GLOBAL(plpar_hcall_4out) mtcrf 0xff,r0 blr /* return r3 = status */ + +/* plpar_hcall_7arg_7ret(unsigned long opcode, R3 + unsigned long arg1, R4 + unsigned long arg2, R5 + unsigned long arg3, R6 + unsigned long arg4, R7 + unsigned long arg5, R8 + unsigned long arg6, R9 + unsigned long arg7, R10 + unsigned long *out1, 112(R1) + unsigned long *out2, 110(R1) + unsigned long *out3, 108(R1) + unsigned long *out4, 106(R1) + unsigned long *out5, 104(R1) + unsigned long *out6, 102(R1) + unsigned long *out7); 100(R1) +*/ +_GLOBAL(plpar_hcall_7arg_7ret) + HMT_MEDIUM + + mfcr r0 + stw r0,8(r1) + + HVSC /* invoke the hypervisor */ + + lwz r0,8(r1) + + ld r11,STK_PARM(r11)(r1) /* Fetch r4 ret arg */ + std r4,0(r11) + ld r11,STK_PARM(r12)(r1) /* Fetch r5 ret arg */ + std r5,0(r11) + ld r11,STK_PARM(r13)(r1) /* Fetch r6 ret arg */ + std r6,0(r11) + ld r11,STK_PARM(r14)(r1) /* Fetch r7 ret arg */ + std r7,0(r11) + ld r11,STK_PARM(r15)(r1) /* Fetch r8 ret arg */ + std r8,0(r11) + ld r11,STK_PARM(r16)(r1) /* Fetch r9 ret arg */ + std r9,0(r11) + ld r11,STK_PARM(r17)(r1) /* Fetch r10 ret arg */ + std r10,0(r11) + + mtcrf 0xff,r0 + + blr /* return r3 = status */ + +/* plpar_hcall_9arg_9ret(unsigned long opcode, R3 + unsigned long arg1, R4 + unsigned long arg2, R5 + unsigned long arg3, R6 + unsigned long arg4, R7 + unsigned long arg5, R8 + unsigned long arg6, R9 + unsigned long arg7, R10 + unsigned long arg8, 112(R1) + unsigned long arg9, 110(R1) + unsigned long *out1, 108(R1) + unsigned long *out2, 106(R1) + unsigned long *out3, 104(R1) + unsigned long *out4, 102(R1) + unsigned long *out5, 100(R1) + unsigned long *out6, 98(R1) + unsigned long *out7); 96(R1) + unsigned long *out8, 94(R1) + unsigned long *out9, 92(R1) +*/ +_GLOBAL(plpar_hcall_9arg_9ret) + HMT_MEDIUM + + mfcr r0 + stw r0,8(r1) + + ld r11,STK_PARM(r11)(r1) /* put 
arg8 in R11 */ + ld r12,STK_PARM(r12)(r1) /* put arg9 in R12 */ + + HVSC /* invoke the hypervisor */ + + ld r0,STK_PARM(r13)(r1) /* Fetch r4 ret arg */ + stdx r4,r0,r0 + ld r0,STK_PARM(r14)(r1) /* Fetch r5 ret arg */ + stdx r5,r0,r0 + ld r0,STK_PARM(r15)(r1) /* Fetch r6 ret arg */ + stdx r6,r0,r0 + ld r0,STK_PARM(r16)(r1) /* Fetch r7 ret arg */ + stdx r7,r0,r0 + ld r0,STK_PARM(r17)(r1) /* Fetch r8 ret arg */ + stdx r8,r0,r0 + ld r0,STK_PARM(r18)(r1) /* Fetch r9 ret arg */ + stdx r9,r0,r0 + ld r0,STK_PARM(r19)(r1) /* Fetch r10 ret arg */ + stdx r10,r0,r0 + ld r0,STK_PARM(r20)(r1) /* Fetch r11 ret arg */ + stdx r11,r0,r0 + ld r0,STK_PARM(r21)(r1) /* Fetch r12 ret arg */ + stdx r12,r0,r0 + + lwz r0,8(r1) + mtcrf 0xff,r0 + + blr /* return r3 = status */ diff --git a/trunk/arch/powerpc/platforms/pseries/hvconsole.c b/trunk/arch/powerpc/platforms/pseries/hvconsole.c index ba6befd96636..a72a987f1d4d 100644 --- a/trunk/arch/powerpc/platforms/pseries/hvconsole.c +++ b/trunk/arch/powerpc/platforms/pseries/hvconsole.c @@ -41,7 +41,7 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count) unsigned long got; if (plpar_hcall(H_GET_TERM_CHAR, vtermno, 0, 0, 0, &got, - (unsigned long *)buf, (unsigned long *)buf+1) == H_Success) + (unsigned long *)buf, (unsigned long *)buf+1) == H_SUCCESS) return got; return 0; } @@ -69,9 +69,9 @@ int hvc_put_chars(uint32_t vtermno, const char *buf, int count) ret = plpar_hcall_norets(H_PUT_TERM_CHAR, vtermno, count, lbuf[0], lbuf[1]); - if (ret == H_Success) + if (ret == H_SUCCESS) return count; - if (ret == H_Busy) + if (ret == H_BUSY) return 0; return -EIO; } diff --git a/trunk/arch/powerpc/platforms/pseries/hvcserver.c b/trunk/arch/powerpc/platforms/pseries/hvcserver.c index 22bfb5c89db9..fcf4b4cbeaf3 100644 --- a/trunk/arch/powerpc/platforms/pseries/hvcserver.c +++ b/trunk/arch/powerpc/platforms/pseries/hvcserver.c @@ -43,21 +43,21 @@ MODULE_VERSION(HVCS_ARCH_VERSION); static int hvcs_convert(long to_convert) { switch (to_convert) { - case H_Success: + case H_SUCCESS: return 0; - case H_Parameter: + case H_PARAMETER: return -EINVAL; - case H_Hardware: + case H_HARDWARE: return -EIO; - case H_Busy: - case H_LongBusyOrder1msec: - case H_LongBusyOrder10msec: - case H_LongBusyOrder100msec: - case H_LongBusyOrder1sec: - case H_LongBusyOrder10sec: - case H_LongBusyOrder100sec: + case H_BUSY: + case H_LONG_BUSY_ORDER_1_MSEC: + case H_LONG_BUSY_ORDER_10_MSEC: + case H_LONG_BUSY_ORDER_100_MSEC: + case H_LONG_BUSY_ORDER_1_SEC: + case H_LONG_BUSY_ORDER_10_SEC: + case H_LONG_BUSY_ORDER_100_SEC: return -EBUSY; - case H_Function: /* fall through */ + case H_FUNCTION: /* fall through */ default: return -EPERM; } diff --git a/trunk/arch/powerpc/platforms/pseries/lpar.c b/trunk/arch/powerpc/platforms/pseries/lpar.c index 8952528d31ac..634b7d06d3cc 100644 --- a/trunk/arch/powerpc/platforms/pseries/lpar.c +++ b/trunk/arch/powerpc/platforms/pseries/lpar.c @@ -54,7 +54,8 @@ EXPORT_SYMBOL(plpar_hcall); EXPORT_SYMBOL(plpar_hcall_4out); EXPORT_SYMBOL(plpar_hcall_norets); EXPORT_SYMBOL(plpar_hcall_8arg_2ret); - +EXPORT_SYMBOL(plpar_hcall_7arg_7ret); +EXPORT_SYMBOL(plpar_hcall_9arg_9ret); extern void pSeries_find_serial_port(void); @@ -72,7 +73,7 @@ static void udbg_hvsi_putc(char c) do { rc = plpar_put_term_char(vtermno, sizeof(packet), packet); - } while (rc == H_Busy); + } while (rc == H_BUSY); } static long hvsi_udbg_buf_len; @@ -85,7 +86,7 @@ static int udbg_hvsi_getc_poll(void) if (hvsi_udbg_buf_len == 0) { rc = plpar_get_term_char(vtermno, &hvsi_udbg_buf_len, hvsi_udbg_buf); - if 
(rc != H_Success || hvsi_udbg_buf[0] != 0xff) { + if (rc != H_SUCCESS || hvsi_udbg_buf[0] != 0xff) { /* bad read or non-data packet */ hvsi_udbg_buf_len = 0; } else { @@ -139,7 +140,7 @@ static void udbg_putcLP(char c) buf[0] = c; do { rc = plpar_put_term_char(vtermno, 1, buf); - } while(rc == H_Busy); + } while(rc == H_BUSY); } /* Buffered chars getc */ @@ -158,7 +159,7 @@ static int udbg_getc_pollLP(void) /* get some more chars. */ inbuflen = 0; rc = plpar_get_term_char(vtermno, &inbuflen, buf); - if (rc != H_Success) + if (rc != H_SUCCESS) inbuflen = 0; /* otherwise inbuflen is garbage */ } if (inbuflen <= 0 || inbuflen > 16) { @@ -304,7 +305,7 @@ long pSeries_lpar_hpte_insert(unsigned long hpte_group, lpar_rc = plpar_hcall(H_ENTER, flags, hpte_group, hpte_v, hpte_r, &slot, &dummy0, &dummy1); - if (unlikely(lpar_rc == H_PTEG_Full)) { + if (unlikely(lpar_rc == H_PTEG_FULL)) { if (!(vflags & HPTE_V_BOLTED)) DBG_LOW(" full\n"); return -1; @@ -315,7 +316,7 @@ long pSeries_lpar_hpte_insert(unsigned long hpte_group, * will fail. However we must catch the failure in hash_page * or we will loop forever, so return -2 in this case. */ - if (unlikely(lpar_rc != H_Success)) { + if (unlikely(lpar_rc != H_SUCCESS)) { if (!(vflags & HPTE_V_BOLTED)) DBG_LOW(" lpar err %d\n", lpar_rc); return -2; @@ -346,9 +347,9 @@ static long pSeries_lpar_hpte_remove(unsigned long hpte_group) /* don't remove a bolted entry */ lpar_rc = plpar_pte_remove(H_ANDCOND, hpte_group + slot_offset, (0x1UL << 4), &dummy1, &dummy2); - if (lpar_rc == H_Success) + if (lpar_rc == H_SUCCESS) return i; - BUG_ON(lpar_rc != H_Not_Found); + BUG_ON(lpar_rc != H_NOT_FOUND); slot_offset++; slot_offset &= 0x7; @@ -391,14 +392,14 @@ static long pSeries_lpar_hpte_updatepp(unsigned long slot, lpar_rc = plpar_pte_protect(flags, slot, want_v & HPTE_V_AVPN); - if (lpar_rc == H_Not_Found) { + if (lpar_rc == H_NOT_FOUND) { DBG_LOW("not found !\n"); return -1; } DBG_LOW("ok\n"); - BUG_ON(lpar_rc != H_Success); + BUG_ON(lpar_rc != H_SUCCESS); return 0; } @@ -417,7 +418,7 @@ static unsigned long pSeries_lpar_hpte_getword0(unsigned long slot) lpar_rc = plpar_pte_read(flags, slot, &dword0, &dummy_word1); - BUG_ON(lpar_rc != H_Success); + BUG_ON(lpar_rc != H_SUCCESS); return dword0; } @@ -468,7 +469,7 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp, flags = newpp & 7; lpar_rc = plpar_pte_protect(flags, slot, 0); - BUG_ON(lpar_rc != H_Success); + BUG_ON(lpar_rc != H_SUCCESS); } static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long va, @@ -484,10 +485,10 @@ static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long va, want_v = hpte_encode_v(va, psize); lpar_rc = plpar_pte_remove(H_AVPN, slot, want_v & HPTE_V_AVPN, &dummy1, &dummy2); - if (lpar_rc == H_Not_Found) + if (lpar_rc == H_NOT_FOUND) return; - BUG_ON(lpar_rc != H_Success); + BUG_ON(lpar_rc != H_SUCCESS); } /* diff --git a/trunk/arch/powerpc/platforms/pseries/setup.c b/trunk/arch/powerpc/platforms/pseries/setup.c index b2fbf8ba8fbb..5eb55ef1c91c 100644 --- a/trunk/arch/powerpc/platforms/pseries/setup.c +++ b/trunk/arch/powerpc/platforms/pseries/setup.c @@ -463,7 +463,7 @@ static void pseries_dedicated_idle_sleep(void) * very low priority. The cede enables interrupts, which * doesn't matter here. 
*/ - if (!lppaca[cpu ^ 1].idle || poll_pending() == H_Pending) + if (!lppaca[cpu ^ 1].idle || poll_pending() == H_PENDING) cede_processor(); out: diff --git a/trunk/arch/powerpc/platforms/pseries/vio.c b/trunk/arch/powerpc/platforms/pseries/vio.c index 866379b80c09..8e53e04ada8b 100644 --- a/trunk/arch/powerpc/platforms/pseries/vio.c +++ b/trunk/arch/powerpc/platforms/pseries/vio.c @@ -258,7 +258,7 @@ EXPORT_SYMBOL(vio_find_node); int vio_enable_interrupts(struct vio_dev *dev) { int rc = h_vio_signal(dev->unit_address, VIO_IRQ_ENABLE); - if (rc != H_Success) + if (rc != H_SUCCESS) printk(KERN_ERR "vio: Error 0x%x enabling interrupts\n", rc); return rc; } @@ -267,7 +267,7 @@ EXPORT_SYMBOL(vio_enable_interrupts); int vio_disable_interrupts(struct vio_dev *dev) { int rc = h_vio_signal(dev->unit_address, VIO_IRQ_DISABLE); - if (rc != H_Success) + if (rc != H_SUCCESS) printk(KERN_ERR "vio: Error 0x%x disabling interrupts\n", rc); return rc; } diff --git a/trunk/arch/powerpc/platforms/pseries/xics.c b/trunk/arch/powerpc/platforms/pseries/xics.c index 4864cb32be25..2d60ea30fed6 100644 --- a/trunk/arch/powerpc/platforms/pseries/xics.c +++ b/trunk/arch/powerpc/platforms/pseries/xics.c @@ -168,7 +168,7 @@ static int pSeriesLP_xirr_info_get(int n_cpu) unsigned long return_value; lpar_rc = plpar_xirr(&return_value); - if (lpar_rc != H_Success) + if (lpar_rc != H_SUCCESS) panic(" bad return code xirr - rc = %lx \n", lpar_rc); return (int)return_value; } @@ -179,7 +179,7 @@ static void pSeriesLP_xirr_info_set(int n_cpu, int value) unsigned long val64 = value & 0xffffffff; lpar_rc = plpar_eoi(val64); - if (lpar_rc != H_Success) + if (lpar_rc != H_SUCCESS) panic("bad return code EOI - rc = %ld, value=%lx\n", lpar_rc, val64); } @@ -189,7 +189,7 @@ void pSeriesLP_cppr_info(int n_cpu, u8 value) unsigned long lpar_rc; lpar_rc = plpar_cppr(value); - if (lpar_rc != H_Success) + if (lpar_rc != H_SUCCESS) panic("bad return code cppr - rc = %lx\n", lpar_rc); } @@ -198,7 +198,7 @@ static void pSeriesLP_qirr_info(int n_cpu , u8 value) unsigned long lpar_rc; lpar_rc = plpar_ipi(get_hard_smp_processor_id(n_cpu), value); - if (lpar_rc != H_Success) + if (lpar_rc != H_SUCCESS) panic("bad return code qirr - rc = %lx\n", lpar_rc); } diff --git a/trunk/arch/s390/kernel/smp.c b/trunk/arch/s390/kernel/smp.c index 2b8841f85534..343120c9223d 100644 --- a/trunk/arch/s390/kernel/smp.c +++ b/trunk/arch/s390/kernel/smp.c @@ -801,7 +801,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) */ print_cpu_info(&S390_lowcore.cpu_data); - for_each_cpu(i) { + for_each_possible_cpu(i) { lowcore_ptr[i] = (struct _lowcore *) __get_free_pages(GFP_KERNEL|GFP_DMA, sizeof(void*) == 8 ? 
1 : 0); @@ -831,7 +831,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus) #endif set_prefix((u32)(unsigned long) lowcore_ptr[smp_processor_id()]); - for_each_cpu(cpu) + for_each_possible_cpu(cpu) if (cpu != smp_processor_id()) smp_create_idle(cpu); } @@ -868,7 +868,7 @@ static int __init topology_init(void) int cpu; int ret; - for_each_cpu(cpu) { + for_each_possible_cpu(cpu) { ret = register_cpu(&per_cpu(cpu_devices, cpu), cpu, NULL); if (ret) printk(KERN_WARNING "topology_init: register_cpu %d " diff --git a/trunk/arch/sh/kernel/cpu/init.c b/trunk/arch/sh/kernel/cpu/init.c index cf94e8ef17c5..868e68b28880 100644 --- a/trunk/arch/sh/kernel/cpu/init.c +++ b/trunk/arch/sh/kernel/cpu/init.c @@ -30,7 +30,7 @@ static int x##_disabled __initdata = 0; \ static int __init x##_setup(char *opts) \ { \ x##_disabled = 1; \ - return 0; \ + return 1; \ } \ __setup("no" __stringify(x), x##_setup); diff --git a/trunk/arch/sh/kernel/setup.c b/trunk/arch/sh/kernel/setup.c index 7ee4ca203616..bb229ef030f3 100644 --- a/trunk/arch/sh/kernel/setup.c +++ b/trunk/arch/sh/kernel/setup.c @@ -401,7 +401,7 @@ static int __init topology_init(void) { int cpu_id; - for_each_cpu(cpu_id) + for_each_possible_cpu(cpu_id) register_cpu(&cpu[cpu_id], cpu_id, NULL); return 0; diff --git a/trunk/arch/sparc/kernel/systbls.S b/trunk/arch/sparc/kernel/systbls.S index 768de64b371f..fbbec5e761c6 100644 --- a/trunk/arch/sparc/kernel/systbls.S +++ b/trunk/arch/sparc/kernel/systbls.S @@ -64,13 +64,13 @@ sys_call_table: /*215*/ .long sys_ipc, sys_sigreturn, sys_clone, sys_ioprio_get, sys_adjtimex /*220*/ .long sys_sigprocmask, sys_ni_syscall, sys_delete_module, sys_ni_syscall, sys_getpgid /*225*/ .long sys_bdflush, sys_sysfs, sys_nis_syscall, sys_setfsuid16, sys_setfsgid16 -/*230*/ .long sys_select, sys_time, sys_nis_syscall, sys_stime, sys_statfs64 +/*230*/ .long sys_select, sys_time, sys_splice, sys_stime, sys_statfs64 /* "We are the Knights of the Forest of Ni!!" 
*/ /*235*/ .long sys_fstatfs64, sys_llseek, sys_mlock, sys_munlock, sys_mlockall /*240*/ .long sys_munlockall, sys_sched_setparam, sys_sched_getparam, sys_sched_setscheduler, sys_sched_getscheduler /*245*/ .long sys_sched_yield, sys_sched_get_priority_max, sys_sched_get_priority_min, sys_sched_rr_get_interval, sys_nanosleep /*250*/ .long sparc_mremap, sys_sysctl, sys_getsid, sys_fdatasync, sys_nfsservctl -/*255*/ .long sys_nis_syscall, sys_clock_settime, sys_clock_gettime, sys_clock_getres, sys_clock_nanosleep +/*255*/ .long sys_sync_file_range, sys_clock_settime, sys_clock_gettime, sys_clock_getres, sys_clock_nanosleep /*260*/ .long sys_sched_getaffinity, sys_sched_setaffinity, sys_timer_settime, sys_timer_gettime, sys_timer_getoverrun /*265*/ .long sys_timer_delete, sys_timer_create, sys_nis_syscall, sys_io_setup, sys_io_destroy /*270*/ .long sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink diff --git a/trunk/arch/sparc64/defconfig b/trunk/arch/sparc64/defconfig index 900fb0b940d8..30389085a359 100644 --- a/trunk/arch/sparc64/defconfig +++ b/trunk/arch/sparc64/defconfig @@ -1,7 +1,7 @@ # # Automatically generated make config: don't edit # Linux kernel version: 2.6.16 -# Sun Mar 26 14:58:11 2006 +# Fri Mar 31 01:40:57 2006 # CONFIG_SPARC=y CONFIG_SPARC64=y @@ -180,6 +180,7 @@ CONFIG_SYN_COOKIES=y CONFIG_INET_AH=y CONFIG_INET_ESP=y CONFIG_INET_IPCOMP=y +CONFIG_INET_XFRM_TUNNEL=y CONFIG_INET_TUNNEL=y CONFIG_INET_DIAG=y CONFIG_INET_TCP_DIAG=y @@ -203,6 +204,7 @@ CONFIG_IPV6_ROUTE_INFO=y CONFIG_INET6_AH=m CONFIG_INET6_ESP=m CONFIG_INET6_IPCOMP=m +CONFIG_INET6_XFRM_TUNNEL=m CONFIG_INET6_TUNNEL=m CONFIG_IPV6_TUNNEL=m # CONFIG_NETFILTER is not set @@ -308,7 +310,6 @@ CONFIG_BLK_DEV_NBD=m # CONFIG_BLK_DEV_SX8 is not set CONFIG_BLK_DEV_UB=m # CONFIG_BLK_DEV_RAM is not set -CONFIG_BLK_DEV_RAM_COUNT=16 # CONFIG_BLK_DEV_INITRD is not set CONFIG_CDROM_PKTCDVD=m CONFIG_CDROM_PKTCDVD_BUFFERS=8 @@ -449,6 +450,7 @@ CONFIG_MD_RAID0=m CONFIG_MD_RAID1=m CONFIG_MD_RAID10=m CONFIG_MD_RAID5=m +# CONFIG_MD_RAID5_RESHAPE is not set CONFIG_MD_RAID6=m CONFIG_MD_MULTIPATH=m # CONFIG_MD_FAULTY is not set @@ -741,9 +743,7 @@ CONFIG_I2C_ALGOBIT=y # CONFIG_SENSORS_PCF8574 is not set # CONFIG_SENSORS_PCA9539 is not set # CONFIG_SENSORS_PCF8591 is not set -# CONFIG_SENSORS_RTC8564 is not set # CONFIG_SENSORS_MAX6875 is not set -# CONFIG_RTC_X1205_I2C is not set # CONFIG_I2C_DEBUG_CORE is not set # CONFIG_I2C_DEBUG_ALGO is not set # CONFIG_I2C_DEBUG_BUS is not set @@ -826,6 +826,7 @@ CONFIG_FB_CFB_FILLRECT=y CONFIG_FB_CFB_COPYAREA=y CONFIG_FB_CFB_IMAGEBLIT=y # CONFIG_FB_MACMODES is not set +# CONFIG_FB_FIRMWARE_EDID is not set CONFIG_FB_MODE_HELPERS=y CONFIG_FB_TILEBLITTING=y # CONFIG_FB_CIRRUS is not set @@ -1116,6 +1117,11 @@ CONFIG_USB_HIDDEV=y # EDAC - error detection and reporting (RAS) (EXPERIMENTAL) # +# +# Real Time Clock +# +# CONFIG_RTC_CLASS is not set + # # Misc Linux/SPARC drivers # diff --git a/trunk/arch/sparc64/kernel/smp.c b/trunk/arch/sparc64/kernel/smp.c index 7dc28a484268..8175a6968c6b 100644 --- a/trunk/arch/sparc64/kernel/smp.c +++ b/trunk/arch/sparc64/kernel/smp.c @@ -830,9 +830,16 @@ void smp_call_function_client(int irq, struct pt_regs *regs) static void tsb_sync(void *info) { + struct trap_per_cpu *tp = &trap_block[raw_smp_processor_id()]; struct mm_struct *mm = info; - if (current->active_mm == mm) + /* It is not valid to test "currrent->active_mm == mm" here. + * + * The value of "current" is not changed atomically with + * switch_mm(). 
But that's OK, we just need to check the + * current cpu's trap block PGD physical address. + */ + if (tp->pgd_paddr == __pa(mm->pgd)) tsb_context_switch(mm); } diff --git a/trunk/arch/sparc64/kernel/sys32.S b/trunk/arch/sparc64/kernel/sys32.S index c4a1cef4b1e5..86dd5cb81e09 100644 --- a/trunk/arch/sparc64/kernel/sys32.S +++ b/trunk/arch/sparc64/kernel/sys32.S @@ -136,6 +136,8 @@ SIGN1(sys32_getpeername, sys_getpeername, %o0) SIGN1(sys32_getsockname, sys_getsockname, %o0) SIGN2(sys32_ioprio_get, sys_ioprio_get, %o0, %o1) SIGN3(sys32_ioprio_set, sys_ioprio_set, %o0, %o1, %o2) +SIGN2(sys32_splice, sys_splice, %o0, %o1) +SIGN2(sys32_sync_file_range, compat_sync_file_range, %o0, %o5) .globl sys32_mmap2 sys32_mmap2: diff --git a/trunk/arch/sparc64/kernel/sys_sparc32.c b/trunk/arch/sparc64/kernel/sys_sparc32.c index 2e906bad56fa..31030bf00f1a 100644 --- a/trunk/arch/sparc64/kernel/sys_sparc32.c +++ b/trunk/arch/sparc64/kernel/sys_sparc32.c @@ -1069,3 +1069,11 @@ long sys32_lookup_dcookie(unsigned long cookie_high, return sys_lookup_dcookie((cookie_high << 32) | cookie_low, buf, len); } + +long compat_sync_file_range(int fd, unsigned long off_high, unsigned long off_low, unsigned long nb_high, unsigned long nb_low, int flags) +{ + return sys_sync_file_range(fd, + (off_high << 32) | off_low, + (nb_high << 32) | nb_low, + flags); +} diff --git a/trunk/arch/sparc64/kernel/systbls.S b/trunk/arch/sparc64/kernel/systbls.S index 3b250f2318fd..857b82c82875 100644 --- a/trunk/arch/sparc64/kernel/systbls.S +++ b/trunk/arch/sparc64/kernel/systbls.S @@ -66,12 +66,12 @@ sys_call_table32: .word sys32_ipc, sys32_sigreturn, sys_clone, sys32_ioprio_get, compat_sys_adjtimex /*220*/ .word sys32_sigprocmask, sys_ni_syscall, sys32_delete_module, sys_ni_syscall, sys32_getpgid .word sys32_bdflush, sys32_sysfs, sys_nis_syscall, sys32_setfsuid16, sys32_setfsgid16 -/*230*/ .word sys32_select, compat_sys_time, sys_nis_syscall, compat_sys_stime, compat_sys_statfs64 +/*230*/ .word sys32_select, compat_sys_time, sys32_splice, compat_sys_stime, compat_sys_statfs64 .word compat_sys_fstatfs64, sys_llseek, sys_mlock, sys_munlock, sys32_mlockall /*240*/ .word sys_munlockall, sys32_sched_setparam, sys32_sched_getparam, sys32_sched_setscheduler, sys32_sched_getscheduler .word sys_sched_yield, sys32_sched_get_priority_max, sys32_sched_get_priority_min, sys32_sched_rr_get_interval, compat_sys_nanosleep /*250*/ .word sys32_mremap, sys32_sysctl, sys32_getsid, sys_fdatasync, sys32_nfsservctl - .word sys_ni_syscall, compat_sys_clock_settime, compat_sys_clock_gettime, compat_sys_clock_getres, sys32_clock_nanosleep + .word sys32_sync_file_range, compat_sys_clock_settime, compat_sys_clock_gettime, compat_sys_clock_getres, sys32_clock_nanosleep /*260*/ .word compat_sys_sched_getaffinity, compat_sys_sched_setaffinity, sys32_timer_settime, compat_sys_timer_gettime, sys_timer_getoverrun .word sys_timer_delete, compat_sys_timer_create, sys_ni_syscall, compat_sys_io_setup, sys_io_destroy /*270*/ .word sys32_io_submit, sys_io_cancel, compat_sys_io_getevents, sys32_mq_open, sys_mq_unlink @@ -135,12 +135,12 @@ sys_call_table: .word sys_ipc, sys_nis_syscall, sys_clone, sys_ioprio_get, sys_adjtimex /*220*/ .word sys_nis_syscall, sys_ni_syscall, sys_delete_module, sys_ni_syscall, sys_getpgid .word sys_bdflush, sys_sysfs, sys_nis_syscall, sys_setfsuid, sys_setfsgid -/*230*/ .word sys_select, sys_nis_syscall, sys_nis_syscall, sys_stime, sys_statfs64 +/*230*/ .word sys_select, sys_nis_syscall, sys_splice, sys_stime, sys_statfs64 .word sys_fstatfs64, 
sys_llseek, sys_mlock, sys_munlock, sys_mlockall /*240*/ .word sys_munlockall, sys_sched_setparam, sys_sched_getparam, sys_sched_setscheduler, sys_sched_getscheduler .word sys_sched_yield, sys_sched_get_priority_max, sys_sched_get_priority_min, sys_sched_rr_get_interval, sys_nanosleep /*250*/ .word sys64_mremap, sys_sysctl, sys_getsid, sys_fdatasync, sys_nfsservctl - .word sys_ni_syscall, sys_clock_settime, sys_clock_gettime, sys_clock_getres, sys_clock_nanosleep + .word sys_sync_file_range, sys_clock_settime, sys_clock_gettime, sys_clock_getres, sys_clock_nanosleep /*260*/ .word sys_sched_getaffinity, sys_sched_setaffinity, sys_timer_settime, sys_timer_gettime, sys_timer_getoverrun .word sys_timer_delete, sys_timer_create, sys_ni_syscall, sys_io_setup, sys_io_destroy /*270*/ .word sys_io_submit, sys_io_cancel, sys_io_getevents, sys_mq_open, sys_mq_unlink diff --git a/trunk/arch/sparc64/mm/fault.c b/trunk/arch/sparc64/mm/fault.c index 0db2f7d9fab5..6e002aacb961 100644 --- a/trunk/arch/sparc64/mm/fault.c +++ b/trunk/arch/sparc64/mm/fault.c @@ -327,8 +327,12 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs) insn = get_fault_insn(regs, 0); if (!insn) goto continue_fault; + /* All loads, stores and atomics have bits 30 and 31 both set + * in the instruction. Bit 21 is set in all stores, but we + * have to avoid prefetches which also have bit 21 set. + */ if ((insn & 0xc0200000) == 0xc0200000 && - (insn & 0x1780000) != 0x1680000) { + (insn & 0x01780000) != 0x01680000) { /* Don't bother updating thread struct value, * because update_mmu_cache only cares which tlb * the access came from. diff --git a/trunk/arch/sparc64/mm/hugetlbpage.c b/trunk/arch/sparc64/mm/hugetlbpage.c index 074620d413d4..fbbbebbad8a4 100644 --- a/trunk/arch/sparc64/mm/hugetlbpage.c +++ b/trunk/arch/sparc64/mm/hugetlbpage.c @@ -198,6 +198,13 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr) pmd_t *pmd; pte_t *pte = NULL; + /* We must align the address, because our caller will run + * set_huge_pte_at() on whatever we return, which writes out + * all of the sub-ptes for the hugepage range. So we have + * to give it the first such sub-pte. + */ + addr &= HPAGE_MASK; + pgd = pgd_offset(mm, addr); pud = pud_alloc(mm, pgd, addr); if (pud) { diff --git a/trunk/arch/um/Kconfig b/trunk/arch/um/Kconfig index 5982fe2753e0..05fbb20636cb 100644 --- a/trunk/arch/um/Kconfig +++ b/trunk/arch/um/Kconfig @@ -22,6 +22,9 @@ config SBUS config PCI bool +config PCMCIA + bool + config GENERIC_CALIBRATE_DELAY bool default y diff --git a/trunk/arch/um/Makefile b/trunk/arch/um/Makefile index 8d14c7a831be..24790bed2054 100644 --- a/trunk/arch/um/Makefile +++ b/trunk/arch/um/Makefile @@ -20,7 +20,7 @@ core-y += $(ARCH_DIR)/kernel/ \ # Have to precede the include because the included Makefiles reference them. SYMLINK_HEADERS := archparam.h system.h sigcontext.h processor.h ptrace.h \ - module.h vm-flags.h elf.h ldt.h + module.h vm-flags.h elf.h host_ldt.h SYMLINK_HEADERS := $(foreach header,$(SYMLINK_HEADERS),include/asm-um/$(header)) # XXX: The "os" symlink is only used by arch/um/include/os.h, which includes @@ -129,7 +129,7 @@ CPPFLAGS_vmlinux.lds = -U$(SUBARCH) \ -DSTART=$(START) -DELF_ARCH=$(ELF_ARCH) \ -DELF_FORMAT="$(ELF_FORMAT)" $(CPP_MODE-y) \ -DKERNEL_STACK_SIZE=$(STACK_SIZE) \ - -DUNMAP_PATH=arch/um/sys-$(SUBARCH)/unmap_fin.o + -DUNMAP_PATH=arch/um/sys-$(SUBARCH)/unmap.o #The wrappers will select whether using "malloc" or the kernel allocator. 
LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc @@ -150,8 +150,7 @@ CLEAN_FILES += linux x.i gmon.out $(ARCH_DIR)/include/uml-config.h \ $(ARCH_DIR)/include/user_constants.h \ $(ARCH_DIR)/include/kern_constants.h $(ARCH_DIR)/Kconfig.arch -MRPROPER_FILES += $(SYMLINK_HEADERS) $(ARCH_SYMLINKS) \ - $(addprefix $(ARCH_DIR)/kernel/,$(KERN_SYMLINKS)) $(ARCH_DIR)/os +MRPROPER_FILES += $(ARCH_SYMLINKS) archclean: @find . \( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \ diff --git a/trunk/arch/um/Makefile-x86_64 b/trunk/arch/um/Makefile-x86_64 index 38df311e75dc..dfd88b652fbe 100644 --- a/trunk/arch/um/Makefile-x86_64 +++ b/trunk/arch/um/Makefile-x86_64 @@ -1,7 +1,7 @@ # Copyright 2003 - 2004 Pathscale, Inc # Released under the GPL -libs-y += arch/um/sys-x86_64/ +core-y += arch/um/sys-x86_64/ START := 0x60000000 #We #undef __x86_64__ for kernelspace, not for userspace where diff --git a/trunk/arch/um/drivers/daemon_kern.c b/trunk/arch/um/drivers/daemon_kern.c index a61b7b46bc02..53d09ed78b42 100644 --- a/trunk/arch/um/drivers/daemon_kern.c +++ b/trunk/arch/um/drivers/daemon_kern.c @@ -95,18 +95,7 @@ static struct transport daemon_transport = { static int register_daemon(void) { register_transport(&daemon_transport); - return(1); + return 0; } __initcall(register_daemon); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. - * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/drivers/harddog_kern.c b/trunk/arch/um/drivers/harddog_kern.c index 49acb2badf32..d18a974735e6 100644 --- a/trunk/arch/um/drivers/harddog_kern.c +++ b/trunk/arch/um/drivers/harddog_kern.c @@ -104,7 +104,7 @@ static int harddog_release(struct inode *inode, struct file *file) extern int ping_watchdog(int fd); -static ssize_t harddog_write(struct file *file, const char *data, size_t len, +static ssize_t harddog_write(struct file *file, const char __user *data, size_t len, loff_t *ppos) { /* @@ -118,6 +118,7 @@ static ssize_t harddog_write(struct file *file, const char *data, size_t len, static int harddog_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg) { + void __user *argp= (void __user *)arg; static struct watchdog_info ident = { WDIOC_SETTIMEOUT, 0, @@ -127,13 +128,12 @@ static int harddog_ioctl(struct inode *inode, struct file *file, default: return -ENOTTY; case WDIOC_GETSUPPORT: - if(copy_to_user((struct harddog_info *)arg, &ident, - sizeof(ident))) + if(copy_to_user(argp, &ident, sizeof(ident))) return -EFAULT; return 0; case WDIOC_GETSTATUS: case WDIOC_GETBOOTSTATUS: - return put_user(0,(int *)arg); + return put_user(0,(int __user *)argp); case WDIOC_KEEPALIVE: return(ping_watchdog(harddog_out_fd)); } diff --git a/trunk/arch/um/drivers/hostaudio_kern.c b/trunk/arch/um/drivers/hostaudio_kern.c index 59602b81b240..37232f908cd7 100644 --- a/trunk/arch/um/drivers/hostaudio_kern.c +++ b/trunk/arch/um/drivers/hostaudio_kern.c @@ -67,8 +67,8 @@ MODULE_PARM_DESC(mixer, MIXER_HELP); /* /dev/dsp file operations */ -static ssize_t hostaudio_read(struct file *file, char *buffer, size_t count, - loff_t *ppos) +static ssize_t hostaudio_read(struct file *file, char __user *buffer, + size_t count, loff_t *ppos) { struct hostaudio_state *state = file->private_data; void *kbuf; @@ -94,7 +94,7 @@ 
static ssize_t hostaudio_read(struct file *file, char *buffer, size_t count, return(err); } -static ssize_t hostaudio_write(struct file *file, const char *buffer, +static ssize_t hostaudio_write(struct file *file, const char __user *buffer, size_t count, loff_t *ppos) { struct hostaudio_state *state = file->private_data; @@ -152,7 +152,7 @@ static int hostaudio_ioctl(struct inode *inode, struct file *file, case SNDCTL_DSP_CHANNELS: case SNDCTL_DSP_SUBDIVIDE: case SNDCTL_DSP_SETFRAGMENT: - if(get_user(data, (int *) arg)) + if(get_user(data, (int __user *) arg)) return(-EFAULT); break; default: @@ -168,7 +168,7 @@ static int hostaudio_ioctl(struct inode *inode, struct file *file, case SNDCTL_DSP_CHANNELS: case SNDCTL_DSP_SUBDIVIDE: case SNDCTL_DSP_SETFRAGMENT: - if(put_user(data, (int *) arg)) + if(put_user(data, (int __user *) arg)) return(-EFAULT); break; default: diff --git a/trunk/arch/um/drivers/mcast_kern.c b/trunk/arch/um/drivers/mcast_kern.c index c9b078fba03e..3a7af18cf944 100644 --- a/trunk/arch/um/drivers/mcast_kern.c +++ b/trunk/arch/um/drivers/mcast_kern.c @@ -124,18 +124,7 @@ static struct transport mcast_transport = { static int register_mcast(void) { register_transport(&mcast_transport); - return(1); + return 0; } __initcall(register_mcast); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. - * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/drivers/mconsole_kern.c b/trunk/arch/um/drivers/mconsole_kern.c index 1488816588ea..28e3760e8b98 100644 --- a/trunk/arch/um/drivers/mconsole_kern.c +++ b/trunk/arch/um/drivers/mconsole_kern.c @@ -20,6 +20,8 @@ #include "linux/namei.h" #include "linux/proc_fs.h" #include "linux/syscalls.h" +#include "linux/list.h" +#include "linux/mm.h" #include "linux/console.h" #include "asm/irq.h" #include "asm/uaccess.h" @@ -347,6 +349,142 @@ static struct mc_device *mconsole_find_dev(char *name) return(NULL); } +#define UNPLUGGED_PER_PAGE \ + ((PAGE_SIZE - sizeof(struct list_head)) / sizeof(unsigned long)) + +struct unplugged_pages { + struct list_head list; + void *pages[UNPLUGGED_PER_PAGE]; +}; + +static unsigned long long unplugged_pages_count = 0; +static struct list_head unplugged_pages = LIST_HEAD_INIT(unplugged_pages); +static int unplug_index = UNPLUGGED_PER_PAGE; + +static int mem_config(char *str) +{ + unsigned long long diff; + int err = -EINVAL, i, add; + char *ret; + + if(str[0] != '=') + goto out; + + str++; + if(str[0] == '-') + add = 0; + else if(str[0] == '+'){ + add = 1; + } + else goto out; + + str++; + diff = memparse(str, &ret); + if(*ret != '\0') + goto out; + + diff /= PAGE_SIZE; + + for(i = 0; i < diff; i++){ + struct unplugged_pages *unplugged; + void *addr; + + if(add){ + if(list_empty(&unplugged_pages)) + break; + + unplugged = list_entry(unplugged_pages.next, + struct unplugged_pages, list); + if(unplug_index > 0) + addr = unplugged->pages[--unplug_index]; + else { + list_del(&unplugged->list); + addr = unplugged; + unplug_index = UNPLUGGED_PER_PAGE; + } + + free_page((unsigned long) addr); + unplugged_pages_count--; + } + else { + struct page *page; + + page = alloc_page(GFP_ATOMIC); + if(page == NULL) + break; + + unplugged = page_address(page); + if(unplug_index == UNPLUGGED_PER_PAGE){ + INIT_LIST_HEAD(&unplugged->list); + 
list_add(&unplugged->list, &unplugged_pages); + unplug_index = 0; + } + else { + struct list_head *entry = unplugged_pages.next; + addr = unplugged; + + unplugged = list_entry(entry, + struct unplugged_pages, + list); + unplugged->pages[unplug_index++] = addr; + err = os_drop_memory(addr, PAGE_SIZE); + if(err) + printk("Failed to release memory - " + "errno = %d\n", err); + } + + unplugged_pages_count++; + } + } + + err = 0; +out: + return err; +} + +static int mem_get_config(char *name, char *str, int size, char **error_out) +{ + char buf[sizeof("18446744073709551615")]; + int len = 0; + + sprintf(buf, "%ld", uml_physmem); + CONFIG_CHUNK(str, size, len, buf, 1); + + return len; +} + +static int mem_id(char **str, int *start_out, int *end_out) +{ + *start_out = 0; + *end_out = 0; + + return 0; +} + +static int mem_remove(int n) +{ + return -EBUSY; +} + +static struct mc_device mem_mc = { + .name = "mem", + .config = mem_config, + .get_config = mem_get_config, + .id = mem_id, + .remove = mem_remove, +}; + +static int mem_mc_init(void) +{ + if(can_drop_memory()) + mconsole_register_dev(&mem_mc); + else printk("Can't release memory to the host - memory hotplug won't " + "be supported\n"); + return 0; +} + +__initcall(mem_mc_init); + #define CONFIG_BUF_SIZE 64 static void mconsole_get_config(int (*get_config)(char *, char *, int, @@ -478,7 +616,7 @@ static void console_write(struct console *console, const char *string, return; while(1){ - n = min(len, ARRAY_SIZE(console_buf) - console_index); + n = min((size_t)len, ARRAY_SIZE(console_buf) - console_index); strncpy(&console_buf[console_index], string, n); console_index += n; string += n; diff --git a/trunk/arch/um/drivers/pcap_kern.c b/trunk/arch/um/drivers/pcap_kern.c index 07c80f2156ef..466ff2c2f918 100644 --- a/trunk/arch/um/drivers/pcap_kern.c +++ b/trunk/arch/um/drivers/pcap_kern.c @@ -106,18 +106,7 @@ static struct transport pcap_transport = { static int register_pcap(void) { register_transport(&pcap_transport); - return(1); + return 0; } __initcall(register_pcap); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. - * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/drivers/slip_kern.c b/trunk/arch/um/drivers/slip_kern.c index a62f5ef445cf..163ee0d5f75e 100644 --- a/trunk/arch/um/drivers/slip_kern.c +++ b/trunk/arch/um/drivers/slip_kern.c @@ -93,18 +93,7 @@ static struct transport slip_transport = { static int register_slip(void) { register_transport(&slip_transport); - return(1); + return 0; } __initcall(register_slip); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. 
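The slirp_init, LINE_INIT and LINES_INIT hunks that follow replace the old
gcc-only "field: value" initializer spelling with standard C99 designated
initializers. A minimal illustration with a made-up struct, not one of the
UML types:

    struct example {
        const char *init_str;
        int init_pri;
        int valid;
        void *driver;
    };

    /* Old GNU extension, as removed below:  { init_str : "x", valid : 1 }   */
    /* C99 designated initializers, as added below; members left out are     */
    /* implicitly zeroed, which is why LINE_INIT can drop its NULL/0 fields. */
    struct example e = {
        .init_str = "x",
        .valid    = 1,
    };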
- * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/drivers/slirp_kern.c b/trunk/arch/um/drivers/slirp_kern.c index 33d7982be5d3..95e50c943e14 100644 --- a/trunk/arch/um/drivers/slirp_kern.c +++ b/trunk/arch/um/drivers/slirp_kern.c @@ -77,7 +77,7 @@ static int slirp_setup(char *str, char **mac_out, void *data) int i=0; *init = ((struct slirp_init) - { argw : { { "slirp", NULL } } }); + { .argw = { { "slirp", NULL } } }); str = split_if_spec(str, mac_out, NULL); @@ -116,18 +116,7 @@ static struct transport slirp_transport = { static int register_slirp(void) { register_transport(&slirp_transport); - return(1); + return 0; } __initcall(register_slirp); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. - * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/drivers/ubd_kern.c b/trunk/arch/um/drivers/ubd_kern.c index 0336575d2448..0897852b09a3 100644 --- a/trunk/arch/um/drivers/ubd_kern.c +++ b/trunk/arch/um/drivers/ubd_kern.c @@ -891,7 +891,7 @@ int ubd_driver_init(void){ SA_INTERRUPT, "ubd", ubd_dev); if(err != 0) printk(KERN_ERR "um_request_irq failed - errno = %d\n", -err); - return(err); + return 0; } device_initcall(ubd_driver_init); diff --git a/trunk/arch/um/include/kern_util.h b/trunk/arch/um/include/kern_util.h index 07176d92e1c9..42557130a408 100644 --- a/trunk/arch/um/include/kern_util.h +++ b/trunk/arch/um/include/kern_util.h @@ -116,7 +116,11 @@ extern void *get_current(void); extern struct task_struct *get_task(int pid, int require); extern void machine_halt(void); extern int is_syscall(unsigned long addr); -extern void arch_switch(void); + +extern void arch_switch_to_tt(struct task_struct *from, struct task_struct *to); + +extern void arch_switch_to_skas(struct task_struct *from, struct task_struct *to); + extern void free_irq(unsigned int, void *); extern int cpu(void); diff --git a/trunk/arch/um/include/line.h b/trunk/arch/um/include/line.h index 6f4d680dc1d4..6ac0f8252e21 100644 --- a/trunk/arch/um/include/line.h +++ b/trunk/arch/um/include/line.h @@ -58,23 +58,17 @@ struct line { }; #define LINE_INIT(str, d) \ - { init_str : str, \ - init_pri : INIT_STATIC, \ - valid : 1, \ - throttled : 0, \ - lock : SPIN_LOCK_UNLOCKED, \ - buffer : NULL, \ - head : NULL, \ - tail : NULL, \ - sigio : 0, \ - driver : d, \ - have_irq : 0 } + { .init_str = str, \ + .init_pri = INIT_STATIC, \ + .valid = 1, \ + .lock = SPIN_LOCK_UNLOCKED, \ + .driver = d } struct lines { int num; }; -#define LINES_INIT(n) { num : n } +#define LINES_INIT(n) { .num = n } extern void line_close(struct tty_struct *tty, struct file * filp); extern int line_open(struct line *lines, struct tty_struct *tty); diff --git a/trunk/arch/um/include/mem_user.h b/trunk/arch/um/include/mem_user.h index a1064c5823bf..a54514d2cc3a 100644 --- a/trunk/arch/um/include/mem_user.h +++ b/trunk/arch/um/include/mem_user.h @@ -49,7 +49,6 @@ extern int iomem_size; extern unsigned long host_task_size; extern unsigned long task_size; -extern void check_devanon(void); extern int init_mem_user(void); extern void setup_memory(void *entry); extern unsigned long find_iomem(char *driver, unsigned long *len_out); diff --git 
a/trunk/arch/um/include/os.h b/trunk/arch/um/include/os.h index d3d1bc6074ef..f88856c28a66 100644 --- a/trunk/arch/um/include/os.h +++ b/trunk/arch/um/include/os.h @@ -13,6 +13,7 @@ #include "kern_util.h" #include "skas/mm_id.h" #include "irq_user.h" +#include "sysdep/tls.h" #define OS_TYPE_FILE 1 #define OS_TYPE_DIR 2 @@ -172,6 +173,7 @@ extern int os_fchange_dir(int fd); extern void os_early_checks(void); extern int can_do_skas(void); extern void os_check_bugs(void); +extern void check_host_supports_tls(int *supports_tls, int *tls_min); /* Make sure they are clear when running in TT mode. Required by * SEGV_MAYBE_FIXABLE */ @@ -205,6 +207,8 @@ extern int os_map_memory(void *virt, int fd, unsigned long long off, extern int os_protect_memory(void *addr, unsigned long len, int r, int w, int x); extern int os_unmap_memory(void *addr, int len); +extern int os_drop_memory(void *addr, int length); +extern int can_drop_memory(void); extern void os_flush_stdout(void); /* tt.c @@ -234,8 +238,12 @@ extern int run_helper_thread(int (*proc)(void *), void *arg, int stack_order); extern int helper_wait(int pid); -/* umid.c */ +/* tls.c */ +extern int os_set_thread_area(user_desc_t *info, int pid); +extern int os_get_thread_area(user_desc_t *info, int pid); + +/* umid.c */ extern int umid_file_name(char *name, char *buf, int len); extern int set_umid(char *name); extern char *get_umid(void); diff --git a/trunk/arch/um/include/sysdep-i386/checksum.h b/trunk/arch/um/include/sysdep-i386/checksum.h index 7d3d202d7fff..052bb061a978 100644 --- a/trunk/arch/um/include/sysdep-i386/checksum.h +++ b/trunk/arch/um/include/sysdep-i386/checksum.h @@ -48,7 +48,8 @@ unsigned int csum_partial_copy_nocheck(const unsigned char *src, unsigned char * */ static __inline__ -unsigned int csum_partial_copy_from_user(const unsigned char *src, unsigned char *dst, +unsigned int csum_partial_copy_from_user(const unsigned char __user *src, + unsigned char *dst, int len, int sum, int *err_ptr) { if(copy_from_user(dst, src, len)){ @@ -192,7 +193,7 @@ static __inline__ unsigned short int csum_ipv6_magic(struct in6_addr *saddr, */ #define HAVE_CSUM_COPY_USER static __inline__ unsigned int csum_and_copy_to_user(const unsigned char *src, - unsigned char *dst, + unsigned char __user *dst, int len, int sum, int *err_ptr) { if (access_ok(VERIFY_WRITE, dst, len)){ diff --git a/trunk/arch/um/include/sysdep-i386/ptrace.h b/trunk/arch/um/include/sysdep-i386/ptrace.h index c8ee9559f3ab..6670cc992ecb 100644 --- a/trunk/arch/um/include/sysdep-i386/ptrace.h +++ b/trunk/arch/um/include/sysdep-i386/ptrace.h @@ -14,7 +14,12 @@ #define MAX_REG_NR (UM_FRAME_SIZE / sizeof(unsigned long)) #define MAX_REG_OFFSET (UM_FRAME_SIZE) +#ifdef UML_CONFIG_PT_PROXY extern void update_debugregs(int seq); +#else +static inline void update_debugregs(int seq) {} +#endif + /* syscall emulation path in ptrace */ diff --git a/trunk/arch/um/include/sysdep-i386/tls.h b/trunk/arch/um/include/sysdep-i386/tls.h new file mode 100644 index 000000000000..918fd3c5ff9c --- /dev/null +++ b/trunk/arch/um/include/sysdep-i386/tls.h @@ -0,0 +1,32 @@ +#ifndef _SYSDEP_TLS_H +#define _SYSDEP_TLS_H + +# ifndef __KERNEL__ + +/* Change name to avoid conflicts with the original one from , which + * may be named user_desc (but in 2.4 and in header matching its API was named + * modify_ldt_ldt_s). 
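The user_desc_t defined just below mirrors the host's set_thread_area() and
modify_ldt() descriptor, i.e. one GDT or LDT slot described field by field.
As an annotated and purely illustrative instance (the comments are editorial
notes, not part of the header):

    user_desc_t d = {
        .entry_number    = 6,   /* which GDT slot; 6 == GDT_ENTRY_TLS_MIN_I386 */
        .base_addr       = 0,   /* linear base address of the segment          */
        .limit           = 0,   /* segment size, in bytes or ...               */
        .limit_in_pages  = 0,   /* ... in 4K pages when this bit is set        */
        .seg_32bit       = 1,   /* 32-bit default operand/address size         */
        .contents        = 0,   /* 0 data, 1 expand-down (stack), 2 code       */
        .read_exec_only  = 0,
        .seg_not_present = 0,
        .useable         = 1,   /* the AVL bit, free for OS/libc bookkeeping   */
    };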
*/ + +typedef struct um_dup_user_desc { + unsigned int entry_number; + unsigned int base_addr; + unsigned int limit; + unsigned int seg_32bit:1; + unsigned int contents:2; + unsigned int read_exec_only:1; + unsigned int limit_in_pages:1; + unsigned int seg_not_present:1; + unsigned int useable:1; +} user_desc_t; + +# else /* __KERNEL__ */ + +# include +typedef struct user_desc user_desc_t; + +# endif /* __KERNEL__ */ + +#define GDT_ENTRY_TLS_MIN_I386 6 +#define GDT_ENTRY_TLS_MIN_X86_64 12 + +#endif /* _SYSDEP_TLS_H */ diff --git a/trunk/arch/um/include/sysdep-x86_64/tls.h b/trunk/arch/um/include/sysdep-x86_64/tls.h new file mode 100644 index 000000000000..35f19f25bd3b --- /dev/null +++ b/trunk/arch/um/include/sysdep-x86_64/tls.h @@ -0,0 +1,29 @@ +#ifndef _SYSDEP_TLS_H +#define _SYSDEP_TLS_H + +# ifndef __KERNEL__ + +/* Change name to avoid conflicts with the original one from , which + * may be named user_desc (but in 2.4 and in header matching its API was named + * modify_ldt_ldt_s). */ + +typedef struct um_dup_user_desc { + unsigned int entry_number; + unsigned int base_addr; + unsigned int limit; + unsigned int seg_32bit:1; + unsigned int contents:2; + unsigned int read_exec_only:1; + unsigned int limit_in_pages:1; + unsigned int seg_not_present:1; + unsigned int useable:1; + unsigned int lm:1; +} user_desc_t; + +# else /* __KERNEL__ */ + +# include +typedef struct user_desc user_desc_t; + +# endif /* __KERNEL__ */ +#endif /* _SYSDEP_TLS_H */ diff --git a/trunk/arch/um/include/user_util.h b/trunk/arch/um/include/user_util.h index 992a7e1e0fca..fe0c29b5144d 100644 --- a/trunk/arch/um/include/user_util.h +++ b/trunk/arch/um/include/user_util.h @@ -8,6 +8,9 @@ #include "sysdep/ptrace.h" +/* Copied from kernel.h */ +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) + #define CATCH_EINTR(expr) while ((errno = 0, ((expr) < 0)) && (errno == EINTR)) extern int mode_tt; @@ -31,7 +34,7 @@ extern unsigned long uml_physmem; extern unsigned long uml_reserved; extern unsigned long end_vm; extern unsigned long start_vm; -extern unsigned long highmem; +extern unsigned long long highmem; extern char host_info[]; diff --git a/trunk/arch/um/kernel/exec_kern.c b/trunk/arch/um/kernel/exec_kern.c index 1ca84319317d..c0cb627bf594 100644 --- a/trunk/arch/um/kernel/exec_kern.c +++ b/trunk/arch/um/kernel/exec_kern.c @@ -22,6 +22,7 @@ void flush_thread(void) { + arch_flush_thread(¤t->thread.arch); CHOOSE_MODE(flush_thread_tt(), flush_thread_skas()); } @@ -58,14 +59,14 @@ long um_execve(char *file, char __user *__user *argv, char __user *__user *env) return(err); } -long sys_execve(char *file, char __user *__user *argv, +long sys_execve(char __user *file, char __user *__user *argv, char __user *__user *env) { long error; char *filename; lock_kernel(); - filename = getname((char __user *) file); + filename = getname(file); error = PTR_ERR(filename); if (IS_ERR(filename)) goto out; error = execve1(filename, argv, env); @@ -74,14 +75,3 @@ long sys_execve(char *file, char __user *__user *argv, unlock_kernel(); return(error); } - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. 
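user_util.h above gains its own copy of ARRAY_SIZE() because the
userspace-built UML objects cannot pull it in from linux/kernel.h. The macro
is only meaningful on a true array; a quick reminder of the behaviour and its
classic pitfall:

    #include <stdio.h>

    #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

    int main(void)
    {
        int vals[] = { 1, 2, 3, 4 };
        int *p = vals;

        printf("%zu\n", ARRAY_SIZE(vals));  /* 4, as intended                */
        printf("%zu\n", ARRAY_SIZE(p));     /* sizeof(int *) / sizeof(int):  */
                                            /* silently wrong on a pointer   */
        return 0;
    }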
- * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/kernel/mem.c b/trunk/arch/um/kernel/mem.c index 92cce96b5e24..44e41a35f000 100644 --- a/trunk/arch/um/kernel/mem.c +++ b/trunk/arch/um/kernel/mem.c @@ -30,7 +30,7 @@ extern char __binary_start; unsigned long *empty_zero_page = NULL; unsigned long *empty_bad_page = NULL; pgd_t swapper_pg_dir[PTRS_PER_PGD]; -unsigned long highmem; +unsigned long long highmem; int kmalloc_ok = 0; static unsigned long brk_end; diff --git a/trunk/arch/um/kernel/process_kern.c b/trunk/arch/um/kernel/process_kern.c index 3113cab8675e..f6a5a502120b 100644 --- a/trunk/arch/um/kernel/process_kern.c +++ b/trunk/arch/um/kernel/process_kern.c @@ -156,9 +156,25 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long sp, unsigned long stack_top, struct task_struct * p, struct pt_regs *regs) { + int ret; + p->thread = (struct thread_struct) INIT_THREAD; - return(CHOOSE_MODE_PROC(copy_thread_tt, copy_thread_skas, nr, - clone_flags, sp, stack_top, p, regs)); + ret = CHOOSE_MODE_PROC(copy_thread_tt, copy_thread_skas, nr, + clone_flags, sp, stack_top, p, regs); + + if (ret || !current->thread.forking) + goto out; + + clear_flushed_tls(p); + + /* + * Set a new TLS for the child thread? + */ + if (clone_flags & CLONE_SETTLS) + ret = arch_copy_tls(p); + +out: + return ret; } void initial_thread_cb(void (*proc)(void *), void *arg) @@ -185,10 +201,6 @@ void default_idle(void) { CHOOSE_MODE(uml_idle_timer(), (void) 0); - atomic_inc(&init_mm.mm_count); - current->mm = &init_mm; - current->active_mm = &init_mm; - while(1){ /* endless idle loop with no priority at all */ @@ -407,7 +419,7 @@ static int proc_read_sysemu(char *buf, char **start, off_t offset, int size,int return strlen(buf); } -static int proc_write_sysemu(struct file *file,const char *buf, unsigned long count,void *data) +static int proc_write_sysemu(struct file *file,const char __user *buf, unsigned long count,void *data) { char tmp[2]; diff --git a/trunk/arch/um/kernel/ptrace.c b/trunk/arch/um/kernel/ptrace.c index 98e09395c093..60d2eda995c1 100644 --- a/trunk/arch/um/kernel/ptrace.c +++ b/trunk/arch/um/kernel/ptrace.c @@ -46,6 +46,7 @@ extern int poke_user(struct task_struct * child, long addr, long data); long arch_ptrace(struct task_struct *child, long request, long addr, long data) { int i, ret; + unsigned long __user *p = (void __user *)(unsigned long)data; switch (request) { /* when I and D space are separate, these will need to be fixed. */ @@ -58,7 +59,7 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0); if (copied != sizeof(tmp)) break; - ret = put_user(tmp, (unsigned long __user *) data); + ret = put_user(tmp, p); break; } @@ -136,15 +137,13 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) #ifdef PTRACE_GETREGS case PTRACE_GETREGS: { /* Get all gp regs from the child. 
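The PTRACE_GETREGS/PTRACE_SETREGS rework continuing below keeps the usual
kernel idiom for bulk userspace access: validate the whole range once with
access_ok(), then use the unchecked __put_user()/__get_user() inside the
loop. A stripped-down sketch of that idiom (hypothetical helper, written
against the 2.6-era access_ok() signature):

    /* Copy n longs to userspace; -EFAULT if the destination is not writable. */
    static int copy_longs_to_user(unsigned long __user *dst,
                                  const unsigned long *src, int n)
    {
        int i;

        if (!access_ok(VERIFY_WRITE, dst, n * sizeof(*dst)))
            return -EFAULT;

        for (i = 0; i < n; i++)
            __put_user(src[i], dst + i);    /* range checked above */

        return 0;
    }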
*/ - if (!access_ok(VERIFY_WRITE, (unsigned long *)data, - MAX_REG_OFFSET)) { + if (!access_ok(VERIFY_WRITE, p, MAX_REG_OFFSET)) { ret = -EIO; break; } for ( i = 0; i < MAX_REG_OFFSET; i += sizeof(long) ) { - __put_user(getreg(child, i), - (unsigned long __user *) data); - data += sizeof(long); + __put_user(getreg(child, i), p); + p++; } ret = 0; break; @@ -153,15 +152,14 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) #ifdef PTRACE_SETREGS case PTRACE_SETREGS: { /* Set all gp regs in the child. */ unsigned long tmp = 0; - if (!access_ok(VERIFY_READ, (unsigned *)data, - MAX_REG_OFFSET)) { + if (!access_ok(VERIFY_READ, p, MAX_REG_OFFSET)) { ret = -EIO; break; } for ( i = 0; i < MAX_REG_OFFSET; i += sizeof(long) ) { - __get_user(tmp, (unsigned long __user *) data); + __get_user(tmp, p); putreg(child, i, tmp); - data += sizeof(long); + p++; } ret = 0; break; @@ -187,14 +185,23 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) ret = set_fpxregs(data, child); break; #endif + case PTRACE_GET_THREAD_AREA: + ret = ptrace_get_thread_area(child, addr, + (struct user_desc __user *) data); + break; + + case PTRACE_SET_THREAD_AREA: + ret = ptrace_set_thread_area(child, addr, + (struct user_desc __user *) data); + break; + case PTRACE_FAULTINFO: { - /* Take the info from thread->arch->faultinfo, - * but transfer max. sizeof(struct ptrace_faultinfo). - * On i386, ptrace_faultinfo is smaller! - */ - ret = copy_to_user((unsigned long __user *) data, - &child->thread.arch.faultinfo, - sizeof(struct ptrace_faultinfo)); + /* Take the info from thread->arch->faultinfo, + * but transfer max. sizeof(struct ptrace_faultinfo). + * On i386, ptrace_faultinfo is smaller! + */ + ret = copy_to_user(p, &child->thread.arch.faultinfo, + sizeof(struct ptrace_faultinfo)); if(ret) break; break; @@ -204,8 +211,7 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data) case PTRACE_LDT: { struct ptrace_ldt ldt; - if(copy_from_user(&ldt, (unsigned long __user *) data, - sizeof(ldt))){ + if(copy_from_user(&ldt, p, sizeof(ldt))){ ret = -EIO; break; } diff --git a/trunk/arch/um/kernel/skas/process_kern.c b/trunk/arch/um/kernel/skas/process_kern.c index 3f70a2e12f06..2135eaf98a93 100644 --- a/trunk/arch/um/kernel/skas/process_kern.c +++ b/trunk/arch/um/kernel/skas/process_kern.c @@ -35,6 +35,8 @@ void switch_to_skas(void *prev, void *next) switch_threads(&from->thread.mode.skas.switch_buf, to->thread.mode.skas.switch_buf); + arch_switch_to_skas(current->thread.prev_sched, current); + if(current->pid == 0) switch_timers(1); } @@ -89,10 +91,17 @@ void fork_handler(int sig) panic("blech"); schedule_tail(current->thread.prev_sched); + + /* XXX: if interrupt_end() calls schedule, this call to + * arch_switch_to_skas isn't needed. We could want to apply this to + * improve performance. 
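With the PTRACE_GET_THREAD_AREA and PTRACE_SET_THREAD_AREA cases added above,
a debugger can inspect a traced child's TLS slots on UML just as on native
i386: addr selects the GDT TLS slot, data points at a struct user_desc. A
hedged userspace sketch of the read side (note that older libc headers may
still spell the struct modify_ldt_ldt_s, as the new sysdep tls.h comments
warn):

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <asm/ldt.h>        /* struct user_desc */
    #include <stdio.h>

    #ifndef PTRACE_GET_THREAD_AREA
    #define PTRACE_GET_THREAD_AREA 25
    #endif

    /* Dump one TLS slot of a child that is already traced and stopped. */
    static void dump_tls_slot(pid_t pid, int idx)
    {
        struct user_desc desc;

        if (ptrace(PTRACE_GET_THREAD_AREA, pid, idx, &desc) == 0)
            printf("slot %d: base=0x%x limit=0x%x present=%d\n", idx,
                   desc.base_addr, desc.limit, !desc.seg_not_present);
        else
            perror("PTRACE_GET_THREAD_AREA");
    }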
-bb */ + arch_switch_to_skas(current->thread.prev_sched, current); + current->thread.prev_sched = NULL; /* Handle any immediate reschedules or signals */ interrupt_end(); + userspace(¤t->thread.regs.regs); } @@ -109,6 +118,8 @@ int copy_thread_skas(int nr, unsigned long clone_flags, unsigned long sp, if(sp != 0) REGS_SP(p->thread.regs.regs.skas.regs) = sp; handler = fork_handler; + + arch_copy_thread(¤t->thread.arch, &p->thread.arch); } else { init_thread_registers(&p->thread.regs.regs); diff --git a/trunk/arch/um/kernel/syscall_kern.c b/trunk/arch/um/kernel/syscall_kern.c index 8e1a3501ff46..37d3978337d8 100644 --- a/trunk/arch/um/kernel/syscall_kern.c +++ b/trunk/arch/um/kernel/syscall_kern.c @@ -104,7 +104,7 @@ long sys_pipe(unsigned long __user * fildes) } -long sys_uname(struct old_utsname * name) +long sys_uname(struct old_utsname __user * name) { long err; if (!name) @@ -115,7 +115,7 @@ long sys_uname(struct old_utsname * name) return err?-EFAULT:0; } -long sys_olduname(struct oldold_utsname * name) +long sys_olduname(struct oldold_utsname __user * name) { long error; diff --git a/trunk/arch/um/kernel/trap_kern.c b/trunk/arch/um/kernel/trap_kern.c index d56046c2aba2..02f6d4d8dc3a 100644 --- a/trunk/arch/um/kernel/trap_kern.c +++ b/trunk/arch/um/kernel/trap_kern.c @@ -198,7 +198,7 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user, void *sc) si.si_signo = SIGBUS; si.si_errno = 0; si.si_code = BUS_ADRERR; - si.si_addr = (void *)address; + si.si_addr = (void __user *)address; current->thread.arch.faultinfo = fi; force_sig_info(SIGBUS, &si, current); } else if (err == -ENOMEM) { @@ -207,7 +207,7 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user, void *sc) } else { BUG_ON(err != -EFAULT); si.si_signo = SIGSEGV; - si.si_addr = (void *) address; + si.si_addr = (void __user *) address; current->thread.arch.faultinfo = fi; force_sig_info(SIGSEGV, &si, current); } @@ -220,8 +220,8 @@ void bad_segv(struct faultinfo fi, unsigned long ip) si.si_signo = SIGSEGV; si.si_code = SEGV_ACCERR; - si.si_addr = (void *) FAULT_ADDRESS(fi); - current->thread.arch.faultinfo = fi; + si.si_addr = (void __user *) FAULT_ADDRESS(fi); + current->thread.arch.faultinfo = fi; force_sig_info(SIGSEGV, &si, current); } diff --git a/trunk/arch/um/kernel/tt/process_kern.c b/trunk/arch/um/kernel/tt/process_kern.c index 295c1ac817b3..a9c1443fc548 100644 --- a/trunk/arch/um/kernel/tt/process_kern.c +++ b/trunk/arch/um/kernel/tt/process_kern.c @@ -51,6 +51,13 @@ void switch_to_tt(void *prev, void *next) c = 0; + /* Notice that here we "up" the semaphore on which "to" is waiting, and + * below (the read) we wait on this semaphore (which is implemented by + * switch_pipe) and go sleeping. Thus, after that, we have resumed in + * "to", and can't use any more the value of "from" (which is outdated), + * nor the value in "to" (since it was the task which stole us the CPU, + * which we don't care about). 
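The comment added to switch_to_tt() above (it closes just below) describes
the switch_pipe as a binary semaphore: writing a byte wakes the task being
switched to, and the subsequent read blocks the current task until someone
switches back to it. A self-contained userspace illustration of that handoff
idea only, not the UML code itself:

    #include <unistd.h>

    /* "up" on the peer's semaphore: wake whoever blocks reading its pipe. */
    static void sem_up(int wr_fd)
    {
        char c = 0;

        (void) write(wr_fd, &c, sizeof(c)); /* the real code panics on error */
    }

    /* "down" on our own semaphore: sleep until a peer writes to our pipe. */
    static void sem_down(int rd_fd)
    {
        char c;

        (void) read(rd_fd, &c, sizeof(c));
    }

    /* switch_to_tt() is essentially:
     *     sem_up(to->switch_pipe[1]);      wake the next task
     *     sem_down(from->switch_pipe[0]);  sleep; execution resumes here only
     *                                      on a future switch back to "from"
     */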
*/ + err = os_write_file(to->thread.mode.tt.switch_pipe[1], &c, sizeof(c)); if(err != sizeof(c)) panic("write of switch_pipe failed, err = %d", -err); @@ -77,7 +84,7 @@ void switch_to_tt(void *prev, void *next) change_sig(SIGALRM, alrm); change_sig(SIGPROF, prof); - arch_switch(); + arch_switch_to_tt(prev_sched, current); flush_tlb_all(); local_irq_restore(flags); @@ -141,7 +148,6 @@ static void new_thread_handler(int sig) set_cmdline("(kernel thread)"); change_sig(SIGUSR1, 1); - change_sig(SIGVTALRM, 1); change_sig(SIGPROF, 1); local_irq_enable(); if(!run_kernel_thread(fn, arg, ¤t->thread.exec_buf)) diff --git a/trunk/arch/um/os-Linux/Makefile b/trunk/arch/um/os-Linux/Makefile index 1659386b42bb..f4bfc4c7ccac 100644 --- a/trunk/arch/um/os-Linux/Makefile +++ b/trunk/arch/um/os-Linux/Makefile @@ -4,7 +4,7 @@ # obj-y = aio.o elf_aux.o file.o helper.o irq.o main.o mem.o process.o sigio.o \ - signal.o start_up.o time.o trap.o tt.o tty.o uaccess.o umid.o \ + signal.o start_up.o time.o trap.o tt.o tty.o uaccess.o umid.o tls.o \ user_syms.o util.o drivers/ sys-$(SUBARCH)/ obj-$(CONFIG_MODE_SKAS) += skas/ @@ -12,12 +12,9 @@ obj-$(CONFIG_TTY_LOG) += tty_log.o user-objs-$(CONFIG_TTY_LOG) += tty_log.o USER_OBJS := $(user-objs-y) aio.o elf_aux.o file.o helper.o irq.o main.o mem.o \ - process.o sigio.o signal.o start_up.o time.o trap.o tt.o tty.o \ + process.o sigio.o signal.o start_up.o time.o trap.o tt.o tty.o tls.o \ uaccess.o umid.o util.o -elf_aux.o: $(ARCH_DIR)/kernel-offsets.h -CFLAGS_elf_aux.o += -I$(objtree)/arch/um - CFLAGS_user_syms.o += -DSUBARCH_$(SUBARCH) HAVE_AIO_ABI := $(shell [ -r /usr/include/linux/aio_abi.h ] && \ diff --git a/trunk/arch/um/os-Linux/drivers/ethertap_kern.c b/trunk/arch/um/os-Linux/drivers/ethertap_kern.c index 6ae4b19d9f50..768606bec233 100644 --- a/trunk/arch/um/os-Linux/drivers/ethertap_kern.c +++ b/trunk/arch/um/os-Linux/drivers/ethertap_kern.c @@ -102,18 +102,7 @@ static struct transport ethertap_transport = { static int register_ethertap(void) { register_transport(ðertap_transport); - return(1); + return 0; } __initcall(register_ethertap); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. - * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/os-Linux/drivers/tuntap_kern.c b/trunk/arch/um/os-Linux/drivers/tuntap_kern.c index 4202b9ebad4c..190009a6f89c 100644 --- a/trunk/arch/um/os-Linux/drivers/tuntap_kern.c +++ b/trunk/arch/um/os-Linux/drivers/tuntap_kern.c @@ -87,18 +87,7 @@ static struct transport tuntap_transport = { static int register_tuntap(void) { register_transport(&tuntap_transport); - return(1); + return 0; } __initcall(register_tuntap); - -/* - * Overrides for Emacs so that we follow Linus's tabbing style. - * Emacs will notice this stuff at the end of the file and automatically - * adjust the settings for this buffer only. This must remain at the end - * of the file. 
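A small but repeated fix in the network transport drivers above: the
register_*() initcalls now return 0 rather than 1. By convention an
__initcall() returns 0 on success and a negative errno on failure; a nonzero
value is treated as that initcall having failed (and later kernels complain
about it), which these registrations never do. The expected shape, using
hypothetical names modelled on the drivers above:

    static int register_example(void)
    {
        register_transport(&example_transport); /* registration cannot fail */
        return 0;                               /* 0 == initcall success    */
    }

    __initcall(register_example);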
- * --------------------------------------------------------------------------- - * Local variables: - * c-file-style: "linux" - * End: - */ diff --git a/trunk/arch/um/os-Linux/mem.c b/trunk/arch/um/os-Linux/mem.c index 9d7d69a523bb..6ab372da9657 100644 --- a/trunk/arch/um/os-Linux/mem.c +++ b/trunk/arch/um/os-Linux/mem.c @@ -121,36 +121,11 @@ int create_tmp_file(unsigned long long len) return(fd); } -static int create_anon_file(unsigned long long len) -{ - void *addr; - int fd; - - fd = open("/dev/anon", O_RDWR); - if(fd < 0) { - perror("opening /dev/anon"); - exit(1); - } - - addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); - if(addr == MAP_FAILED){ - perror("mapping physmem file"); - exit(1); - } - munmap(addr, len); - - return(fd); -} - -extern int have_devanon; - int create_mem_file(unsigned long long len) { int err, fd; - if(have_devanon) - fd = create_anon_file(len); - else fd = create_tmp_file(len); + fd = create_tmp_file(len); err = os_set_exec_close(fd, 1); if(err < 0){ diff --git a/trunk/arch/um/os-Linux/process.c b/trunk/arch/um/os-Linux/process.c index d261888f39c4..8176b0b52047 100644 --- a/trunk/arch/um/os-Linux/process.c +++ b/trunk/arch/um/os-Linux/process.c @@ -11,6 +11,7 @@ #include #include #include +#include #include "ptrace_user.h" #include "os.h" #include "user.h" @@ -20,6 +21,7 @@ #include "kern_util.h" #include "longjmp.h" #include "skas_ptrace.h" +#include "kern_constants.h" #define ARBITRARY_ADDR -1 #define FAILURE_PID -1 @@ -187,6 +189,48 @@ int os_unmap_memory(void *addr, int len) return(0); } +#ifndef MADV_REMOVE +#define MADV_REMOVE 0x5 /* remove these pages & resources */ +#endif + +int os_drop_memory(void *addr, int length) +{ + int err; + + err = madvise(addr, length, MADV_REMOVE); + if(err < 0) + err = -errno; + return err; +} + +int can_drop_memory(void) +{ + void *addr; + int fd; + + printk("Checking host MADV_REMOVE support..."); + fd = create_mem_file(UM_KERN_PAGE_SIZE); + if(fd < 0){ + printk("Creating test memory file failed, err = %d\n", -fd); + return 0; + } + + addr = mmap64(NULL, UM_KERN_PAGE_SIZE, PROT_READ | PROT_WRITE, + MAP_PRIVATE, fd, 0); + if(addr == MAP_FAILED){ + printk("Mapping test memory file failed, err = %d\n", -errno); + return 0; + } + + if(madvise(addr, UM_KERN_PAGE_SIZE, MADV_REMOVE) != 0){ + printk("MADV_REMOVE failed, err = %d\n", -errno); + return 0; + } + + printk("OK\n"); + return 1; +} + void init_new_thread_stack(void *sig_stack, void (*usr1_handler)(int)) { int flags = 0, pages; diff --git a/trunk/arch/um/os-Linux/start_up.c b/trunk/arch/um/os-Linux/start_up.c index 32753131f8d8..387e26af301a 100644 --- a/trunk/arch/um/os-Linux/start_up.c +++ b/trunk/arch/um/os-Linux/start_up.c @@ -470,25 +470,6 @@ int can_do_skas(void) } #endif -int have_devanon = 0; - -/* Runs on boot kernel stack - already safe to use printk. 
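os_drop_memory() and can_drop_memory() above are what replaces the old
/dev/anon scheme, whose last remnants are being deleted around this point:
UML now gives memory back to the host by punching holes in its tmpfs-backed
physmem file with madvise(MADV_REMOVE), after probing once at boot that the
host actually honours it. A hedged, standalone version of that probe
(MADV_REMOVE only works on shmem/tmpfs-backed mappings, and its numeric value
has moved over time, so treat the fallback define as illustrative):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #ifndef MADV_REMOVE
    #define MADV_REMOVE 9   /* later asm-generic value; the hunk above
                             * hard-codes the value in use at the time (0x5) */
    #endif

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        void *addr;

        /* MAP_SHARED|MAP_ANONYMOUS is shmem-backed, like UML's physmem file. */
        addr = mmap(NULL, page, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (addr == MAP_FAILED)
            return 1;

        if (madvise(addr, page, MADV_REMOVE) == 0)
            puts("host can reclaim unplugged pages (MADV_REMOVE ok)");
        else
            perror("MADV_REMOVE");  /* EINVAL on hosts without support */
        return 0;
    }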
*/ - -void check_devanon(void) -{ - int fd; - - printk("Checking for /dev/anon on the host..."); - fd = open("/dev/anon", O_RDWR); - if(fd < 0){ - printk("Not available (open failed with errno %d)\n", errno); - return; - } - - printk("OK\n"); - have_devanon = 1; -} - int __init parse_iomem(char *str, int *add) { struct iomem_region *new; @@ -664,6 +645,5 @@ void os_check_bugs(void) { check_ptrace(); check_sigio(); - check_devanon(); } diff --git a/trunk/arch/um/os-Linux/sys-i386/Makefile b/trunk/arch/um/os-Linux/sys-i386/Makefile index 340ef26f5944..b3213613c41c 100644 --- a/trunk/arch/um/os-Linux/sys-i386/Makefile +++ b/trunk/arch/um/os-Linux/sys-i386/Makefile @@ -3,7 +3,7 @@ # Licensed under the GPL # -obj-$(CONFIG_MODE_SKAS) = registers.o +obj-$(CONFIG_MODE_SKAS) = registers.o tls.o USER_OBJS := $(obj-y) diff --git a/trunk/arch/um/os-Linux/sys-i386/tls.c b/trunk/arch/um/os-Linux/sys-i386/tls.c new file mode 100644 index 000000000000..ba21f0e04a2f --- /dev/null +++ b/trunk/arch/um/os-Linux/sys-i386/tls.c @@ -0,0 +1,33 @@ +#include +#include "sysdep/tls.h" +#include "user_util.h" + +static _syscall1(int, get_thread_area, user_desc_t *, u_info); + +/* Checks whether host supports TLS, and sets *tls_min according to the value + * valid on the host. + * i386 host have it == 6; x86_64 host have it == 12, for i386 emulation. */ +void check_host_supports_tls(int *supports_tls, int *tls_min) { + /* Values for x86 and x86_64.*/ + int val[] = {GDT_ENTRY_TLS_MIN_I386, GDT_ENTRY_TLS_MIN_X86_64}; + int i; + + for (i = 0; i < ARRAY_SIZE(val); i++) { + user_desc_t info; + info.entry_number = val[i]; + + if (get_thread_area(&info) == 0) { + *tls_min = val[i]; + *supports_tls = 1; + return; + } else { + if (errno == EINVAL) + continue; + else if (errno == ENOSYS) + *supports_tls = 0; + return; + } + } + + *supports_tls = 0; +} diff --git a/trunk/arch/um/os-Linux/tls.c b/trunk/arch/um/os-Linux/tls.c new file mode 100644 index 000000000000..9cb09a45546b --- /dev/null +++ b/trunk/arch/um/os-Linux/tls.c @@ -0,0 +1,76 @@ +#include +#include +#include +#include "sysdep/tls.h" +#include "uml-config.h" + +/* TLS support - we basically rely on the host's one.*/ + +/* In TT mode, this should be called only by the tracing thread, and makes sense + * only for PTRACE_SET_THREAD_AREA. In SKAS mode, it's used normally. 
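check_host_supports_tls() above probes the host by issuing get_thread_area
against the first TLS GDT slot of each known layout (6 on i386 hosts, 12 for
32-bit processes on x86_64 hosts) and classifying the errno. The _syscall1()
line is the old kernel-header macro that expands to a raw int $0x80 stub; in
ordinary userspace the same probe would be written with syscall(2), roughly
as below (hedged sketch, assuming the user_desc_t typedef from the new sysdep
headers is in scope and that this is an i386 build, where SYS_get_thread_area
exists):

    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int get_thread_area(user_desc_t *info)
    {
        return syscall(SYS_get_thread_area, info);
    }

    /* Returns 0 if the host answers for this slot, else the errno:
     * EINVAL means "wrong slot for this host", ENOSYS "no TLS support". */
    static int probe_tls_slot(int entry)
    {
        user_desc_t info = { .entry_number = entry };

        return get_thread_area(&info) == 0 ? 0 : errno;
    }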
+ * + */ + +#ifndef PTRACE_GET_THREAD_AREA +#define PTRACE_GET_THREAD_AREA 25 +#endif + +#ifndef PTRACE_SET_THREAD_AREA +#define PTRACE_SET_THREAD_AREA 26 +#endif + +int os_set_thread_area(user_desc_t *info, int pid) +{ + int ret; + + ret = ptrace(PTRACE_SET_THREAD_AREA, pid, info->entry_number, + (unsigned long) info); + if (ret < 0) + ret = -errno; + return ret; +} + +#ifdef UML_CONFIG_MODE_SKAS + +int os_get_thread_area(user_desc_t *info, int pid) +{ + int ret; + + ret = ptrace(PTRACE_GET_THREAD_AREA, pid, info->entry_number, + (unsigned long) info); + if (ret < 0) + ret = -errno; + return ret; +} + +#endif + +#ifdef UML_CONFIG_MODE_TT +#include "linux/unistd.h" + +static _syscall1(int, get_thread_area, user_desc_t *, u_info); +static _syscall1(int, set_thread_area, user_desc_t *, u_info); + +int do_set_thread_area_tt(user_desc_t *info) +{ + int ret; + + ret = set_thread_area(info); + if (ret < 0) { + ret = -errno; + } + return ret; +} + +int do_get_thread_area_tt(user_desc_t *info) +{ + int ret; + + ret = get_thread_area(info); + if (ret < 0) { + ret = -errno; + } + return ret; +} + +#endif /* UML_CONFIG_MODE_TT */ diff --git a/trunk/arch/um/scripts/Makefile.rules b/trunk/arch/um/scripts/Makefile.rules index 2e41cabd3d93..b696b451774c 100644 --- a/trunk/arch/um/scripts/Makefile.rules +++ b/trunk/arch/um/scripts/Makefile.rules @@ -20,25 +20,7 @@ define unprofile $(patsubst -pg,,$(patsubst -fprofile-arcs -ftest-coverage,,$(1))) endef - -# cmd_make_link checks to see if the $(foo-dir) variable starts with a /. If -# so, it's considered to be a path relative to $(srcdir) rather than -# $(srcdir)/arch/$(SUBARCH). This is because x86_64 wants to get ldt.c from -# arch/um/sys-i386 rather than arch/i386 like the other borrowed files. So, -# it sets $(ldt.c-dir) to /arch/um/sys-i386. -quiet_cmd_make_link = SYMLINK $@ -cmd_make_link = rm -f $@; ln -sf $(srctree)$(if $(filter-out /%,$($(notdir $@)-dir)),/arch/$(SUBARCH))/$($(notdir $@)-dir)/$(notdir $@) $@ - -# this needs to be before the foreach, because targets does not accept -# complete paths like $(obj)/$(f). To make sure this works, use a := assignment -# or we will get $(obj)/$(f) in the "targets" value. -# Also, this forces you to use the := syntax when assigning to targets. -# Otherwise the line below will cause an infinite loop (if you don't know why, -# just do it). - -targets := $(targets) $(SYMLINKS) - -SYMLINKS := $(foreach f,$(SYMLINKS),$(obj)/$(f)) - -$(SYMLINKS): FORCE - $(call if_changed,make_link) +ifdef subarch-obj-y +obj-y += subarch.o +subarch-y = $(addprefix ../../$(SUBARCH)/,$(subarch-obj-y)) +endif diff --git a/trunk/arch/um/scripts/Makefile.unmap b/trunk/arch/um/scripts/Makefile.unmap deleted file mode 100644 index b2165188d942..000000000000 --- a/trunk/arch/um/scripts/Makefile.unmap +++ /dev/null @@ -1,22 +0,0 @@ -clean-files += unmap_tmp.o unmap_fin.o unmap.o - -ifdef CONFIG_MODE_TT - -#Always build unmap_fin.o -extra-y += unmap_fin.o -#Do dependency tracking for unmap.o (it will be always built, but won't get the tracking unless we use this). 
-targets += unmap.o - -#XXX: partially copied from arch/um/scripts/Makefile.rules -$(obj)/unmap.o: _c_flags = $(call unprofile,$(CFLAGS)) - -quiet_cmd_wrapld = LD $@ -define cmd_wrapld - $(LD) $(LDFLAGS) -r -o $(obj)/unmap_tmp.o $< ; \ - $(OBJCOPY) $(UML_OBJCOPYFLAGS) $(obj)/unmap_tmp.o $@ -G switcheroo -endef - -$(obj)/unmap_fin.o : $(obj)/unmap.o FORCE - $(call if_changed,wrapld) - -endif diff --git a/trunk/arch/um/sys-i386/Makefile b/trunk/arch/um/sys-i386/Makefile index f5fd5b0156d0..98b20b7bba4f 100644 --- a/trunk/arch/um/sys-i386/Makefile +++ b/trunk/arch/um/sys-i386/Makefile @@ -1,23 +1,18 @@ -obj-y := bitops.o bugs.o checksum.o delay.o fault.o ksyms.o ldt.o ptrace.o \ - ptrace_user.o semaphore.o signal.o sigcontext.o syscalls.o sysrq.o \ - sys_call_table.o +obj-y = bugs.o checksum.o delay.o fault.o ksyms.o ldt.o ptrace.o \ + ptrace_user.o signal.o sigcontext.o syscalls.o sysrq.o \ + sys_call_table.o tls.o obj-$(CONFIG_MODE_SKAS) += stub.o stub_segv.o -obj-$(CONFIG_HIGHMEM) += highmem.o -obj-$(CONFIG_MODULES) += module.o +subarch-obj-y = lib/bitops.o kernel/semaphore.o +subarch-obj-$(CONFIG_HIGHMEM) += mm/highmem.o +subarch-obj-$(CONFIG_MODULES) += kernel/module.o USER_OBJS := bugs.o ptrace_user.o sigcontext.o fault.o stub_segv.o -SYMLINKS = bitops.c semaphore.c highmem.c module.c - include arch/um/scripts/Makefile.rules -bitops.c-dir = lib -semaphore.c-dir = kernel -highmem.c-dir = mm -module.c-dir = kernel - -$(obj)/stub_segv.o : _c_flags = $(call unprofile,$(CFLAGS)) +extra-$(CONFIG_MODE_TT) += unmap.o -include arch/um/scripts/Makefile.unmap +$(obj)/stub_segv.o $(obj)/unmap.o: \ + _c_flags = $(call unprofile,$(CFLAGS)) diff --git a/trunk/arch/um/sys-i386/ptrace.c b/trunk/arch/um/sys-i386/ptrace.c index 8032a105949a..6028bc7cc01b 100644 --- a/trunk/arch/um/sys-i386/ptrace.c +++ b/trunk/arch/um/sys-i386/ptrace.c @@ -15,9 +15,22 @@ #include "sysdep/sigcontext.h" #include "sysdep/sc.h" -void arch_switch(void) +void arch_switch_to_tt(struct task_struct *from, struct task_struct *to) { - update_debugregs(current->thread.arch.debugregs_seq); + update_debugregs(to->thread.arch.debugregs_seq); + arch_switch_tls_tt(from, to); +} + +void arch_switch_to_skas(struct task_struct *from, struct task_struct *to) +{ + int err = arch_switch_tls_skas(from, to); + if (!err) + return; + + if (err != -EINVAL) + printk(KERN_WARNING "arch_switch_tls_skas failed, errno %d, not EINVAL\n", -err); + else + printk(KERN_WARNING "arch_switch_tls_skas failed, errno = EINVAL\n"); } int is_syscall(unsigned long addr) @@ -124,22 +137,22 @@ unsigned long getreg(struct task_struct *child, int regno) int peek_user(struct task_struct *child, long addr, long data) { /* read the word at location addr in the USER area. 
*/ - unsigned long tmp; + unsigned long tmp; - if ((addr & 3) || addr < 0) - return -EIO; + if ((addr & 3) || addr < 0) + return -EIO; - tmp = 0; /* Default return condition */ - if(addr < MAX_REG_OFFSET){ - tmp = getreg(child, addr); - } - else if((addr >= offsetof(struct user, u_debugreg[0])) && - (addr <= offsetof(struct user, u_debugreg[7]))){ - addr -= offsetof(struct user, u_debugreg[0]); - addr = addr >> 2; - tmp = child->thread.arch.debugregs[addr]; - } - return put_user(tmp, (unsigned long *) data); + tmp = 0; /* Default return condition */ + if(addr < MAX_REG_OFFSET){ + tmp = getreg(child, addr); + } + else if((addr >= offsetof(struct user, u_debugreg[0])) && + (addr <= offsetof(struct user, u_debugreg[7]))){ + addr -= offsetof(struct user, u_debugreg[0]); + addr = addr >> 2; + tmp = child->thread.arch.debugregs[addr]; + } + return put_user(tmp, (unsigned long __user *) data); } struct i387_fxsave_struct { diff --git a/trunk/arch/um/sys-i386/ptrace_user.c b/trunk/arch/um/sys-i386/ptrace_user.c index 7c376c95de50..9f3bd8ed78f5 100644 --- a/trunk/arch/um/sys-i386/ptrace_user.c +++ b/trunk/arch/um/sys-i386/ptrace_user.c @@ -14,6 +14,7 @@ #include "sysdep/thread.h" #include "user.h" #include "os.h" +#include "uml-config.h" int ptrace_getregs(long pid, unsigned long *regs_out) { @@ -43,6 +44,7 @@ int ptrace_setfpregs(long pid, unsigned long *regs) return 0; } +/* All the below stuff is of interest for TT mode only */ static void write_debugregs(int pid, unsigned long *regs) { struct user *dummy; @@ -75,7 +77,6 @@ static void read_debugregs(int pid, unsigned long *regs) /* Accessed only by the tracing thread */ static unsigned long kernel_debugregs[8] = { [ 0 ... 7 ] = 0 }; -static int debugregs_seq = 0; void arch_enter_kernel(void *task, int pid) { @@ -89,6 +90,11 @@ void arch_leave_kernel(void *task, int pid) write_debugregs(pid, TASK_DEBUGREGS(task)); } +#ifdef UML_CONFIG_PT_PROXY +/* Accessed only by the tracing thread */ +static int debugregs_seq; + +/* Only called by the ptrace proxy */ void ptrace_pokeuser(unsigned long addr, unsigned long data) { if((addr < offsetof(struct user, u_debugreg[0])) || @@ -109,6 +115,7 @@ static void update_debugregs_cb(void *arg) write_debugregs(pid, kernel_debugregs); } +/* Optimized out in its header when not defined */ void update_debugregs(int seq) { int me; @@ -118,6 +125,7 @@ void update_debugregs(int seq) me = os_getpid(); initial_thread_cb(update_debugregs_cb, &me); } +#endif /* * Overrides for Emacs so that we follow Linus's tabbing style. diff --git a/trunk/arch/um/sys-i386/signal.c b/trunk/arch/um/sys-i386/signal.c index 33a40f5ef0d2..f5d0e1c37ea2 100644 --- a/trunk/arch/um/sys-i386/signal.c +++ b/trunk/arch/um/sys-i386/signal.c @@ -19,7 +19,7 @@ #include "skas.h" static int copy_sc_from_user_skas(struct pt_regs *regs, - struct sigcontext *from) + struct sigcontext __user *from) { struct sigcontext sc; unsigned long fpregs[HOST_FP_SIZE]; @@ -57,7 +57,7 @@ static int copy_sc_from_user_skas(struct pt_regs *regs, return(0); } -int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate *to_fp, +int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate __user *to_fp, struct pt_regs *regs, unsigned long sp) { struct sigcontext sc; @@ -92,7 +92,7 @@ int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate *to_fp, "errno = %d\n", err); return(1); } - to_fp = (to_fp ? to_fp : (struct _fpstate *) (to + 1)); + to_fp = (to_fp ? 
to_fp : (struct _fpstate __user *) (to + 1)); sc.fpstate = to_fp; if(err) @@ -113,10 +113,11 @@ int copy_sc_to_user_skas(struct sigcontext *to, struct _fpstate *to_fp, * saved pointer is in the kernel, but the sigcontext is in userspace, so we * copy_to_user it. */ -int copy_sc_from_user_tt(struct sigcontext *to, struct sigcontext *from, +int copy_sc_from_user_tt(struct sigcontext *to, struct sigcontext __user *from, int fpsize) { - struct _fpstate *to_fp, *from_fp; + struct _fpstate *to_fp; + struct _fpstate __user *from_fp; unsigned long sigs; int err; @@ -131,13 +132,14 @@ int copy_sc_from_user_tt(struct sigcontext *to, struct sigcontext *from, return(err); } -int copy_sc_to_user_tt(struct sigcontext *to, struct _fpstate *fp, +int copy_sc_to_user_tt(struct sigcontext *to, struct _fpstate __user *fp, struct sigcontext *from, int fpsize, unsigned long sp) { - struct _fpstate *to_fp, *from_fp; + struct _fpstate __user *to_fp; + struct _fpstate *from_fp; int err; - to_fp = (fp ? fp : (struct _fpstate *) (to + 1)); + to_fp = (fp ? fp : (struct _fpstate __user *) (to + 1)); from_fp = from->fpstate; err = copy_to_user(to, from, sizeof(*to)); @@ -165,7 +167,7 @@ static int copy_sc_from_user(struct pt_regs *to, void __user *from) return(ret); } -static int copy_sc_to_user(struct sigcontext *to, struct _fpstate *fp, +static int copy_sc_to_user(struct sigcontext *to, struct _fpstate __user *fp, struct pt_regs *from, unsigned long sp) { return(CHOOSE_MODE(copy_sc_to_user_tt(to, fp, UPT_SC(&from->regs), @@ -173,7 +175,7 @@ static int copy_sc_to_user(struct sigcontext *to, struct _fpstate *fp, copy_sc_to_user_skas(to, fp, from, sp))); } -static int copy_ucontext_to_user(struct ucontext *uc, struct _fpstate *fp, +static int copy_ucontext_to_user(struct ucontext __user *uc, struct _fpstate __user *fp, sigset_t *set, unsigned long sp) { int err = 0; @@ -188,7 +190,7 @@ static int copy_ucontext_to_user(struct ucontext *uc, struct _fpstate *fp, struct sigframe { - char *pretcode; + char __user *pretcode; int sig; struct sigcontext sc; struct _fpstate fpstate; @@ -198,10 +200,10 @@ struct sigframe struct rt_sigframe { - char *pretcode; + char __user *pretcode; int sig; - struct siginfo *pinfo; - void *puc; + struct siginfo __user *pinfo; + void __user *puc; struct siginfo info; struct ucontext uc; struct _fpstate fpstate; @@ -213,16 +215,16 @@ int setup_signal_stack_sc(unsigned long stack_top, int sig, sigset_t *mask) { struct sigframe __user *frame; - void *restorer; + void __user *restorer; unsigned long save_sp = PT_REGS_SP(regs); int err = 0; stack_top &= -8UL; - frame = (struct sigframe *) stack_top - 1; + frame = (struct sigframe __user *) stack_top - 1; if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) return 1; - restorer = (void *) frame->retcode; + restorer = frame->retcode; if(ka->sa.sa_flags & SA_RESTORER) restorer = ka->sa.sa_restorer; @@ -278,16 +280,16 @@ int setup_signal_stack_si(unsigned long stack_top, int sig, siginfo_t *info, sigset_t *mask) { struct rt_sigframe __user *frame; - void *restorer; + void __user *restorer; unsigned long save_sp = PT_REGS_SP(regs); int err = 0; stack_top &= -8UL; - frame = (struct rt_sigframe *) stack_top - 1; + frame = (struct rt_sigframe __user *) stack_top - 1; if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame))) return 1; - restorer = (void *) frame->retcode; + restorer = frame->retcode; if(ka->sa.sa_flags & SA_RESTORER) restorer = ka->sa.sa_restorer; @@ -333,7 +335,7 @@ int setup_signal_stack_si(unsigned long stack_top, int sig, long 
sys_sigreturn(struct pt_regs regs) { unsigned long sp = PT_REGS_SP(¤t->thread.regs); - struct sigframe __user *frame = (struct sigframe *)(sp - 8); + struct sigframe __user *frame = (struct sigframe __user *)(sp - 8); sigset_t set; struct sigcontext __user *sc = &frame->sc; unsigned long __user *oldmask = &sc->oldmask; @@ -365,8 +367,8 @@ long sys_sigreturn(struct pt_regs regs) long sys_rt_sigreturn(struct pt_regs regs) { - unsigned long __user sp = PT_REGS_SP(¤t->thread.regs); - struct rt_sigframe __user *frame = (struct rt_sigframe *) (sp - 4); + unsigned long sp = PT_REGS_SP(¤t->thread.regs); + struct rt_sigframe __user *frame = (struct rt_sigframe __user *) (sp - 4); sigset_t set; struct ucontext __user *uc = &frame->uc; int sig_size = _NSIG_WORDS * sizeof(unsigned long); diff --git a/trunk/arch/um/sys-i386/sys_call_table.S b/trunk/arch/um/sys-i386/sys_call_table.S index ad75c27afe38..1ff61474b25c 100644 --- a/trunk/arch/um/sys-i386/sys_call_table.S +++ b/trunk/arch/um/sys-i386/sys_call_table.S @@ -6,8 +6,6 @@ #define sys_vm86old sys_ni_syscall #define sys_vm86 sys_ni_syscall -#define sys_set_thread_area sys_ni_syscall -#define sys_get_thread_area sys_ni_syscall #define sys_stime um_stime #define sys_time um_time diff --git a/trunk/arch/um/sys-i386/syscalls.c b/trunk/arch/um/sys-i386/syscalls.c index 83e9be820a86..749dd1bfe60f 100644 --- a/trunk/arch/um/sys-i386/syscalls.c +++ b/trunk/arch/um/sys-i386/syscalls.c @@ -61,21 +61,27 @@ long old_select(struct sel_arg_struct __user *arg) return sys_select(a.n, a.inp, a.outp, a.exp, a.tvp); } -/* The i386 version skips reading from %esi, the fourth argument. So we must do - * this, too. +/* + * The prototype on i386 is: + * + * int clone(int flags, void * child_stack, int * parent_tidptr, struct user_desc * newtls, int * child_tidptr) + * + * and the "newtls" arg. on i386 is read by copy_thread directly from the + * register saved on the stack. */ long sys_clone(unsigned long clone_flags, unsigned long newsp, - int __user *parent_tid, int unused, int __user *child_tid) + int __user *parent_tid, void *newtls, int __user *child_tid) { long ret; if (!newsp) newsp = UPT_SP(¤t->thread.regs.regs); + current->thread.forking = 1; ret = do_fork(clone_flags, newsp, ¤t->thread.regs, 0, parent_tid, child_tid); current->thread.forking = 0; - return(ret); + return ret; } /* @@ -104,7 +110,7 @@ long sys_ipc (uint call, int first, int second, union semun fourth; if (!ptr) return -EINVAL; - if (get_user(fourth.__pad, (void **) ptr)) + if (get_user(fourth.__pad, (void __user * __user *) ptr)) return -EFAULT; return sys_semctl (first, second, third, fourth); } diff --git a/trunk/arch/um/sys-i386/tls.c b/trunk/arch/um/sys-i386/tls.c new file mode 100644 index 000000000000..a3188e861cc7 --- /dev/null +++ b/trunk/arch/um/sys-i386/tls.c @@ -0,0 +1,384 @@ +/* + * Copyright (C) 2005 Paolo 'Blaisorblade' Giarrusso + * Licensed under the GPL + */ + +#include "linux/config.h" +#include "linux/kernel.h" +#include "linux/sched.h" +#include "linux/slab.h" +#include "linux/types.h" +#include "asm/uaccess.h" +#include "asm/ptrace.h" +#include "asm/segment.h" +#include "asm/smp.h" +#include "asm/desc.h" +#include "choose-mode.h" +#include "kern.h" +#include "kern_util.h" +#include "mode_kern.h" +#include "os.h" +#include "mode.h" + +#ifdef CONFIG_MODE_SKAS +#include "skas.h" +#endif + +/* If needed we can detect when it's uninitialized. 
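The rewritten sys_clone() comment above explains why the newtls argument can
be ignored in the C signature: on i386 it arrives in %esi, and
copy_thread()/arch_copy_tls() later pull it back out of the saved registers.
From userspace the path is exercised by clone() with CLONE_SETTLS; a hedged
sketch of the TLS-relevant part only (stack allocation and error handling
left out, helper names invented):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <asm/ldt.h>        /* struct user_desc */
    #include <string.h>

    static int thread_fn(void *arg)
    {
        return 0;
    }

    /* child_stack: top of a caller-mapped stack.  tls_entry: the GDT TLS slot
     * to install into (for instance the slot the parent already uses, i.e.
     * its %gs selector >> 3); unlike set_thread_area(), the CLONE_SETTLS path
     * does not accept -1 here. */
    static int start_thread_with_tls(void *child_stack, void *tls_block,
                                     int tls_entry)
    {
        struct user_desc tls;

        memset(&tls, 0, sizeof(tls));
        tls.entry_number   = tls_entry;
        tls.base_addr      = (unsigned int)(unsigned long)tls_block;
        tls.limit          = 0xfffff;
        tls.limit_in_pages = 1;
        tls.seg_32bit      = 1;
        tls.useable        = 1;

        /* glibc's wrapper forwards this pointer as the syscall's %esi
         * argument, which is exactly what arch_copy_tls() reads back above. */
        return clone(thread_fn, child_stack,
                     CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SETTLS,
                     NULL /* arg */, NULL /* parent_tid */, &tls,
                     NULL /* child_tid */);
    }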
*/ +static int host_supports_tls = -1; +int host_gdt_entry_tls_min = -1; + +#ifdef CONFIG_MODE_SKAS +int do_set_thread_area_skas(struct user_desc *info) +{ + int ret; + u32 cpu; + + cpu = get_cpu(); + ret = os_set_thread_area(info, userspace_pid[cpu]); + put_cpu(); + return ret; +} + +int do_get_thread_area_skas(struct user_desc *info) +{ + int ret; + u32 cpu; + + cpu = get_cpu(); + ret = os_get_thread_area(info, userspace_pid[cpu]); + put_cpu(); + return ret; +} +#endif + +/* + * sys_get_thread_area: get a yet unused TLS descriptor index. + * XXX: Consider leaving one free slot for glibc usage at first place. This must + * be done here (and by changing GDT_ENTRY_TLS_* macros) and nowhere else. + * + * Also, this must be tested when compiling in SKAS mode with dinamic linking + * and running against NPTL. + */ +static int get_free_idx(struct task_struct* task) +{ + struct thread_struct *t = &task->thread; + int idx; + + if (!t->arch.tls_array) + return GDT_ENTRY_TLS_MIN; + + for (idx = 0; idx < GDT_ENTRY_TLS_ENTRIES; idx++) + if (!t->arch.tls_array[idx].present) + return idx + GDT_ENTRY_TLS_MIN; + return -ESRCH; +} + +static inline void clear_user_desc(struct user_desc* info) +{ + /* Postcondition: LDT_empty(info) returns true. */ + memset(info, 0, sizeof(*info)); + + /* Check the LDT_empty or the i386 sys_get_thread_area code - we obtain + * indeed an empty user_desc. + */ + info->read_exec_only = 1; + info->seg_not_present = 1; +} + +#define O_FORCE 1 + +static int load_TLS(int flags, struct task_struct *to) +{ + int ret = 0; + int idx; + + for (idx = GDT_ENTRY_TLS_MIN; idx < GDT_ENTRY_TLS_MAX; idx++) { + struct uml_tls_struct* curr = &to->thread.arch.tls_array[idx - GDT_ENTRY_TLS_MIN]; + + /* Actually, now if it wasn't flushed it gets cleared and + * flushed to the host, which will clear it.*/ + if (!curr->present) { + if (!curr->flushed) { + clear_user_desc(&curr->tls); + curr->tls.entry_number = idx; + } else { + WARN_ON(!LDT_empty(&curr->tls)); + continue; + } + } + + if (!(flags & O_FORCE) && curr->flushed) + continue; + + ret = do_set_thread_area(&curr->tls); + if (ret) + goto out; + + curr->flushed = 1; + } +out: + return ret; +} + +/* Verify if we need to do a flush for the new process, i.e. if there are any + * present desc's, only if they haven't been flushed. + */ +static inline int needs_TLS_update(struct task_struct *task) +{ + int i; + int ret = 0; + + for (i = GDT_ENTRY_TLS_MIN; i < GDT_ENTRY_TLS_MAX; i++) { + struct uml_tls_struct* curr = &task->thread.arch.tls_array[i - GDT_ENTRY_TLS_MIN]; + + /* Can't test curr->present, we may need to clear a descriptor + * which had a value. */ + if (curr->flushed) + continue; + ret = 1; + break; + } + return ret; +} + +/* On a newly forked process, the TLS descriptors haven't yet been flushed. So + * we mark them as such and the first switch_to will do the job. + */ +void clear_flushed_tls(struct task_struct *task) +{ + int i; + + for (i = GDT_ENTRY_TLS_MIN; i < GDT_ENTRY_TLS_MAX; i++) { + struct uml_tls_struct* curr = &task->thread.arch.tls_array[i - GDT_ENTRY_TLS_MIN]; + + /* Still correct to do this, if it wasn't present on the host it + * will remain as flushed as it was. */ + if (!curr->present) + continue; + + curr->flushed = 0; + } +} + +/* In SKAS0 mode, currently, multiple guest threads sharing the same ->mm have a + * common host process. So this is needed in SKAS0 too. + * + * However, if each thread had a different host process (and this was discussed + * for SMP support) this won't be needed. 
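load_TLS(), needs_TLS_update() and clear_flushed_tls() above all work off two
per-slot flags kept beside the cached descriptor: present (the guest has
installed this slot) and flushed (the host process currently backing this
task already carries that value). A compressed restatement of the per-slot
decision load_TLS() makes, with the struct shape inferred from the code above
rather than quoted from it:

    struct tls_slot {               /* shape of one tls_array[] entry  */
        user_desc_t tls;            /* cached descriptor               */
        unsigned int present:1;     /* guest has set this slot         */
        unsigned int flushed:1;     /* host already has this value     */
    };

    /* O_FORCE is passed when switching to a different host process. */
    static int slot_needs_host_update(const struct tls_slot *s, int force)
    {
        if (!s->present && s->flushed)
            return 0;   /* never set, and already clear on the host       */
        if (!force && s->flushed)
            return 0;   /* host copy is still current                     */
        return 1;       /* push (or clear) it via os_set_thread_area()    */
    }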
+ * + * And this will not need be used when (and if) we'll add support to the host + * SKAS patch. */ + +int arch_switch_tls_skas(struct task_struct *from, struct task_struct *to) +{ + if (!host_supports_tls) + return 0; + + /* We have no need whatsoever to switch TLS for kernel threads; beyond + * that, that would also result in us calling os_set_thread_area with + * userspace_pid[cpu] == 0, which gives an error. */ + if (likely(to->mm)) + return load_TLS(O_FORCE, to); + + return 0; +} + +int arch_switch_tls_tt(struct task_struct *from, struct task_struct *to) +{ + if (!host_supports_tls) + return 0; + + if (needs_TLS_update(to)) + return load_TLS(0, to); + + return 0; +} + +static int set_tls_entry(struct task_struct* task, struct user_desc *info, + int idx, int flushed) +{ + struct thread_struct *t = &task->thread; + + if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX) + return -EINVAL; + + t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].tls = *info; + t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].present = 1; + t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].flushed = flushed; + + return 0; +} + +int arch_copy_tls(struct task_struct *new) +{ + struct user_desc info; + int idx, ret = -EFAULT; + + if (copy_from_user(&info, + (void __user *) UPT_ESI(&new->thread.regs.regs), + sizeof(info))) + goto out; + + ret = -EINVAL; + if (LDT_empty(&info)) + goto out; + + idx = info.entry_number; + + ret = set_tls_entry(new, &info, idx, 0); +out: + return ret; +} + +/* XXX: use do_get_thread_area to read the host value? I'm not at all sure! */ +static int get_tls_entry(struct task_struct* task, struct user_desc *info, int idx) +{ + struct thread_struct *t = &task->thread; + + if (!t->arch.tls_array) + goto clear; + + if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX) + return -EINVAL; + + if (!t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].present) + goto clear; + + *info = t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].tls; + +out: + /* Temporary debugging check, to make sure that things have been + * flushed. This could be triggered if load_TLS() failed. + */ + if (unlikely(task == current && !t->arch.tls_array[idx - GDT_ENTRY_TLS_MIN].flushed)) { + printk(KERN_ERR "get_tls_entry: task with pid %d got here " + "without flushed TLS.", current->pid); + } + + return 0; +clear: + /* When the TLS entry has not been set, the values read to user in the + * tls_array are 0 (because it's cleared at boot, see + * arch/i386/kernel/head.S:cpu_gdt_table). Emulate that. + */ + clear_user_desc(info); + info->entry_number = idx; + goto out; +} + +asmlinkage int sys_set_thread_area(struct user_desc __user *user_desc) +{ + struct user_desc info; + int idx, ret; + + if (!host_supports_tls) + return -ENOSYS; + + if (copy_from_user(&info, user_desc, sizeof(info))) + return -EFAULT; + + idx = info.entry_number; + + if (idx == -1) { + idx = get_free_idx(current); + if (idx < 0) + return idx; + info.entry_number = idx; + /* Tell the user which slot we chose for him.*/ + if (put_user(idx, &user_desc->entry_number)) + return -EFAULT; + } + + ret = CHOOSE_MODE_PROC(do_set_thread_area_tt, do_set_thread_area_skas, &info); + if (ret) + return ret; + return set_tls_entry(current, &info, idx, 1); +} + +/* + * Perform set_thread_area on behalf of the traced child. + * Note: error handling is not done on the deferred load, and this differ from + * i386. However the only possible error are caused by bugs. 
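sys_set_thread_area() above keeps the native i386 contract: entry_number == -1
asks the kernel to choose a free TLS slot, and the slot actually used is
written back into the caller's struct (the put_user() above) before the
descriptor is installed. Seen from userspace, that looks roughly like the
hedged sketch below; the selector arithmetic at the end is standard i386, not
anything UML-specific:

    #include <asm/ldt.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Install a TLS segment over "base"; returns the %gs selector to load,
     * or -1 on failure. */
    static int install_tls(void *base)
    {
        struct user_desc d;

        memset(&d, 0, sizeof(d));
        d.entry_number   = -1;      /* "pick a free slot for me"          */
        d.base_addr      = (unsigned int)(unsigned long)base;
        d.limit          = 0xfffff; /* 4 GB, counted in pages             */
        d.limit_in_pages = 1;
        d.seg_32bit      = 1;
        d.useable        = 1;

        if (syscall(SYS_set_thread_area, &d) != 0)
            return -1;

        /* The kernel reported the chosen index back in d.entry_number;
         * the matching GDT selector is index * 8 with RPL 3. */
        return (d.entry_number << 3) | 3;
    }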
+ */ +int ptrace_set_thread_area(struct task_struct *child, int idx, + struct user_desc __user *user_desc) +{ + struct user_desc info; + + if (!host_supports_tls) + return -EIO; + + if (copy_from_user(&info, user_desc, sizeof(info))) + return -EFAULT; + + return set_tls_entry(child, &info, idx, 0); +} + +asmlinkage int sys_get_thread_area(struct user_desc __user *user_desc) +{ + struct user_desc info; + int idx, ret; + + if (!host_supports_tls) + return -ENOSYS; + + if (get_user(idx, &user_desc->entry_number)) + return -EFAULT; + + ret = get_tls_entry(current, &info, idx); + if (ret < 0) + goto out; + + if (copy_to_user(user_desc, &info, sizeof(info))) + ret = -EFAULT; + +out: + return ret; +} + +/* + * Perform get_thread_area on behalf of the traced child. + */ +int ptrace_get_thread_area(struct task_struct *child, int idx, + struct user_desc __user *user_desc) +{ + struct user_desc info; + int ret; + + if (!host_supports_tls) + return -EIO; + + ret = get_tls_entry(child, &info, idx); + if (ret < 0) + goto out; + + if (copy_to_user(user_desc, &info, sizeof(info))) + ret = -EFAULT; +out: + return ret; +} + + +/* XXX: This part is probably common to i386 and x86-64. Don't create a common + * file for now, do that when implementing x86-64 support.*/ +static int __init __setup_host_supports_tls(void) { + check_host_supports_tls(&host_supports_tls, &host_gdt_entry_tls_min); + if (host_supports_tls) { + printk(KERN_INFO "Host TLS support detected\n"); + printk(KERN_INFO "Detected host type: "); + switch (host_gdt_entry_tls_min) { + case GDT_ENTRY_TLS_MIN_I386: + printk("i386\n"); + break; + case GDT_ENTRY_TLS_MIN_X86_64: + printk("x86_64\n"); + break; + } + } else + printk(KERN_ERR " Host TLS support NOT detected! " + "TLS support inside UML will not work\n"); + return 1; +} + +__initcall(__setup_host_supports_tls); diff --git a/trunk/arch/um/sys-x86_64/Makefile b/trunk/arch/um/sys-x86_64/Makefile index a351091fbd99..b5fc22babddf 100644 --- a/trunk/arch/um/sys-x86_64/Makefile +++ b/trunk/arch/um/sys-x86_64/Makefile @@ -4,31 +4,23 @@ # Licensed under the GPL # -#XXX: why into lib-y? 
-lib-y = bitops.o bugs.o csum-partial.o delay.o fault.o ldt.o mem.o memcpy.o \ - ptrace.o ptrace_user.o sigcontext.o signal.o syscalls.o \ - syscall_table.o sysrq.o thunk.o -lib-$(CONFIG_MODE_SKAS) += stub.o stub_segv.o +obj-y = bugs.o delay.o fault.o ldt.o mem.o ptrace.o ptrace_user.o \ + sigcontext.o signal.o syscalls.o syscall_table.o sysrq.o ksyms.o \ + tls.o -obj-y := ksyms.o -obj-$(CONFIG_MODULES) += module.o um_module.o +obj-$(CONFIG_MODE_SKAS) += stub.o stub_segv.o +obj-$(CONFIG_MODULES) += um_module.o -USER_OBJS := ptrace_user.o sigcontext.o stub_segv.o +subarch-obj-y = lib/bitops.o lib/csum-partial.o lib/memcpy.o lib/thunk.o +subarch-obj-$(CONFIG_MODULES) += kernel/module.o -SYMLINKS = bitops.c csum-copy.S csum-partial.c csum-wrappers.c ldt.c memcpy.S \ - thunk.S module.c +ldt-y = ../sys-i386/ldt.o -include arch/um/scripts/Makefile.rules +USER_OBJS := ptrace_user.o sigcontext.o stub_segv.o -bitops.c-dir = lib -csum-copy.S-dir = lib -csum-partial.c-dir = lib -csum-wrappers.c-dir = lib -ldt.c-dir = /arch/um/sys-i386 -memcpy.S-dir = lib -thunk.S-dir = lib -module.c-dir = kernel +include arch/um/scripts/Makefile.rules -$(obj)/stub_segv.o: _c_flags = $(call unprofile,$(CFLAGS)) +extra-$(CONFIG_MODE_TT) += unmap.o -include arch/um/scripts/Makefile.unmap +$(obj)/stub_segv.o $(obj)/unmap.o: \ + _c_flags = $(call unprofile,$(CFLAGS)) diff --git a/trunk/arch/um/sys-x86_64/tls.c b/trunk/arch/um/sys-x86_64/tls.c new file mode 100644 index 000000000000..ce1bf1b81c43 --- /dev/null +++ b/trunk/arch/um/sys-x86_64/tls.c @@ -0,0 +1,14 @@ +#include "linux/sched.h" + +void debug_arch_force_load_TLS(void) +{ +} + +void clear_flushed_tls(struct task_struct *task) +{ +} + +int arch_copy_tls(struct task_struct *t) +{ + return 0; +} diff --git a/trunk/arch/x86_64/ia32/vsyscall-sigreturn.S b/trunk/arch/x86_64/ia32/vsyscall-sigreturn.S index d90321fe9bba..1384367cdbe1 100644 --- a/trunk/arch/x86_64/ia32/vsyscall-sigreturn.S +++ b/trunk/arch/x86_64/ia32/vsyscall-sigreturn.S @@ -32,9 +32,28 @@ __kernel_rt_sigreturn: .size __kernel_rt_sigreturn,.-.LSTART_rt_sigreturn .section .eh_frame,"a",@progbits +.LSTARTFRAMES: + .long .LENDCIES-.LSTARTCIES +.LSTARTCIES: + .long 0 /* CIE ID */ + .byte 1 /* Version number */ + .string "zRS" /* NUL-terminated augmentation string */ + .uleb128 1 /* Code alignment factor */ + .sleb128 -4 /* Data alignment factor */ + .byte 8 /* Return address register column */ + .uleb128 1 /* Augmentation value length */ + .byte 0x1b /* DW_EH_PE_pcrel|DW_EH_PE_sdata4. */ + .byte 0x0c /* DW_CFA_def_cfa */ + .uleb128 4 + .uleb128 4 + .byte 0x88 /* DW_CFA_offset, column 0x8 */ + .uleb128 1 + .align 4 +.LENDCIES: + .long .LENDFDE2-.LSTARTFDE2 /* Length FDE */ .LSTARTFDE2: - .long .LSTARTFDE2-.LSTARTFRAME /* CIE pointer */ + .long .LSTARTFDE2-.LSTARTFRAMES /* CIE pointer */ /* HACK: The dwarf2 unwind routines will subtract 1 from the return address to get an address in the middle of the presumed call instruction. Since we didn't get here via @@ -97,7 +116,7 @@ __kernel_rt_sigreturn: .long .LENDFDE3-.LSTARTFDE3 /* Length FDE */ .LSTARTFDE3: - .long .LSTARTFDE3-.LSTARTFRAME /* CIE pointer */ + .long .LSTARTFDE3-.LSTARTFRAMES /* CIE pointer */ /* HACK: See above wrt unwind library assumptions. */ .long .LSTART_rt_sigreturn-1-. 
/* PC-relative start address */ .long .LEND_rt_sigreturn-.LSTART_rt_sigreturn+1 diff --git a/trunk/arch/x86_64/kernel/apic.c b/trunk/arch/x86_64/kernel/apic.c index d54620147e8e..100a30c40044 100644 --- a/trunk/arch/x86_64/kernel/apic.c +++ b/trunk/arch/x86_64/kernel/apic.c @@ -615,7 +615,7 @@ static int __init apic_set_verbosity(char *str) printk(KERN_WARNING "APIC Verbosity level %s not recognised" " use apic=verbose or apic=debug", str); - return 0; + return 1; } __setup("apic=", apic_set_verbosity); @@ -1137,35 +1137,35 @@ int __init APIC_init_uniprocessor (void) static __init int setup_disableapic(char *str) { disable_apic = 1; - return 0; + return 1; } static __init int setup_nolapic(char *str) { disable_apic = 1; - return 0; + return 1; } static __init int setup_noapictimer(char *str) { if (str[0] != ' ' && str[0] != 0) - return -1; + return 0; disable_apic_timer = 1; - return 0; + return 1; } static __init int setup_apicmaintimer(char *str) { apic_runs_main_timer = 1; nohpet = 1; - return 0; + return 1; } __setup("apicmaintimer", setup_apicmaintimer); static __init int setup_noapicmaintimer(char *str) { apic_runs_main_timer = -1; - return 0; + return 1; } __setup("noapicmaintimer", setup_noapicmaintimer); diff --git a/trunk/arch/x86_64/kernel/early_printk.c b/trunk/arch/x86_64/kernel/early_printk.c index 13af920b6594..b93ef5b51980 100644 --- a/trunk/arch/x86_64/kernel/early_printk.c +++ b/trunk/arch/x86_64/kernel/early_printk.c @@ -221,7 +221,7 @@ int __init setup_early_printk(char *opt) char buf[256]; if (early_console_initialized) - return -1; + return 1; strlcpy(buf,opt,sizeof(buf)); space = strchr(buf, ' '); diff --git a/trunk/arch/x86_64/kernel/mce.c b/trunk/arch/x86_64/kernel/mce.c index 04282ef9fbd4..10b3e348fc99 100644 --- a/trunk/arch/x86_64/kernel/mce.c +++ b/trunk/arch/x86_64/kernel/mce.c @@ -501,7 +501,7 @@ static struct miscdevice mce_log_device = { static int __init mcheck_disable(char *str) { mce_dont_init = 1; - return 0; + return 1; } /* mce=off disables machine check. Note you can reenable it later @@ -521,7 +521,7 @@ static int __init mcheck_enable(char *str) get_option(&str, &tolerant); else printk("mce= argument %s ignored. 
Please use /sys", str); - return 0; + return 1; } __setup("nomce", mcheck_disable); diff --git a/trunk/arch/x86_64/kernel/pmtimer.c b/trunk/arch/x86_64/kernel/pmtimer.c index ee5ee4891f3d..b0444a415bd6 100644 --- a/trunk/arch/x86_64/kernel/pmtimer.c +++ b/trunk/arch/x86_64/kernel/pmtimer.c @@ -121,7 +121,7 @@ unsigned int do_gettimeoffset_pm(void) static int __init nopmtimer_setup(char *s) { pmtmr_ioport = 0; - return 0; + return 1; } __setup("nopmtimer", nopmtimer_setup); diff --git a/trunk/arch/x86_64/kernel/setup.c b/trunk/arch/x86_64/kernel/setup.c index d1f3e9272c05..0856ad444f90 100644 --- a/trunk/arch/x86_64/kernel/setup.c +++ b/trunk/arch/x86_64/kernel/setup.c @@ -540,7 +540,7 @@ void __init alternative_instructions(void) static int __init noreplacement_setup(char *s) { no_replacement = 1; - return 0; + return 1; } __setup("noreplacement", noreplacement_setup); diff --git a/trunk/arch/x86_64/kernel/setup64.c b/trunk/arch/x86_64/kernel/setup64.c index eabdb63fec31..8a691fa6d393 100644 --- a/trunk/arch/x86_64/kernel/setup64.c +++ b/trunk/arch/x86_64/kernel/setup64.c @@ -55,7 +55,7 @@ int __init nonx_setup(char *str) do_not_nx = 1; __supported_pte_mask &= ~_PAGE_NX; } - return 0; + return 1; } __setup("noexec=", nonx_setup); /* parsed early actually */ @@ -74,7 +74,7 @@ static int __init nonx32_setup(char *str) force_personality32 &= ~READ_IMPLIES_EXEC; else if (!strcmp(str, "off")) force_personality32 |= READ_IMPLIES_EXEC; - return 0; + return 1; } __setup("noexec32=", nonx32_setup); diff --git a/trunk/arch/x86_64/kernel/smpboot.c b/trunk/arch/x86_64/kernel/smpboot.c index ea48fa638070..71a7222cf9ce 100644 --- a/trunk/arch/x86_64/kernel/smpboot.c +++ b/trunk/arch/x86_64/kernel/smpboot.c @@ -353,7 +353,7 @@ static void __cpuinit tsc_sync_wait(void) static __init int notscsync_setup(char *s) { notscsync = 1; - return 0; + return 1; } __setup("notscsync", notscsync_setup); diff --git a/trunk/arch/x86_64/kernel/time.c b/trunk/arch/x86_64/kernel/time.c index 473b514b66e4..ef8bc46dc140 100644 --- a/trunk/arch/x86_64/kernel/time.c +++ b/trunk/arch/x86_64/kernel/time.c @@ -1306,7 +1306,7 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs) static int __init nohpet_setup(char *s) { nohpet = 1; - return 0; + return 1; } __setup("nohpet", nohpet_setup); @@ -1314,7 +1314,7 @@ __setup("nohpet", nohpet_setup); int __init notsc_setup(char *s) { notsc = 1; - return 0; + return 1; } __setup("notsc", notsc_setup); diff --git a/trunk/arch/x86_64/kernel/traps.c b/trunk/arch/x86_64/kernel/traps.c index edaa9fe654dc..6bda322d3caf 100644 --- a/trunk/arch/x86_64/kernel/traps.c +++ b/trunk/arch/x86_64/kernel/traps.c @@ -973,14 +973,14 @@ void __init trap_init(void) static int __init oops_dummy(char *s) { panic_on_oops = 1; - return -1; + return 1; } __setup("oops=", oops_dummy); static int __init kstack_setup(char *s) { kstack_depth_to_print = simple_strtoul(s,NULL,0); - return 0; + return 1; } __setup("kstack=", kstack_setup); diff --git a/trunk/arch/x86_64/kernel/x8664_ksyms.c b/trunk/arch/x86_64/kernel/x8664_ksyms.c index d96a9348e5a2..d78f46056bda 100644 --- a/trunk/arch/x86_64/kernel/x8664_ksyms.c +++ b/trunk/arch/x86_64/kernel/x8664_ksyms.c @@ -102,8 +102,6 @@ EXPORT_SYMBOL(cpu_callout_map); EXPORT_SYMBOL(screen_info); #endif -EXPORT_SYMBOL(get_wchan); - EXPORT_SYMBOL(rtc_lock); EXPORT_SYMBOL_GPL(set_nmi_callback); diff --git a/trunk/arch/x86_64/mm/fault.c b/trunk/arch/x86_64/mm/fault.c index 316c53de47bd..55250593d8c9 100644 --- a/trunk/arch/x86_64/mm/fault.c +++ 
b/trunk/arch/x86_64/mm/fault.c @@ -623,6 +623,6 @@ void vmalloc_sync_all(void) static int __init enable_pagefaulttrace(char *str) { page_fault_trace = 1; - return 0; + return 1; } __setup("pagefaulttrace", enable_pagefaulttrace); diff --git a/trunk/arch/xtensa/kernel/xtensa_ksyms.c b/trunk/arch/xtensa/kernel/xtensa_ksyms.c index efae56a51475..152b9370789b 100644 --- a/trunk/arch/xtensa/kernel/xtensa_ksyms.c +++ b/trunk/arch/xtensa/kernel/xtensa_ksyms.c @@ -113,8 +113,6 @@ EXPORT_SYMBOL(__xtensa_copy_user); // FIXME EXPORT_SYMBOL(screen_info); #endif -EXPORT_SYMBOL(get_wchan); - EXPORT_SYMBOL(outsb); EXPORT_SYMBOL(outsw); EXPORT_SYMBOL(outsl); diff --git a/trunk/block/Kconfig b/trunk/block/Kconfig index 5536839886ff..b6f5f0a79655 100644 --- a/trunk/block/Kconfig +++ b/trunk/block/Kconfig @@ -27,10 +27,10 @@ config BLK_DEV_IO_TRACE config LSF bool "Support for Large Single Files" depends on X86 || (MIPS && 32BIT) || PPC32 || ARCH_S390_31 || SUPERH || UML - default n help - When CONFIG_LBD is disabled, say Y here if you want to - handle large file(bigger than 2TB), otherwise say N. - When CONFIG_LBD is enabled, Y is set automatically. + Say Y here if you want to be able to handle very large files (bigger + than 2TB), otherwise say N. + + If unsure, say Y. source block/Kconfig.iosched diff --git a/trunk/block/elevator.c b/trunk/block/elevator.c index 56c2ed06a9e2..0d6be03d929e 100644 --- a/trunk/block/elevator.c +++ b/trunk/block/elevator.c @@ -145,7 +145,7 @@ static int __init elevator_setup(char *str) strcpy(chosen_elevator, "anticipatory"); else strncpy(chosen_elevator, str, sizeof(chosen_elevator) - 1); - return 0; + return 1; } __setup("elevator=", elevator_setup); diff --git a/trunk/block/genhd.c b/trunk/block/genhd.c index db4c60c802d6..5a8d3bf02f17 100644 --- a/trunk/block/genhd.c +++ b/trunk/block/genhd.c @@ -17,8 +17,6 @@ #include #include -#define MAX_PROBE_HASH 255 /* random */ - static struct subsystem block_subsys; static DEFINE_MUTEX(block_subsys_lock); @@ -31,108 +29,29 @@ static struct blk_major_name { struct blk_major_name *next; int major; char name[16]; -} *major_names[MAX_PROBE_HASH]; +} *major_names[BLKDEV_MAJOR_HASH_SIZE]; /* index in the above - for now: assume no multimajor ranges */ static inline int major_to_index(int major) { - return major % MAX_PROBE_HASH; -} - -struct blkdev_info { - int index; - struct blk_major_name *bd; -}; - -/* - * iterate over a list of blkdev_info structures. allows - * the major_names array to be iterated over from outside this file - * must be called with the block_subsys_lock held - */ -void *get_next_blkdev(void *dev) -{ - struct blkdev_info *info; - - if (dev == NULL) { - info = kmalloc(sizeof(*info), GFP_KERNEL); - if (!info) - goto out; - info->index=0; - info->bd = major_names[info->index]; - if (info->bd) - goto out; - } else { - info = dev; - } - - while (info->index < ARRAY_SIZE(major_names)) { - if (info->bd) - info->bd = info->bd->next; - if (info->bd) - goto out; - /* - * No devices on this chain, move to the next - */ - info->index++; - info->bd = (info->index < ARRAY_SIZE(major_names)) ? - major_names[info->index] : NULL; - if (info->bd) - goto out; - } - -out: - return info; -} - -void *acquire_blkdev_list(void) -{ - mutex_lock(&block_subsys_lock); - return get_next_blkdev(NULL); -} - -void release_blkdev_list(void *dev) -{ - mutex_unlock(&block_subsys_lock); - kfree(dev); + return major % BLKDEV_MAJOR_HASH_SIZE; } +#ifdef CONFIG_PROC_FS -/* - * Count the number of records in the blkdev_list. 
- * must be called with the block_subsys_lock held - */ -int count_blkdev_list(void) +void blkdev_show(struct seq_file *f, off_t offset) { - struct blk_major_name *n; - int i, count; + struct blk_major_name *dp; - count = 0; - - for (i = 0; i < ARRAY_SIZE(major_names); i++) { - for (n = major_names[i]; n; n = n->next) - count++; + if (offset < BLKDEV_MAJOR_HASH_SIZE) { + mutex_lock(&block_subsys_lock); + for (dp = major_names[offset]; dp; dp = dp->next) + seq_printf(f, "%3d %s\n", dp->major, dp->name); + mutex_unlock(&block_subsys_lock); } - - return count; -} - -/* - * extract the major and name values from a blkdev_info struct - * passed in as a void to *dev. Must be called with - * block_subsys_lock held - */ -int get_blkdev_info(void *dev, int *major, char **name) -{ - struct blkdev_info *info = dev; - - if (info->bd == NULL) - return 1; - - *major = info->bd->major; - *name = info->bd->name; - return 0; } +#endif /* CONFIG_PROC_FS */ int register_blkdev(unsigned int major, const char *name) { diff --git a/trunk/drivers/Kconfig b/trunk/drivers/Kconfig index 9f5c0da57c90..5c91d6afb117 100644 --- a/trunk/drivers/Kconfig +++ b/trunk/drivers/Kconfig @@ -64,6 +64,8 @@ source "drivers/usb/Kconfig" source "drivers/mmc/Kconfig" +source "drivers/leds/Kconfig" + source "drivers/infiniband/Kconfig" source "drivers/sn/Kconfig" diff --git a/trunk/drivers/Makefile b/trunk/drivers/Makefile index 424955274e60..55205c8d2318 100644 --- a/trunk/drivers/Makefile +++ b/trunk/drivers/Makefile @@ -25,9 +25,6 @@ obj-$(CONFIG_CONNECTOR) += connector/ obj-$(CONFIG_FB_I810) += video/i810/ obj-$(CONFIG_FB_INTEL) += video/intelfb/ -# we also need input/serio early so serio bus is initialized by the time -# serial drivers start registering their serio ports -obj-$(CONFIG_SERIO) += input/serio/ obj-y += serial/ obj-$(CONFIG_PARPORT) += parport/ obj-y += base/ block/ misc/ mfd/ net/ media/ @@ -53,6 +50,7 @@ obj-$(CONFIG_TC) += tc/ obj-$(CONFIG_USB) += usb/ obj-$(CONFIG_PCI) += usb/ obj-$(CONFIG_USB_GADGET) += usb/gadget/ +obj-$(CONFIG_SERIO) += input/serio/ obj-$(CONFIG_GAMEPORT) += input/gameport/ obj-$(CONFIG_INPUT) += input/ obj-$(CONFIG_I2O) += message/ @@ -69,6 +67,7 @@ obj-$(CONFIG_MCA) += mca/ obj-$(CONFIG_EISA) += eisa/ obj-$(CONFIG_CPU_FREQ) += cpufreq/ obj-$(CONFIG_MMC) += mmc/ +obj-$(CONFIG_NEW_LEDS) += leds/ obj-$(CONFIG_INFINIBAND) += infiniband/ obj-$(CONFIG_SGI_SN) += sn/ obj-y += firmware/ diff --git a/trunk/drivers/acpi/ec.c b/trunk/drivers/acpi/ec.c index 79b09d76c180..eee0864ba300 100644 --- a/trunk/drivers/acpi/ec.c +++ b/trunk/drivers/acpi/ec.c @@ -1572,7 +1572,7 @@ static void __exit acpi_ec_exit(void) static int __init acpi_fake_ecdt_setup(char *str) { acpi_fake_ecdt_enabled = 1; - return 0; + return 1; } __setup("acpi_fake_ecdt", acpi_fake_ecdt_setup); @@ -1591,7 +1591,7 @@ static int __init acpi_ec_set_intr_mode(char *str) acpi_ec_driver.ops.add = acpi_ec_poll_add; } printk(KERN_INFO PREFIX "EC %s mode.\n", intr ? 
"interrupt" : "polling"); - return 0; + return 1; } __setup("ec_intr=", acpi_ec_set_intr_mode); diff --git a/trunk/drivers/block/amiflop.c b/trunk/drivers/block/amiflop.c index b6e290956214..2a8af685926f 100644 --- a/trunk/drivers/block/amiflop.c +++ b/trunk/drivers/block/amiflop.c @@ -1850,6 +1850,7 @@ static int __init amiga_floppy_setup (char *str) return 0; printk (KERN_INFO "amiflop: Setting default df0 to %x\n", n); fd_def_df0 = n; + return 1; } __setup("floppy=", amiga_floppy_setup); diff --git a/trunk/drivers/char/hvcs.c b/trunk/drivers/char/hvcs.c index 327b00c3c45e..8d97b3911293 100644 --- a/trunk/drivers/char/hvcs.c +++ b/trunk/drivers/char/hvcs.c @@ -904,7 +904,7 @@ static int hvcs_enable_device(struct hvcs_struct *hvcsd, uint32_t unit_address, * It is possible the vty-server was removed after the irq was * requested but before we have time to enable interrupts. */ - if (vio_enable_interrupts(vdev) == H_Success) + if (vio_enable_interrupts(vdev) == H_SUCCESS) return 0; else { printk(KERN_ERR "HVCS: int enable failed for" diff --git a/trunk/drivers/char/ipmi/ipmi_devintf.c b/trunk/drivers/char/ipmi/ipmi_devintf.c index 932feedda262..e1c95374984c 100644 --- a/trunk/drivers/char/ipmi/ipmi_devintf.c +++ b/trunk/drivers/char/ipmi/ipmi_devintf.c @@ -42,7 +42,7 @@ #include #include #include -#include +#include #include #include #include @@ -55,7 +55,7 @@ struct ipmi_file_private struct file *file; struct fasync_struct *fasync_queue; wait_queue_head_t wait; - struct semaphore recv_sem; + struct mutex recv_mutex; int default_retries; unsigned int default_retry_time_ms; }; @@ -141,7 +141,7 @@ static int ipmi_open(struct inode *inode, struct file *file) INIT_LIST_HEAD(&(priv->recv_msgs)); init_waitqueue_head(&priv->wait); priv->fasync_queue = NULL; - sema_init(&(priv->recv_sem), 1); + mutex_init(&priv->recv_mutex); /* Use the low-level defaults. */ priv->default_retries = -1; @@ -285,15 +285,15 @@ static int ipmi_ioctl(struct inode *inode, break; } - /* We claim a semaphore because we don't want two + /* We claim a mutex because we don't want two users getting something from the queue at a time. Since we have to release the spinlock before we can copy the data to the user, it's possible another user will grab something from the queue, too. Then the messages might get out of order if something fails and the message gets put back onto the - queue. This semaphore prevents that problem. */ - down(&(priv->recv_sem)); + queue. This mutex prevents that problem. */ + mutex_lock(&priv->recv_mutex); /* Grab the message off the list. 
*/ spin_lock_irqsave(&(priv->recv_msg_lock), flags); @@ -352,7 +352,7 @@ static int ipmi_ioctl(struct inode *inode, goto recv_putback_on_err; } - up(&(priv->recv_sem)); + mutex_unlock(&priv->recv_mutex); ipmi_free_recv_msg(msg); break; @@ -362,11 +362,11 @@ static int ipmi_ioctl(struct inode *inode, spin_lock_irqsave(&(priv->recv_msg_lock), flags); list_add(entry, &(priv->recv_msgs)); spin_unlock_irqrestore(&(priv->recv_msg_lock), flags); - up(&(priv->recv_sem)); + mutex_unlock(&priv->recv_mutex); break; recv_err: - up(&(priv->recv_sem)); + mutex_unlock(&priv->recv_mutex); break; } diff --git a/trunk/drivers/char/ipmi/ipmi_kcs_sm.c b/trunk/drivers/char/ipmi/ipmi_kcs_sm.c index da1554194d3d..2062675f9e99 100644 --- a/trunk/drivers/char/ipmi/ipmi_kcs_sm.c +++ b/trunk/drivers/char/ipmi/ipmi_kcs_sm.c @@ -227,7 +227,7 @@ static inline int check_ibf(struct si_sm_data *kcs, unsigned char status, static inline int check_obf(struct si_sm_data *kcs, unsigned char status, long time) { - if (! GET_STATUS_OBF(status)) { + if (!GET_STATUS_OBF(status)) { kcs->obf_timeout -= time; if (kcs->obf_timeout < 0) { start_error_recovery(kcs, "OBF not ready in time"); @@ -407,7 +407,7 @@ static enum si_sm_result kcs_event(struct si_sm_data *kcs, long time) } if (state == KCS_READ_STATE) { - if (! check_obf(kcs, status, time)) + if (!check_obf(kcs, status, time)) return SI_SM_CALL_WITH_DELAY; read_next_byte(kcs); } else { @@ -447,7 +447,7 @@ static enum si_sm_result kcs_event(struct si_sm_data *kcs, long time) "Not in read state for error2"); break; } - if (! check_obf(kcs, status, time)) + if (!check_obf(kcs, status, time)) return SI_SM_CALL_WITH_DELAY; clear_obf(kcs, status); @@ -462,7 +462,7 @@ static enum si_sm_result kcs_event(struct si_sm_data *kcs, long time) break; } - if (! check_obf(kcs, status, time)) + if (!check_obf(kcs, status, time)) return SI_SM_CALL_WITH_DELAY; clear_obf(kcs, status); diff --git a/trunk/drivers/char/ipmi/ipmi_msghandler.c b/trunk/drivers/char/ipmi/ipmi_msghandler.c index 40eb005b9d77..0ded046d5aa8 100644 --- a/trunk/drivers/char/ipmi/ipmi_msghandler.c +++ b/trunk/drivers/char/ipmi/ipmi_msghandler.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include #include @@ -234,7 +235,7 @@ struct ipmi_smi /* The list of command receivers that are registered for commands on this interface. */ - struct semaphore cmd_rcvrs_lock; + struct mutex cmd_rcvrs_mutex; struct list_head cmd_rcvrs; /* Events that were queues because no one was there to receive @@ -387,10 +388,10 @@ static void clean_up_interface_data(ipmi_smi_t intf) /* Wholesale remove all the entries from the list in the * interface and wait for RCU to know that none are in use. */ - down(&intf->cmd_rcvrs_lock); + mutex_lock(&intf->cmd_rcvrs_mutex); list_add_rcu(&list, &intf->cmd_rcvrs); list_del_rcu(&intf->cmd_rcvrs); - up(&intf->cmd_rcvrs_lock); + mutex_unlock(&intf->cmd_rcvrs_mutex); synchronize_rcu(); list_for_each_entry_safe(rcvr, rcvr2, &list, link) @@ -557,7 +558,7 @@ unsigned int ipmi_addr_length(int addr_type) static void deliver_response(struct ipmi_recv_msg *msg) { - if (! msg->user) { + if (!msg->user) { ipmi_smi_t intf = msg->user_msg_data; unsigned long flags; @@ -598,11 +599,11 @@ static int intf_next_seq(ipmi_smi_t intf, (i+1)%IPMI_IPMB_NUM_SEQ != intf->curr_seq; i = (i+1)%IPMI_IPMB_NUM_SEQ) { - if (! intf->seq_table[i].inuse) + if (!intf->seq_table[i].inuse) break; } - if (! 
intf->seq_table[i].inuse) { + if (!intf->seq_table[i].inuse) { intf->seq_table[i].recv_msg = recv_msg; /* Start with the maximum timeout, when the send response @@ -763,7 +764,7 @@ int ipmi_create_user(unsigned int if_num, } new_user = kmalloc(sizeof(*new_user), GFP_KERNEL); - if (! new_user) + if (!new_user) return -ENOMEM; spin_lock_irqsave(&interfaces_lock, flags); @@ -819,14 +820,13 @@ static void free_user(struct kref *ref) int ipmi_destroy_user(ipmi_user_t user) { - int rv = -ENODEV; ipmi_smi_t intf = user->intf; int i; unsigned long flags; struct cmd_rcvr *rcvr; struct cmd_rcvr *rcvrs = NULL; - user->valid = 1; + user->valid = 0; /* Remove the user from the interface's sequence table. */ spin_lock_irqsave(&intf->seq_lock, flags); @@ -847,7 +847,7 @@ int ipmi_destroy_user(ipmi_user_t user) * since other things may be using it till we do * synchronize_rcu()) then free everything in that list. */ - down(&intf->cmd_rcvrs_lock); + mutex_lock(&intf->cmd_rcvrs_mutex); list_for_each_entry_rcu(rcvr, &intf->cmd_rcvrs, link) { if (rcvr->user == user) { list_del_rcu(&rcvr->link); @@ -855,7 +855,7 @@ int ipmi_destroy_user(ipmi_user_t user) rcvrs = rcvr; } } - up(&intf->cmd_rcvrs_lock); + mutex_unlock(&intf->cmd_rcvrs_mutex); synchronize_rcu(); while (rcvrs) { rcvr = rcvrs; @@ -871,7 +871,7 @@ int ipmi_destroy_user(ipmi_user_t user) kref_put(&user->refcount, free_user); - return rv; + return 0; } void ipmi_get_version(ipmi_user_t user, @@ -936,7 +936,8 @@ int ipmi_set_gets_events(ipmi_user_t user, int val) if (val) { /* Deliver any queued events. */ - list_for_each_entry_safe(msg, msg2, &intf->waiting_events, link) { + list_for_each_entry_safe(msg, msg2, &intf->waiting_events, + link) { list_del(&msg->link); list_add_tail(&msg->link, &msgs); } @@ -978,13 +979,13 @@ int ipmi_register_for_cmd(ipmi_user_t user, rcvr = kmalloc(sizeof(*rcvr), GFP_KERNEL); - if (! rcvr) + if (!rcvr) return -ENOMEM; rcvr->cmd = cmd; rcvr->netfn = netfn; rcvr->user = user; - down(&intf->cmd_rcvrs_lock); + mutex_lock(&intf->cmd_rcvrs_mutex); /* Make sure the command/netfn is not already registered. */ entry = find_cmd_rcvr(intf, netfn, cmd); if (entry) { @@ -995,7 +996,7 @@ int ipmi_register_for_cmd(ipmi_user_t user, list_add_rcu(&rcvr->link, &intf->cmd_rcvrs); out_unlock: - up(&intf->cmd_rcvrs_lock); + mutex_unlock(&intf->cmd_rcvrs_mutex); if (rv) kfree(rcvr); @@ -1009,17 +1010,17 @@ int ipmi_unregister_for_cmd(ipmi_user_t user, ipmi_smi_t intf = user->intf; struct cmd_rcvr *rcvr; - down(&intf->cmd_rcvrs_lock); + mutex_lock(&intf->cmd_rcvrs_mutex); /* Make sure the command/netfn is not already registered. */ rcvr = find_cmd_rcvr(intf, netfn, cmd); if ((rcvr) && (rcvr->user == user)) { list_del_rcu(&rcvr->link); - up(&intf->cmd_rcvrs_lock); + mutex_unlock(&intf->cmd_rcvrs_mutex); synchronize_rcu(); kfree(rcvr); return 0; } else { - up(&intf->cmd_rcvrs_lock); + mutex_unlock(&intf->cmd_rcvrs_mutex); return -ENOENT; } } @@ -1514,7 +1515,7 @@ int ipmi_request_settime(ipmi_user_t user, unsigned char saddr, lun; int rv; - if (! user) + if (!user) return -EINVAL; rv = check_addr(user->intf, addr, &saddr, &lun); if (rv) @@ -1545,7 +1546,7 @@ int ipmi_request_supply_msgs(ipmi_user_t user, unsigned char saddr, lun; int rv; - if (! 
user) + if (!user) return -EINVAL; rv = check_addr(user->intf, addr, &saddr, &lun); if (rv) @@ -1570,7 +1571,7 @@ static int ipmb_file_read_proc(char *page, char **start, off_t off, char *out = (char *) page; ipmi_smi_t intf = data; int i; - int rv= 0; + int rv = 0; for (i = 0; i < IPMI_MAX_CHANNELS; i++) rv += sprintf(out+rv, "%x ", intf->channels[i].address); @@ -1989,7 +1990,7 @@ static int ipmi_bmc_register(ipmi_smi_t intf) } else { bmc->dev = platform_device_alloc("ipmi_bmc", bmc->id.device_id); - if (! bmc->dev) { + if (!bmc->dev) { printk(KERN_ERR "ipmi_msghandler:" " Unable to allocate platform device\n"); @@ -2305,8 +2306,7 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers, void *send_info, struct ipmi_device_id *device_id, struct device *si_dev, - unsigned char slave_addr, - ipmi_smi_t *new_intf) + unsigned char slave_addr) { int i, j; int rv; @@ -2366,7 +2366,7 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers, spin_lock_init(&intf->events_lock); INIT_LIST_HEAD(&intf->waiting_events); intf->waiting_events_count = 0; - init_MUTEX(&intf->cmd_rcvrs_lock); + mutex_init(&intf->cmd_rcvrs_mutex); INIT_LIST_HEAD(&intf->cmd_rcvrs); init_waitqueue_head(&intf->waitq); @@ -2388,9 +2388,9 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers, if (rv) goto out; - /* FIXME - this is an ugly kludge, this sets the intf for the - caller before sending any messages with it. */ - *new_intf = intf; + rv = handlers->start_processing(send_info, intf); + if (rv) + goto out; get_guid(intf); @@ -2622,7 +2622,7 @@ static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf, spin_unlock_irqrestore(&intf->counter_lock, flags); recv_msg = ipmi_alloc_recv_msg(); - if (! recv_msg) { + if (!recv_msg) { /* We couldn't allocate memory for the message, so requeue it for handling later. */ @@ -2777,7 +2777,7 @@ static int handle_lan_get_msg_cmd(ipmi_smi_t intf, spin_unlock_irqrestore(&intf->counter_lock, flags); recv_msg = ipmi_alloc_recv_msg(); - if (! recv_msg) { + if (!recv_msg) { /* We couldn't allocate memory for the message, so requeue it for handling later. */ @@ -2869,13 +2869,14 @@ static int handle_read_event_rsp(ipmi_smi_t intf, events. */ rcu_read_lock(); list_for_each_entry_rcu(user, &intf->users, link) { - if (! user->gets_events) + if (!user->gets_events) continue; recv_msg = ipmi_alloc_recv_msg(); - if (! recv_msg) { + if (!recv_msg) { rcu_read_unlock(); - list_for_each_entry_safe(recv_msg, recv_msg2, &msgs, link) { + list_for_each_entry_safe(recv_msg, recv_msg2, &msgs, + link) { list_del(&recv_msg->link); ipmi_free_recv_msg(recv_msg); } @@ -2905,7 +2906,7 @@ static int handle_read_event_rsp(ipmi_smi_t intf, /* No one to receive the message, put it in queue if there's not already too many things in the queue. */ recv_msg = ipmi_alloc_recv_msg(); - if (! recv_msg) { + if (!recv_msg) { /* We couldn't allocate memory for the message, so requeue it for handling later. */ @@ -3190,7 +3191,7 @@ void ipmi_smi_watchdog_pretimeout(ipmi_smi_t intf) rcu_read_lock(); list_for_each_entry_rcu(user, &intf->users, link) { - if (! user->handler->ipmi_watchdog_pretimeout) + if (!user->handler->ipmi_watchdog_pretimeout) continue; user->handler->ipmi_watchdog_pretimeout(user->handler_data); @@ -3278,7 +3279,7 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent, smi_msg = smi_from_recv_msg(intf, ent->recv_msg, slot, ent->seqid); - if (! 
smi_msg) + if (!smi_msg) return; spin_unlock_irqrestore(&intf->seq_lock, *flags); @@ -3314,8 +3315,9 @@ static void ipmi_timeout_handler(long timeout_period) /* See if any waiting messages need to be processed. */ spin_lock_irqsave(&intf->waiting_msgs_lock, flags); - list_for_each_entry_safe(smi_msg, smi_msg2, &intf->waiting_msgs, link) { - if (! handle_new_recv_msg(intf, smi_msg)) { + list_for_each_entry_safe(smi_msg, smi_msg2, + &intf->waiting_msgs, link) { + if (!handle_new_recv_msg(intf, smi_msg)) { list_del(&smi_msg->link); ipmi_free_smi_msg(smi_msg); } else { diff --git a/trunk/drivers/char/ipmi/ipmi_poweroff.c b/trunk/drivers/char/ipmi/ipmi_poweroff.c index 786a2802ca34..d0b5c08e7b4e 100644 --- a/trunk/drivers/char/ipmi/ipmi_poweroff.c +++ b/trunk/drivers/char/ipmi/ipmi_poweroff.c @@ -346,7 +346,7 @@ static int ipmi_dell_chassis_detect (ipmi_user_t user) { const char ipmi_version_major = ipmi_version & 0xF; const char ipmi_version_minor = (ipmi_version >> 4) & 0xF; - const char mfr[3]=DELL_IANA_MFR_ID; + const char mfr[3] = DELL_IANA_MFR_ID; if (!memcmp(mfr, &mfg_id, sizeof(mfr)) && ipmi_version_major <= 1 && ipmi_version_minor < 5) diff --git a/trunk/drivers/char/ipmi/ipmi_si_intf.c b/trunk/drivers/char/ipmi/ipmi_si_intf.c index 35fbd4d8ed4b..a86c0f29953e 100644 --- a/trunk/drivers/char/ipmi/ipmi_si_intf.c +++ b/trunk/drivers/char/ipmi/ipmi_si_intf.c @@ -803,7 +803,7 @@ static int ipmi_thread(void *data) set_user_nice(current, 19); while (!kthread_should_stop()) { spin_lock_irqsave(&(smi_info->si_lock), flags); - smi_result=smi_event_handler(smi_info, 0); + smi_result = smi_event_handler(smi_info, 0); spin_unlock_irqrestore(&(smi_info->si_lock), flags); if (smi_result == SI_SM_CALL_WITHOUT_DELAY) { /* do nothing */ @@ -972,10 +972,37 @@ static irqreturn_t si_bt_irq_handler(int irq, void *data, struct pt_regs *regs) return si_irq_handler(irq, data, regs); } +static int smi_start_processing(void *send_info, + ipmi_smi_t intf) +{ + struct smi_info *new_smi = send_info; + + new_smi->intf = intf; + + /* Set up the timer that drives the interface. */ + setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi); + new_smi->last_timeout_jiffies = jiffies; + mod_timer(&new_smi->si_timer, jiffies + SI_TIMEOUT_JIFFIES); + + if (new_smi->si_type != SI_BT) { + new_smi->thread = kthread_run(ipmi_thread, new_smi, + "kipmi%d", new_smi->intf_num); + if (IS_ERR(new_smi->thread)) { + printk(KERN_NOTICE "ipmi_si_intf: Could not start" + " kernel thread due to error %ld, only using" + " timers to drive the interface\n", + PTR_ERR(new_smi->thread)); + new_smi->thread = NULL; + } + } + + return 0; +} static struct ipmi_smi_handlers handlers = { .owner = THIS_MODULE, + .start_processing = smi_start_processing, .sender = sender, .request_events = request_events, .set_run_to_completion = set_run_to_completion, @@ -987,7 +1014,7 @@ static struct ipmi_smi_handlers handlers = #define SI_MAX_PARMS 4 static LIST_HEAD(smi_infos); -static DECLARE_MUTEX(smi_infos_lock); +static DEFINE_MUTEX(smi_infos_lock); static int smi_num; /* Used to sequence the SMIs */ #define DEFAULT_REGSPACING 1 @@ -2162,9 +2189,13 @@ static void setup_xaction_handlers(struct smi_info *smi_info) static inline void wait_for_timer_and_thread(struct smi_info *smi_info) { - if (smi_info->thread != NULL && smi_info->thread != ERR_PTR(-ENOMEM)) - kthread_stop(smi_info->thread); - del_timer_sync(&smi_info->si_timer); + if (smi_info->intf) { + /* The timer and thread are only running if the + interface has been started up and registered. 
*/ + if (smi_info->thread != NULL) + kthread_stop(smi_info->thread); + del_timer_sync(&smi_info->si_timer); + } } static struct ipmi_default_vals @@ -2245,7 +2276,7 @@ static int try_smi_init(struct smi_info *new_smi) new_smi->slave_addr, new_smi->irq); } - down(&smi_infos_lock); + mutex_lock(&smi_infos_lock); if (!is_new_interface(new_smi)) { printk(KERN_WARNING "ipmi_si: duplicate interface\n"); rv = -EBUSY; @@ -2341,21 +2372,6 @@ static int try_smi_init(struct smi_info *new_smi) if (new_smi->irq) new_smi->si_state = SI_CLEARING_FLAGS_THEN_SET_IRQ; - /* The ipmi_register_smi() code does some operations to - determine the channel information, so we must be ready to - handle operations before it is called. This means we have - to stop the timer if we get an error after this point. */ - init_timer(&(new_smi->si_timer)); - new_smi->si_timer.data = (long) new_smi; - new_smi->si_timer.function = smi_timeout; - new_smi->last_timeout_jiffies = jiffies; - new_smi->si_timer.expires = jiffies + SI_TIMEOUT_JIFFIES; - - add_timer(&(new_smi->si_timer)); - if (new_smi->si_type != SI_BT) - new_smi->thread = kthread_run(ipmi_thread, new_smi, - "kipmi%d", new_smi->intf_num); - if (!new_smi->dev) { /* If we don't already have a device from something * else (like PCI), then register a new one. */ @@ -2365,7 +2381,7 @@ static int try_smi_init(struct smi_info *new_smi) printk(KERN_ERR "ipmi_si_intf:" " Unable to allocate platform device\n"); - goto out_err_stop_timer; + goto out_err; } new_smi->dev = &new_smi->pdev->dev; new_smi->dev->driver = &ipmi_driver; @@ -2377,7 +2393,7 @@ static int try_smi_init(struct smi_info *new_smi) " Unable to register system interface device:" " %d\n", rv); - goto out_err_stop_timer; + goto out_err; } new_smi->dev_registered = 1; } @@ -2386,8 +2402,7 @@ static int try_smi_init(struct smi_info *new_smi) new_smi, &new_smi->device_id, new_smi->dev, - new_smi->slave_addr, - &(new_smi->intf)); + new_smi->slave_addr); if (rv) { printk(KERN_ERR "ipmi_si: Unable to register device: error %d\n", @@ -2417,7 +2432,7 @@ static int try_smi_init(struct smi_info *new_smi) list_add_tail(&new_smi->link, &smi_infos); - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); printk(" IPMI %s interface initialized\n",si_to_str[new_smi->si_type]); @@ -2454,7 +2469,7 @@ static int try_smi_init(struct smi_info *new_smi) kfree(new_smi); - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); return rv; } @@ -2512,26 +2527,26 @@ static __devinit int init_ipmi_si(void) #endif if (si_trydefaults) { - down(&smi_infos_lock); + mutex_lock(&smi_infos_lock); if (list_empty(&smi_infos)) { /* No BMC was found, try defaults. 
*/ - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); default_find_bmc(); } else { - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); } } - down(&smi_infos_lock); + mutex_lock(&smi_infos_lock); if (list_empty(&smi_infos)) { - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); #ifdef CONFIG_PCI pci_unregister_driver(&ipmi_pci_driver); #endif printk("ipmi_si: Unable to find any System Interface(s)\n"); return -ENODEV; } else { - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); return 0; } } @@ -2607,10 +2622,10 @@ static __exit void cleanup_ipmi_si(void) pci_unregister_driver(&ipmi_pci_driver); #endif - down(&smi_infos_lock); + mutex_lock(&smi_infos_lock); list_for_each_entry_safe(e, tmp_e, &smi_infos, link) cleanup_one_si(e); - up(&smi_infos_lock); + mutex_unlock(&smi_infos_lock); driver_unregister(&ipmi_driver); } diff --git a/trunk/drivers/char/ipmi/ipmi_watchdog.c b/trunk/drivers/char/ipmi/ipmi_watchdog.c index 7ece9f3c8f70..2d11ddd99e55 100644 --- a/trunk/drivers/char/ipmi/ipmi_watchdog.c +++ b/trunk/drivers/char/ipmi/ipmi_watchdog.c @@ -39,6 +39,7 @@ #include #include #include +#include #include #include #include @@ -303,21 +304,22 @@ static int ipmi_heartbeat(void); static void panic_halt_ipmi_heartbeat(void); -/* We use a semaphore to make sure that only one thing can send a set +/* We use a mutex to make sure that only one thing can send a set timeout at one time, because we only have one copy of the data. - The semaphore is claimed when the set_timeout is sent and freed + The mutex is claimed when the set_timeout is sent and freed when both messages are free. */ static atomic_t set_timeout_tofree = ATOMIC_INIT(0); -static DECLARE_MUTEX(set_timeout_lock); +static DEFINE_MUTEX(set_timeout_lock); +static DECLARE_COMPLETION(set_timeout_wait); static void set_timeout_free_smi(struct ipmi_smi_msg *msg) { if (atomic_dec_and_test(&set_timeout_tofree)) - up(&set_timeout_lock); + complete(&set_timeout_wait); } static void set_timeout_free_recv(struct ipmi_recv_msg *msg) { if (atomic_dec_and_test(&set_timeout_tofree)) - up(&set_timeout_lock); + complete(&set_timeout_wait); } static struct ipmi_smi_msg set_timeout_smi_msg = { @@ -399,7 +401,7 @@ static int ipmi_set_timeout(int do_heartbeat) /* We can only send one of these at a time. */ - down(&set_timeout_lock); + mutex_lock(&set_timeout_lock); atomic_set(&set_timeout_tofree, 2); @@ -407,16 +409,21 @@ static int ipmi_set_timeout(int do_heartbeat) &set_timeout_recv_msg, &send_heartbeat_now); if (rv) { - up(&set_timeout_lock); - } else { - if ((do_heartbeat == IPMI_SET_TIMEOUT_FORCE_HB) - || ((send_heartbeat_now) - && (do_heartbeat == IPMI_SET_TIMEOUT_HB_IF_NECESSARY))) - { - rv = ipmi_heartbeat(); - } + mutex_unlock(&set_timeout_lock); + goto out; } + wait_for_completion(&set_timeout_wait); + + if ((do_heartbeat == IPMI_SET_TIMEOUT_FORCE_HB) + || ((send_heartbeat_now) + && (do_heartbeat == IPMI_SET_TIMEOUT_HB_IF_NECESSARY))) + { + rv = ipmi_heartbeat(); + } + mutex_unlock(&set_timeout_lock); + +out: return rv; } @@ -458,17 +465,17 @@ static void panic_halt_ipmi_set_timeout(void) The semaphore is claimed when the set_timeout is sent and freed when both messages are free. 
*/ static atomic_t heartbeat_tofree = ATOMIC_INIT(0); -static DECLARE_MUTEX(heartbeat_lock); -static DECLARE_MUTEX_LOCKED(heartbeat_wait_lock); +static DEFINE_MUTEX(heartbeat_lock); +static DECLARE_COMPLETION(heartbeat_wait); static void heartbeat_free_smi(struct ipmi_smi_msg *msg) { if (atomic_dec_and_test(&heartbeat_tofree)) - up(&heartbeat_wait_lock); + complete(&heartbeat_wait); } static void heartbeat_free_recv(struct ipmi_recv_msg *msg) { if (atomic_dec_and_test(&heartbeat_tofree)) - up(&heartbeat_wait_lock); + complete(&heartbeat_wait); } static struct ipmi_smi_msg heartbeat_smi_msg = { @@ -511,14 +518,14 @@ static int ipmi_heartbeat(void) return ipmi_set_timeout(IPMI_SET_TIMEOUT_HB_IF_NECESSARY); } - down(&heartbeat_lock); + mutex_lock(&heartbeat_lock); atomic_set(&heartbeat_tofree, 2); /* Don't reset the timer if we have the timer turned off, that re-enables the watchdog. */ if (ipmi_watchdog_state == WDOG_TIMEOUT_NONE) { - up(&heartbeat_lock); + mutex_unlock(&heartbeat_lock); return 0; } @@ -539,14 +546,14 @@ static int ipmi_heartbeat(void) &heartbeat_recv_msg, 1); if (rv) { - up(&heartbeat_lock); + mutex_unlock(&heartbeat_lock); printk(KERN_WARNING PFX "heartbeat failure: %d\n", rv); return rv; } /* Wait for the heartbeat to be sent. */ - down(&heartbeat_wait_lock); + wait_for_completion(&heartbeat_wait); if (heartbeat_recv_msg.msg.data[0] != 0) { /* Got an error in the heartbeat response. It was already @@ -555,7 +562,7 @@ static int ipmi_heartbeat(void) rv = -EINVAL; } - up(&heartbeat_lock); + mutex_unlock(&heartbeat_lock); return rv; } @@ -589,7 +596,7 @@ static void panic_halt_ipmi_heartbeat(void) 1); } -static struct watchdog_info ident= +static struct watchdog_info ident = { .options = 0, /* WDIOF_SETTIMEOUT, */ .firmware_version = 1, @@ -790,13 +797,13 @@ static int ipmi_fasync(int fd, struct file *file, int on) static int ipmi_close(struct inode *ino, struct file *filep) { - if (iminor(ino)==WATCHDOG_MINOR) - { + if (iminor(ino) == WATCHDOG_MINOR) { if (expect_close == 42) { ipmi_watchdog_state = WDOG_TIMEOUT_NONE; ipmi_set_timeout(IPMI_SET_TIMEOUT_NO_HB); } else { - printk(KERN_CRIT PFX "Unexpected close, not stopping watchdog!\n"); + printk(KERN_CRIT PFX + "Unexpected close, not stopping watchdog!\n"); ipmi_heartbeat(); } clear_bit(0, &ipmi_wdog_open); diff --git a/trunk/drivers/char/istallion.c b/trunk/drivers/char/istallion.c index e5247f85a446..ef20c1fc9c4c 100644 --- a/trunk/drivers/char/istallion.c +++ b/trunk/drivers/char/istallion.c @@ -706,7 +706,6 @@ static int stli_portcmdstats(stliport_t *portp); static int stli_clrportstats(stliport_t *portp, comstats_t __user *cp); static int stli_getportstruct(stliport_t __user *arg); static int stli_getbrdstruct(stlibrd_t __user *arg); -static void *stli_memalloc(int len); static stlibrd_t *stli_allocbrd(void); static void stli_ecpinit(stlibrd_t *brdp); @@ -997,17 +996,6 @@ static int stli_parsebrd(stlconf_t *confp, char **argp) /*****************************************************************************/ -/* - * Local driver kernel malloc routine. 
- */ - -static void *stli_memalloc(int len) -{ - return((void *) kmalloc(len, GFP_KERNEL)); -} - -/*****************************************************************************/ - static int stli_open(struct tty_struct *tty, struct file *filp) { stlibrd_t *brdp; @@ -3227,13 +3215,12 @@ static int stli_initports(stlibrd_t *brdp) #endif for (i = 0, panelnr = 0, panelport = 0; (i < brdp->nrports); i++) { - portp = (stliport_t *) stli_memalloc(sizeof(stliport_t)); - if (portp == (stliport_t *) NULL) { + portp = kzalloc(sizeof(stliport_t), GFP_KERNEL); + if (!portp) { printk("STALLION: failed to allocate port structure\n"); continue; } - memset(portp, 0, sizeof(stliport_t)); portp->magic = STLI_PORTMAGIC; portp->portnr = i; portp->brdnr = brdp->brdnr; @@ -4610,14 +4597,13 @@ static stlibrd_t *stli_allocbrd(void) { stlibrd_t *brdp; - brdp = (stlibrd_t *) stli_memalloc(sizeof(stlibrd_t)); - if (brdp == (stlibrd_t *) NULL) { + brdp = kzalloc(sizeof(stlibrd_t), GFP_KERNEL); + if (!brdp) { printk(KERN_ERR "STALLION: failed to allocate memory " "(size=%d)\n", sizeof(stlibrd_t)); - return((stlibrd_t *) NULL); + return NULL; } - memset(brdp, 0, sizeof(stlibrd_t)); brdp->magic = STLI_BOARDMAGIC; return(brdp); } @@ -5210,12 +5196,12 @@ int __init stli_init(void) /* * Allocate a temporary write buffer. */ - stli_tmpwritebuf = (char *) stli_memalloc(STLI_TXBUFSIZE); - if (stli_tmpwritebuf == (char *) NULL) + stli_tmpwritebuf = kmalloc(STLI_TXBUFSIZE, GFP_KERNEL); + if (!stli_tmpwritebuf) printk(KERN_ERR "STALLION: failed to allocate memory " "(size=%d)\n", STLI_TXBUFSIZE); - stli_txcookbuf = stli_memalloc(STLI_TXBUFSIZE); - if (stli_txcookbuf == (char *) NULL) + stli_txcookbuf = kmalloc(STLI_TXBUFSIZE, GFP_KERNEL); + if (!stli_txcookbuf) printk(KERN_ERR "STALLION: failed to allocate memory " "(size=%d)\n", STLI_TXBUFSIZE); diff --git a/trunk/drivers/char/keyboard.c b/trunk/drivers/char/keyboard.c index 8b603b2d1c42..935670a3cd98 100644 --- a/trunk/drivers/char/keyboard.c +++ b/trunk/drivers/char/keyboard.c @@ -74,7 +74,7 @@ void compute_shiftstate(void); k_self, k_fn, k_spec, k_pad,\ k_dead, k_cons, k_cur, k_shift,\ k_meta, k_ascii, k_lock, k_lowercase,\ - k_slock, k_dead2, k_ignore, k_ignore + k_slock, k_dead2, k_brl, k_ignore typedef void (k_handler_fn)(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs); @@ -100,7 +100,7 @@ static fn_handler_fn *fn_handler[] = { FN_HANDLERS }; const int max_vals[] = { 255, ARRAY_SIZE(func_table) - 1, ARRAY_SIZE(fn_handler) - 1, NR_PAD - 1, NR_DEAD - 1, 255, 3, NR_SHIFT - 1, 255, NR_ASCII - 1, NR_LOCK - 1, - 255, NR_LOCK - 1, 255 + 255, NR_LOCK - 1, 255, NR_BRL - 1 }; const int NR_TYPES = ARRAY_SIZE(max_vals); @@ -126,7 +126,7 @@ static unsigned long key_down[NBITS(KEY_MAX)]; /* keyboard key bitmap */ static unsigned char shift_down[NR_SHIFT]; /* shift state counters.. */ static int dead_key_next; static int npadch = -1; /* -1 or number assembled on pad */ -static unsigned char diacr; +static unsigned int diacr; static char rep; /* flag telling character repeat */ static unsigned char ledstate = 0xff; /* undefined */ @@ -394,22 +394,30 @@ void compute_shiftstate(void) * Otherwise, conclude that DIACR was not combining after all, * queue it and return CH. 
*/ -static unsigned char handle_diacr(struct vc_data *vc, unsigned char ch) +static unsigned int handle_diacr(struct vc_data *vc, unsigned int ch) { - int d = diacr; + unsigned int d = diacr; unsigned int i; diacr = 0; - for (i = 0; i < accent_table_size; i++) { - if (accent_table[i].diacr == d && accent_table[i].base == ch) - return accent_table[i].result; + if ((d & ~0xff) == BRL_UC_ROW) { + if ((ch & ~0xff) == BRL_UC_ROW) + return d | ch; + } else { + for (i = 0; i < accent_table_size; i++) + if (accent_table[i].diacr == d && accent_table[i].base == ch) + return accent_table[i].result; } - if (ch == ' ' || ch == d) + if (ch == ' ' || ch == (BRL_UC_ROW|0) || ch == d) return d; - put_queue(vc, d); + if (kbd->kbdmode == VC_UNICODE) + to_utf8(vc, d); + else if (d < 0x100) + put_queue(vc, d); + return ch; } @@ -419,7 +427,10 @@ static unsigned char handle_diacr(struct vc_data *vc, unsigned char ch) static void fn_enter(struct vc_data *vc, struct pt_regs *regs) { if (diacr) { - put_queue(vc, diacr); + if (kbd->kbdmode == VC_UNICODE) + to_utf8(vc, diacr); + else if (diacr < 0x100) + put_queue(vc, diacr); diacr = 0; } put_queue(vc, 13); @@ -615,7 +626,7 @@ static void k_lowercase(struct vc_data *vc, unsigned char value, char up_flag, s printk(KERN_ERR "keyboard.c: k_lowercase was called - impossible\n"); } -static void k_self(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) +static void k_unicode(struct vc_data *vc, unsigned int value, char up_flag, struct pt_regs *regs) { if (up_flag) return; /* no action, if this is a key release */ @@ -628,7 +639,10 @@ static void k_self(struct vc_data *vc, unsigned char value, char up_flag, struct diacr = value; return; } - put_queue(vc, value); + if (kbd->kbdmode == VC_UNICODE) + to_utf8(vc, value); + else if (value < 0x100) + put_queue(vc, value); } /* @@ -636,13 +650,23 @@ static void k_self(struct vc_data *vc, unsigned char value, char up_flag, struct * dead keys modifying the same character. Very useful * for Vietnamese. */ -static void k_dead2(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) +static void k_deadunicode(struct vc_data *vc, unsigned int value, char up_flag, struct pt_regs *regs) { if (up_flag) return; diacr = (diacr ? 
handle_diacr(vc, value) : value); } +static void k_self(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) +{ + k_unicode(vc, value, up_flag, regs); +} + +static void k_dead2(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) +{ + k_deadunicode(vc, value, up_flag, regs); +} + /* * Obsolete - for backwards compatibility only */ @@ -650,7 +674,7 @@ static void k_dead(struct vc_data *vc, unsigned char value, char up_flag, struct { static unsigned char ret_diacr[NR_DEAD] = {'`', '\'', '^', '~', '"', ',' }; value = ret_diacr[value]; - k_dead2(vc, value, up_flag, regs); + k_deadunicode(vc, value, up_flag, regs); } static void k_cons(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) @@ -835,6 +859,62 @@ static void k_slock(struct vc_data *vc, unsigned char value, char up_flag, struc } } +/* by default, 300ms interval for combination release */ +static long brl_timeout = 300; +MODULE_PARM_DESC(brl_timeout, "Braille keys release delay in ms (0 for combination on first release, < 0 for dead characters)"); +module_param(brl_timeout, long, 0644); +static void k_brl(struct vc_data *vc, unsigned char value, char up_flag, struct pt_regs *regs) +{ + static unsigned pressed,committing; + static unsigned long releasestart; + + if (kbd->kbdmode != VC_UNICODE) { + if (!up_flag) + printk("keyboard mode must be unicode for braille patterns\n"); + return; + } + + if (!value) { + k_unicode(vc, BRL_UC_ROW, up_flag, regs); + return; + } + + if (value > 8) + return; + + if (brl_timeout < 0) { + k_deadunicode(vc, BRL_UC_ROW | (1 << (value - 1)), up_flag, regs); + return; + } + + if (up_flag) { + if (brl_timeout) { + if (!committing || + jiffies - releasestart > (brl_timeout * HZ) / 1000) { + committing = pressed; + releasestart = jiffies; + } + pressed &= ~(1 << (value - 1)); + if (!pressed) { + if (committing) { + k_unicode(vc, BRL_UC_ROW | committing, 0, regs); + committing = 0; + } + } + } else { + if (committing) { + k_unicode(vc, BRL_UC_ROW | committing, 0, regs); + committing = 0; + } + pressed &= ~(1 << (value - 1)); + } + } else { + pressed |= 1 << (value - 1); + if (!brl_timeout) + committing = pressed; + } +} + /* * The leds display either (i) the status of NumLock, CapsLock, ScrollLock, * or (ii) whatever pattern of lights people want to show using KDSETLED, @@ -1125,9 +1205,13 @@ static void kbd_keycode(unsigned int keycode, int down, } if (keycode > NR_KEYS) - return; + if (keycode >= KEY_BRL_DOT1 && keycode <= KEY_BRL_DOT8) + keysym = K(KT_BRL, keycode - KEY_BRL_DOT1 + 1); + else + return; + else + keysym = key_map[keycode]; - keysym = key_map[keycode]; type = KTYP(keysym); if (type < 0xf0) { diff --git a/trunk/drivers/char/stallion.c b/trunk/drivers/char/stallion.c index 3f5d6077f39c..a9c5a7230f89 100644 --- a/trunk/drivers/char/stallion.c +++ b/trunk/drivers/char/stallion.c @@ -504,7 +504,6 @@ static int stl_echmcaintr(stlbrd_t *brdp); static int stl_echpciintr(stlbrd_t *brdp); static int stl_echpci64intr(stlbrd_t *brdp); static void stl_offintr(void *private); -static void *stl_memalloc(int len); static stlbrd_t *stl_allocbrd(void); static stlport_t *stl_getport(int brdnr, int panelnr, int portnr); @@ -939,17 +938,6 @@ static int stl_parsebrd(stlconf_t *confp, char **argp) /*****************************************************************************/ -/* - * Local driver kernel memory allocation routine. 
- */ - -static void *stl_memalloc(int len) -{ - return (void *) kmalloc(len, GFP_KERNEL); -} - -/*****************************************************************************/ - /* * Allocate a new board structure. Fill out the basic info in it. */ @@ -958,14 +946,13 @@ static stlbrd_t *stl_allocbrd(void) { stlbrd_t *brdp; - brdp = (stlbrd_t *) stl_memalloc(sizeof(stlbrd_t)); - if (brdp == (stlbrd_t *) NULL) { + brdp = kzalloc(sizeof(stlbrd_t), GFP_KERNEL); + if (!brdp) { printk("STALLION: failed to allocate memory (size=%d)\n", sizeof(stlbrd_t)); - return (stlbrd_t *) NULL; + return NULL; } - memset(brdp, 0, sizeof(stlbrd_t)); brdp->magic = STL_BOARDMAGIC; return brdp; } @@ -1017,9 +1004,9 @@ static int stl_open(struct tty_struct *tty, struct file *filp) portp->refcount++; if ((portp->flags & ASYNC_INITIALIZED) == 0) { - if (portp->tx.buf == (char *) NULL) { - portp->tx.buf = (char *) stl_memalloc(STL_TXBUFSIZE); - if (portp->tx.buf == (char *) NULL) + if (!portp->tx.buf) { + portp->tx.buf = kmalloc(STL_TXBUFSIZE, GFP_KERNEL); + if (!portp->tx.buf) return -ENOMEM; portp->tx.head = portp->tx.buf; portp->tx.tail = portp->tx.buf; @@ -2178,13 +2165,12 @@ static int __init stl_initports(stlbrd_t *brdp, stlpanel_t *panelp) * each ports data structures. */ for (i = 0; (i < panelp->nrports); i++) { - portp = (stlport_t *) stl_memalloc(sizeof(stlport_t)); - if (portp == (stlport_t *) NULL) { + portp = kzalloc(sizeof(stlport_t), GFP_KERNEL); + if (!portp) { printk("STALLION: failed to allocate memory " "(size=%d)\n", sizeof(stlport_t)); break; } - memset(portp, 0, sizeof(stlport_t)); portp->magic = STL_PORTMAGIC; portp->portnr = i; @@ -2315,13 +2301,12 @@ static inline int stl_initeio(stlbrd_t *brdp) * can complete the setup. */ - panelp = (stlpanel_t *) stl_memalloc(sizeof(stlpanel_t)); - if (panelp == (stlpanel_t *) NULL) { + panelp = kzalloc(sizeof(stlpanel_t), GFP_KERNEL); + if (!panelp) { printk(KERN_WARNING "STALLION: failed to allocate memory " "(size=%d)\n", sizeof(stlpanel_t)); - return(-ENOMEM); + return -ENOMEM; } - memset(panelp, 0, sizeof(stlpanel_t)); panelp->magic = STL_PANELMAGIC; panelp->brdnr = brdp->brdnr; @@ -2490,13 +2475,12 @@ static inline int stl_initech(stlbrd_t *brdp) status = inb(ioaddr + ECH_PNLSTATUS); if ((status & ECH_PNLIDMASK) != nxtid) break; - panelp = (stlpanel_t *) stl_memalloc(sizeof(stlpanel_t)); - if (panelp == (stlpanel_t *) NULL) { + panelp = kzalloc(sizeof(stlpanel_t), GFP_KERNEL); + if (!panelp) { printk("STALLION: failed to allocate memory " "(size=%d)\n", sizeof(stlpanel_t)); break; } - memset(panelp, 0, sizeof(stlpanel_t)); panelp->magic = STL_PANELMAGIC; panelp->brdnr = brdp->brdnr; panelp->panelnr = panelnr; @@ -3074,8 +3058,8 @@ static int __init stl_init(void) /* * Allocate a temporary write buffer. 
*/ - stl_tmpwritebuf = (char *) stl_memalloc(STL_TXBUFSIZE); - if (stl_tmpwritebuf == (char *) NULL) + stl_tmpwritebuf = kmalloc(STL_TXBUFSIZE, GFP_KERNEL); + if (!stl_tmpwritebuf) printk("STALLION: failed to allocate memory (size=%d)\n", STL_TXBUFSIZE); diff --git a/trunk/drivers/char/tty_io.c b/trunk/drivers/char/tty_io.c index 0bfd1b63662e..98b126c2ded8 100644 --- a/trunk/drivers/char/tty_io.c +++ b/trunk/drivers/char/tty_io.c @@ -376,7 +376,7 @@ int tty_insert_flip_string(struct tty_struct *tty, const unsigned char *chars, s return copied; } -EXPORT_SYMBOL_GPL(tty_insert_flip_string); +EXPORT_SYMBOL(tty_insert_flip_string); int tty_insert_flip_string_flags(struct tty_struct *tty, const unsigned char *chars, const char *flags, size_t size) { diff --git a/trunk/drivers/char/vt.c b/trunk/drivers/char/vt.c index ca4844c527da..acc5d47844eb 100644 --- a/trunk/drivers/char/vt.c +++ b/trunk/drivers/char/vt.c @@ -2328,6 +2328,10 @@ int tioclinux(struct tty_struct *tty, unsigned long arg) case TIOCL_SETVESABLANK: set_vesa_blanking(p); break; + case TIOCL_GETKMSGREDIRECT: + data = kmsg_redirect; + ret = __put_user(data, p); + break; case TIOCL_SETKMSGREDIRECT: if (!capable(CAP_SYS_ADMIN)) { ret = -EPERM; diff --git a/trunk/drivers/edac/Kconfig b/trunk/drivers/edac/Kconfig index b582d0cdc24f..4f0898400c6d 100644 --- a/trunk/drivers/edac/Kconfig +++ b/trunk/drivers/edac/Kconfig @@ -71,7 +71,7 @@ config EDAC_E7XXX config EDAC_E752X tristate "Intel e752x (e7520, e7525, e7320)" - depends on EDAC_MM_EDAC && PCI && X86 + depends on EDAC_MM_EDAC && PCI && X86 && HOTPLUG help Support for error detection and correction on the Intel E7520, E7525, E7320 server chipsets. diff --git a/trunk/drivers/ide/ide-disk.c b/trunk/drivers/ide/ide-disk.c index ccf528d733bf..a5017de72da5 100644 --- a/trunk/drivers/ide/ide-disk.c +++ b/trunk/drivers/ide/ide-disk.c @@ -61,6 +61,7 @@ #include #include #include +#include #define _IDE_DISK @@ -317,6 +318,8 @@ static ide_startstop_t ide_do_rw_disk (ide_drive_t *drive, struct request *rq, s return ide_stopped; } + ledtrig_ide_activity(); + pr_debug("%s: %sing: block=%llu, sectors=%lu, buffer=0x%08lx\n", drive->name, rq_data_dir(rq) == READ ? 
"read" : "writ", (unsigned long long)block, rq->nr_sectors, diff --git a/trunk/drivers/ide/ide-taskfile.c b/trunk/drivers/ide/ide-taskfile.c index 0606bd2f6020..9233b8109a0f 100644 --- a/trunk/drivers/ide/ide-taskfile.c +++ b/trunk/drivers/ide/ide-taskfile.c @@ -375,7 +375,13 @@ static void task_end_request(ide_drive_t *drive, struct request *rq, u8 stat) } } - ide_end_request(drive, 1, rq->hard_nr_sectors); + if (rq->rq_disk) { + ide_driver_t *drv; + + drv = *(ide_driver_t **)rq->rq_disk->private_data;; + drv->end_request(drive, 1, rq->hard_nr_sectors); + } else + ide_end_request(drive, 1, rq->hard_nr_sectors); } /* diff --git a/trunk/drivers/input/evbug.c b/trunk/drivers/input/evbug.c index d7828936fd8f..07358fb51b82 100644 --- a/trunk/drivers/input/evbug.c +++ b/trunk/drivers/input/evbug.c @@ -49,9 +49,8 @@ static struct input_handle *evbug_connect(struct input_handler *handler, struct { struct input_handle *handle; - if (!(handle = kmalloc(sizeof(struct input_handle), GFP_KERNEL))) + if (!(handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL))) return NULL; - memset(handle, 0, sizeof(struct input_handle)); handle->dev = dev; handle->handler = handler; diff --git a/trunk/drivers/input/evdev.c b/trunk/drivers/input/evdev.c index 745979f33dc2..a34e3d91d9ed 100644 --- a/trunk/drivers/input/evdev.c +++ b/trunk/drivers/input/evdev.c @@ -130,9 +130,8 @@ static int evdev_open(struct inode * inode, struct file * file) if ((accept_err = input_accept_process(&(evdev_table[i]->handle), file))) return accept_err; - if (!(list = kmalloc(sizeof(struct evdev_list), GFP_KERNEL))) + if (!(list = kzalloc(sizeof(struct evdev_list), GFP_KERNEL))) return -ENOMEM; - memset(list, 0, sizeof(struct evdev_list)); list->evdev = evdev_table[i]; list_add_tail(&list->node, &evdev_table[i]->list); @@ -609,9 +608,8 @@ static struct input_handle *evdev_connect(struct input_handler *handler, struct return NULL; } - if (!(evdev = kmalloc(sizeof(struct evdev), GFP_KERNEL))) + if (!(evdev = kzalloc(sizeof(struct evdev), GFP_KERNEL))) return NULL; - memset(evdev, 0, sizeof(struct evdev)); INIT_LIST_HEAD(&evdev->list); init_waitqueue_head(&evdev->wait); diff --git a/trunk/drivers/input/gameport/gameport.c b/trunk/drivers/input/gameport/gameport.c index b765a155c008..36644bff379d 100644 --- a/trunk/drivers/input/gameport/gameport.c +++ b/trunk/drivers/input/gameport/gameport.c @@ -22,6 +22,7 @@ #include #include #include /* HZ */ +#include /*#include */ @@ -43,10 +44,10 @@ EXPORT_SYMBOL(gameport_start_polling); EXPORT_SYMBOL(gameport_stop_polling); /* - * gameport_sem protects entire gameport subsystem and is taken + * gameport_mutex protects entire gameport subsystem and is taken * every time gameport port or driver registrered or unregistered. 
*/ -static DECLARE_MUTEX(gameport_sem); +static DEFINE_MUTEX(gameport_mutex); static LIST_HEAD(gameport_list); @@ -265,6 +266,7 @@ static void gameport_queue_event(void *object, struct module *owner, if ((event = kmalloc(sizeof(struct gameport_event), GFP_ATOMIC))) { if (!try_module_get(owner)) { printk(KERN_WARNING "gameport: Can't get module reference, dropping event %d\n", event_type); + kfree(event); goto out; } @@ -342,7 +344,7 @@ static void gameport_handle_event(void) struct gameport_event *event; struct gameport_driver *gameport_drv; - down(&gameport_sem); + mutex_lock(&gameport_mutex); /* * Note that we handle only one event here to give swsusp @@ -379,7 +381,7 @@ static void gameport_handle_event(void) gameport_free_event(event); } - up(&gameport_sem); + mutex_unlock(&gameport_mutex); } /* @@ -464,7 +466,7 @@ static ssize_t gameport_rebind_driver(struct device *dev, struct device_attribut struct device_driver *drv; int retval; - retval = down_interruptible(&gameport_sem); + retval = mutex_lock_interruptible(&gameport_mutex); if (retval) return retval; @@ -484,7 +486,7 @@ static ssize_t gameport_rebind_driver(struct device *dev, struct device_attribut retval = -EINVAL; } - up(&gameport_sem); + mutex_unlock(&gameport_mutex); return retval; } @@ -521,7 +523,7 @@ static void gameport_init_port(struct gameport *gameport) __module_get(THIS_MODULE); - init_MUTEX(&gameport->drv_sem); + mutex_init(&gameport->drv_mutex); device_initialize(&gameport->dev); snprintf(gameport->dev.bus_id, sizeof(gameport->dev.bus_id), "gameport%lu", (unsigned long)atomic_inc_return(&gameport_no) - 1); @@ -661,10 +663,10 @@ void __gameport_register_port(struct gameport *gameport, struct module *owner) */ void gameport_unregister_port(struct gameport *gameport) { - down(&gameport_sem); + mutex_lock(&gameport_mutex); gameport_disconnect_port(gameport); gameport_destroy_port(gameport); - up(&gameport_sem); + mutex_unlock(&gameport_mutex); } @@ -717,7 +719,7 @@ void gameport_unregister_driver(struct gameport_driver *drv) { struct gameport *gameport; - down(&gameport_sem); + mutex_lock(&gameport_mutex); drv->ignore = 1; /* so gameport_find_driver ignores it */ start_over: @@ -731,7 +733,7 @@ void gameport_unregister_driver(struct gameport_driver *drv) } driver_unregister(&drv->driver); - up(&gameport_sem); + mutex_unlock(&gameport_mutex); } static int gameport_bus_match(struct device *dev, struct device_driver *drv) @@ -743,9 +745,9 @@ static int gameport_bus_match(struct device *dev, struct device_driver *drv) static void gameport_set_drv(struct gameport *gameport, struct gameport_driver *drv) { - down(&gameport->drv_sem); + mutex_lock(&gameport->drv_mutex); gameport->drv = drv; - up(&gameport->drv_sem); + mutex_unlock(&gameport->drv_mutex); } int gameport_open(struct gameport *gameport, struct gameport_driver *drv, int mode) @@ -796,5 +798,5 @@ static void __exit gameport_exit(void) kthread_stop(gameport_task); } -module_init(gameport_init); +subsys_initcall(gameport_init); module_exit(gameport_exit); diff --git a/trunk/drivers/input/gameport/ns558.c b/trunk/drivers/input/gameport/ns558.c index d2e55dc956ba..3e2d28f263e9 100644 --- a/trunk/drivers/input/gameport/ns558.c +++ b/trunk/drivers/input/gameport/ns558.c @@ -252,14 +252,14 @@ static struct pnp_driver ns558_pnp_driver; #endif -static int pnp_registered = 0; - static int __init ns558_init(void) { int i = 0; + int error; - if (pnp_register_driver(&ns558_pnp_driver) >= 0) - pnp_registered = 1; + error = pnp_register_driver(&ns558_pnp_driver); + if (error && 
error != -ENODEV) /* should be ENOSYS really */ + return error; /* * Probe ISA ports after PnP, so that PnP ports that are already @@ -270,7 +270,7 @@ static int __init ns558_init(void) while (ns558_isa_portlist[i]) ns558_isa_probe(ns558_isa_portlist[i++]); - return (list_empty(&ns558_list) && !pnp_registered) ? -ENODEV : 0; + return list_empty(&ns558_list) && error ? -ENODEV : 0; } static void __exit ns558_exit(void) @@ -283,8 +283,7 @@ static void __exit ns558_exit(void) kfree(ns558); } - if (pnp_registered) - pnp_unregister_driver(&ns558_pnp_driver); + pnp_unregister_driver(&ns558_pnp_driver); } module_init(ns558_init); diff --git a/trunk/drivers/input/input.c b/trunk/drivers/input/input.c index f8af0945964e..a935abeffffc 100644 --- a/trunk/drivers/input/input.c +++ b/trunk/drivers/input/input.c @@ -18,9 +18,11 @@ #include #include #include +#include #include #include #include +#include MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("Input core"); @@ -224,7 +226,7 @@ int input_open_device(struct input_handle *handle) struct input_dev *dev = handle->dev; int err; - err = down_interruptible(&dev->sem); + err = mutex_lock_interruptible(&dev->mutex); if (err) return err; @@ -236,7 +238,7 @@ int input_open_device(struct input_handle *handle) if (err) handle->open--; - up(&dev->sem); + mutex_unlock(&dev->mutex); return err; } @@ -255,13 +257,13 @@ void input_close_device(struct input_handle *handle) input_release_device(handle); - down(&dev->sem); + mutex_lock(&dev->mutex); if (!--dev->users && dev->close) dev->close(dev); handle->open--; - up(&dev->sem); + mutex_unlock(&dev->mutex); } static void input_link_handle(struct input_handle *handle) @@ -315,21 +317,6 @@ static struct input_device_id *input_match_device(struct input_device_id *id, st return NULL; } -static int input_print_bitmap(char *buf, int buf_size, unsigned long *bitmap, int max) -{ - int i; - int len = 0; - - for (i = NBITS(max) - 1; i > 0; i--) - if (bitmap[i]) - break; - - for (; i >= 0; i--) - len += snprintf(buf + len, max(buf_size - len, 0), - "%lx%s", bitmap[i], i > 0 ? 
" " : ""); - return len; -} - #ifdef CONFIG_PROC_FS static struct proc_dir_entry *proc_bus_input_dir; @@ -342,7 +329,7 @@ static inline void input_wakeup_procfs_readers(void) wake_up(&input_devices_poll_wait); } -static unsigned int input_devices_poll(struct file *file, poll_table *wait) +static unsigned int input_proc_devices_poll(struct file *file, poll_table *wait) { int state = input_devices_state; poll_wait(file, &input_devices_poll_wait, wait); @@ -351,115 +338,171 @@ static unsigned int input_devices_poll(struct file *file, poll_table *wait) return 0; } -#define SPRINTF_BIT(ev, bm) \ - do { \ - len += sprintf(buf + len, "B: %s=", #ev); \ - len += input_print_bitmap(buf + len, INT_MAX, \ - dev->bm##bit, ev##_MAX); \ - len += sprintf(buf + len, "\n"); \ - } while (0) +static struct list_head *list_get_nth_element(struct list_head *list, loff_t *pos) +{ + struct list_head *node; + loff_t i = 0; -#define TEST_AND_SPRINTF_BIT(ev, bm) \ - do { \ - if (test_bit(EV_##ev, dev->evbit)) \ - SPRINTF_BIT(ev, bm); \ - } while (0) + list_for_each(node, list) + if (i++ == *pos) + return node; + + return NULL; +} -static int input_devices_read(char *buf, char **start, off_t pos, int count, int *eof, void *data) +static struct list_head *list_get_next_element(struct list_head *list, struct list_head *element, loff_t *pos) { - struct input_dev *dev; - struct input_handle *handle; - const char *path; + if (element->next == list) + return NULL; + + ++(*pos); + return element->next; +} + +static void *input_devices_seq_start(struct seq_file *seq, loff_t *pos) +{ + /* acquire lock here ... Yes, we do need locking, I knowi, I know... */ + + return list_get_nth_element(&input_dev_list, pos); +} - off_t at = 0; - int len, cnt = 0; +static void *input_devices_seq_next(struct seq_file *seq, void *v, loff_t *pos) +{ + return list_get_next_element(&input_dev_list, v, pos); +} - list_for_each_entry(dev, &input_dev_list, node) { +static void input_devices_seq_stop(struct seq_file *seq, void *v) +{ + /* release lock here */ +} - path = kobject_get_path(&dev->cdev.kobj, GFP_KERNEL); +static void input_seq_print_bitmap(struct seq_file *seq, const char *name, + unsigned long *bitmap, int max) +{ + int i; - len = sprintf(buf, "I: Bus=%04x Vendor=%04x Product=%04x Version=%04x\n", - dev->id.bustype, dev->id.vendor, dev->id.product, dev->id.version); + for (i = NBITS(max) - 1; i > 0; i--) + if (bitmap[i]) + break; - len += sprintf(buf + len, "N: Name=\"%s\"\n", dev->name ? dev->name : ""); - len += sprintf(buf + len, "P: Phys=%s\n", dev->phys ? dev->phys : ""); - len += sprintf(buf + len, "S: Sysfs=%s\n", path ? path : ""); - len += sprintf(buf + len, "H: Handlers="); + seq_printf(seq, "B: %s=", name); + for (; i >= 0; i--) + seq_printf(seq, "%lx%s", bitmap[i], i > 0 ? 
" " : ""); + seq_putc(seq, '\n'); +} - list_for_each_entry(handle, &dev->h_list, d_node) - len += sprintf(buf + len, "%s ", handle->name); - - len += sprintf(buf + len, "\n"); - - SPRINTF_BIT(EV, ev); - TEST_AND_SPRINTF_BIT(KEY, key); - TEST_AND_SPRINTF_BIT(REL, rel); - TEST_AND_SPRINTF_BIT(ABS, abs); - TEST_AND_SPRINTF_BIT(MSC, msc); - TEST_AND_SPRINTF_BIT(LED, led); - TEST_AND_SPRINTF_BIT(SND, snd); - TEST_AND_SPRINTF_BIT(FF, ff); - TEST_AND_SPRINTF_BIT(SW, sw); - - len += sprintf(buf + len, "\n"); - - at += len; - - if (at >= pos) { - if (!*start) { - *start = buf + (pos - (at - len)); - cnt = at - pos; - } else cnt += len; - buf += len; - if (cnt >= count) - break; - } +static int input_devices_seq_show(struct seq_file *seq, void *v) +{ + struct input_dev *dev = container_of(v, struct input_dev, node); + const char *path = kobject_get_path(&dev->cdev.kobj, GFP_KERNEL); + struct input_handle *handle; - kfree(path); - } + seq_printf(seq, "I: Bus=%04x Vendor=%04x Product=%04x Version=%04x\n", + dev->id.bustype, dev->id.vendor, dev->id.product, dev->id.version); - if (&dev->node == &input_dev_list) - *eof = 1; + seq_printf(seq, "N: Name=\"%s\"\n", dev->name ? dev->name : ""); + seq_printf(seq, "P: Phys=%s\n", dev->phys ? dev->phys : ""); + seq_printf(seq, "S: Sysfs=%s\n", path ? path : ""); + seq_printf(seq, "H: Handlers="); - return (count > cnt) ? cnt : count; + list_for_each_entry(handle, &dev->h_list, d_node) + seq_printf(seq, "%s ", handle->name); + seq_putc(seq, '\n'); + + input_seq_print_bitmap(seq, "EV", dev->evbit, EV_MAX); + if (test_bit(EV_KEY, dev->evbit)) + input_seq_print_bitmap(seq, "KEY", dev->keybit, KEY_MAX); + if (test_bit(EV_REL, dev->evbit)) + input_seq_print_bitmap(seq, "REL", dev->relbit, REL_MAX); + if (test_bit(EV_ABS, dev->evbit)) + input_seq_print_bitmap(seq, "ABS", dev->absbit, ABS_MAX); + if (test_bit(EV_MSC, dev->evbit)) + input_seq_print_bitmap(seq, "MSC", dev->mscbit, MSC_MAX); + if (test_bit(EV_LED, dev->evbit)) + input_seq_print_bitmap(seq, "LED", dev->ledbit, LED_MAX); + if (test_bit(EV_SND, dev->evbit)) + input_seq_print_bitmap(seq, "SND", dev->sndbit, SND_MAX); + if (test_bit(EV_FF, dev->evbit)) + input_seq_print_bitmap(seq, "FF", dev->ffbit, FF_MAX); + if (test_bit(EV_SW, dev->evbit)) + input_seq_print_bitmap(seq, "SW", dev->swbit, SW_MAX); + + seq_putc(seq, '\n'); + + kfree(path); + return 0; } -static int input_handlers_read(char *buf, char **start, off_t pos, int count, int *eof, void *data) +static struct seq_operations input_devices_seq_ops = { + .start = input_devices_seq_start, + .next = input_devices_seq_next, + .stop = input_devices_seq_stop, + .show = input_devices_seq_show, +}; + +static int input_proc_devices_open(struct inode *inode, struct file *file) { - struct input_handler *handler; + return seq_open(file, &input_devices_seq_ops); +} - off_t at = 0; - int len = 0, cnt = 0; - int i = 0; +static struct file_operations input_devices_fileops = { + .owner = THIS_MODULE, + .open = input_proc_devices_open, + .poll = input_proc_devices_poll, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; - list_for_each_entry(handler, &input_handler_list, node) { +static void *input_handlers_seq_start(struct seq_file *seq, loff_t *pos) +{ + /* acquire lock here ... Yes, we do need locking, I knowi, I know... 
*/ + seq->private = (void *)(unsigned long)*pos; + return list_get_nth_element(&input_handler_list, pos); +} + +static void *input_handlers_seq_next(struct seq_file *seq, void *v, loff_t *pos) +{ + seq->private = (void *)(unsigned long)(*pos + 1); + return list_get_next_element(&input_handler_list, v, pos); +} - if (handler->fops) - len = sprintf(buf, "N: Number=%d Name=%s Minor=%d\n", - i++, handler->name, handler->minor); - else - len = sprintf(buf, "N: Number=%d Name=%s\n", - i++, handler->name); +static void input_handlers_seq_stop(struct seq_file *seq, void *v) +{ + /* release lock here */ +} - at += len; +static int input_handlers_seq_show(struct seq_file *seq, void *v) +{ + struct input_handler *handler = container_of(v, struct input_handler, node); - if (at >= pos) { - if (!*start) { - *start = buf + (pos - (at - len)); - cnt = at - pos; - } else cnt += len; - buf += len; - if (cnt >= count) - break; - } - } - if (&handler->node == &input_handler_list) - *eof = 1; + seq_printf(seq, "N: Number=%ld Name=%s", + (unsigned long)seq->private, handler->name); + if (handler->fops) + seq_printf(seq, " Minor=%d", handler->minor); + seq_putc(seq, '\n'); - return (count > cnt) ? cnt : count; + return 0; } +static struct seq_operations input_handlers_seq_ops = { + .start = input_handlers_seq_start, + .next = input_handlers_seq_next, + .stop = input_handlers_seq_stop, + .show = input_handlers_seq_show, +}; -static struct file_operations input_fileops; +static int input_proc_handlers_open(struct inode *inode, struct file *file) +{ + return seq_open(file, &input_handlers_seq_ops); +} + +static struct file_operations input_handlers_fileops = { + .owner = THIS_MODULE, + .open = input_proc_handlers_open, + .read = seq_read, + .llseek = seq_lseek, + .release = seq_release, +}; static int __init input_proc_init(void) { @@ -471,20 +514,19 @@ static int __init input_proc_init(void) proc_bus_input_dir->owner = THIS_MODULE; - entry = create_proc_read_entry("devices", 0, proc_bus_input_dir, input_devices_read, NULL); + entry = create_proc_entry("devices", 0, proc_bus_input_dir); if (!entry) goto fail1; entry->owner = THIS_MODULE; - input_fileops = *entry->proc_fops; - input_fileops.poll = input_devices_poll; - entry->proc_fops = &input_fileops; + entry->proc_fops = &input_devices_fileops; - entry = create_proc_read_entry("handlers", 0, proc_bus_input_dir, input_handlers_read, NULL); + entry = create_proc_entry("handlers", 0, proc_bus_input_dir); if (!entry) goto fail2; entry->owner = THIS_MODULE; + entry->proc_fops = &input_handlers_fileops; return 0; @@ -512,13 +554,14 @@ static ssize_t input_dev_show_##name(struct class_device *dev, char *buf) \ struct input_dev *input_dev = to_input_dev(dev); \ int retval; \ \ - retval = down_interruptible(&input_dev->sem); \ + retval = mutex_lock_interruptible(&input_dev->mutex); \ if (retval) \ return retval; \ \ - retval = sprintf(buf, "%s\n", input_dev->name ? input_dev->name : ""); \ + retval = scnprintf(buf, PAGE_SIZE, \ + "%s\n", input_dev->name ? 
input_dev->name : ""); \ \ - up(&input_dev->sem); \ + mutex_unlock(&input_dev->mutex); \ \ return retval; \ } \ @@ -528,46 +571,51 @@ INPUT_DEV_STRING_ATTR_SHOW(name); INPUT_DEV_STRING_ATTR_SHOW(phys); INPUT_DEV_STRING_ATTR_SHOW(uniq); -static int print_modalias_bits(char *buf, int size, char prefix, unsigned long *arr, - unsigned int min, unsigned int max) +static int input_print_modalias_bits(char *buf, int size, + char name, unsigned long *bm, + unsigned int min_bit, unsigned int max_bit) { - int len, i; + int len = 0, i; - len = snprintf(buf, size, "%c", prefix); - for (i = min; i < max; i++) - if (arr[LONG(i)] & BIT(i)) - len += snprintf(buf + len, size - len, "%X,", i); + len += snprintf(buf, max(size, 0), "%c", name); + for (i = min_bit; i < max_bit; i++) + if (bm[LONG(i)] & BIT(i)) + len += snprintf(buf + len, max(size - len, 0), "%X,", i); return len; } -static int print_modalias(char *buf, int size, struct input_dev *id) +static int input_print_modalias(char *buf, int size, struct input_dev *id, + int add_cr) { int len; - len = snprintf(buf, size, "input:b%04Xv%04Xp%04Xe%04X-", - id->id.bustype, - id->id.vendor, - id->id.product, - id->id.version); - - len += print_modalias_bits(buf + len, size - len, 'e', id->evbit, - 0, EV_MAX); - len += print_modalias_bits(buf + len, size - len, 'k', id->keybit, - KEY_MIN_INTERESTING, KEY_MAX); - len += print_modalias_bits(buf + len, size - len, 'r', id->relbit, - 0, REL_MAX); - len += print_modalias_bits(buf + len, size - len, 'a', id->absbit, - 0, ABS_MAX); - len += print_modalias_bits(buf + len, size - len, 'm', id->mscbit, - 0, MSC_MAX); - len += print_modalias_bits(buf + len, size - len, 'l', id->ledbit, - 0, LED_MAX); - len += print_modalias_bits(buf + len, size - len, 's', id->sndbit, - 0, SND_MAX); - len += print_modalias_bits(buf + len, size - len, 'f', id->ffbit, - 0, FF_MAX); - len += print_modalias_bits(buf + len, size - len, 'w', id->swbit, - 0, SW_MAX); + len = snprintf(buf, max(size, 0), + "input:b%04Xv%04Xp%04Xe%04X-", + id->id.bustype, id->id.vendor, + id->id.product, id->id.version); + + len += input_print_modalias_bits(buf + len, size - len, + 'e', id->evbit, 0, EV_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'k', id->keybit, KEY_MIN_INTERESTING, KEY_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'r', id->relbit, 0, REL_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'a', id->absbit, 0, ABS_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'm', id->mscbit, 0, MSC_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'l', id->ledbit, 0, LED_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 's', id->sndbit, 0, SND_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'f', id->ffbit, 0, FF_MAX); + len += input_print_modalias_bits(buf + len, size - len, + 'w', id->swbit, 0, SW_MAX); + + if (add_cr) + len += snprintf(buf + len, max(size - len, 0), "\n"); + return len; } @@ -576,9 +624,9 @@ static ssize_t input_dev_show_modalias(struct class_device *dev, char *buf) struct input_dev *id = to_input_dev(dev); ssize_t len; - len = print_modalias(buf, PAGE_SIZE, id); - len += snprintf(buf + len, PAGE_SIZE-len, "\n"); - return len; + len = input_print_modalias(buf, PAGE_SIZE, id, 1); + + return max_t(int, len, PAGE_SIZE); } static CLASS_DEVICE_ATTR(modalias, S_IRUGO, input_dev_show_modalias, NULL); @@ -598,7 +646,7 @@ static struct attribute_group input_dev_attr_group = { static ssize_t input_dev_show_id_##name(struct 
class_device *dev, char *buf) \ { \ struct input_dev *input_dev = to_input_dev(dev); \ - return sprintf(buf, "%04x\n", input_dev->id.name); \ + return scnprintf(buf, PAGE_SIZE, "%04x\n", input_dev->id.name); \ } \ static CLASS_DEVICE_ATTR(name, S_IRUGO, input_dev_show_id_##name, NULL); @@ -620,11 +668,33 @@ static struct attribute_group input_dev_id_attr_group = { .attrs = input_dev_id_attrs, }; +static int input_print_bitmap(char *buf, int buf_size, unsigned long *bitmap, + int max, int add_cr) +{ + int i; + int len = 0; + + for (i = NBITS(max) - 1; i > 0; i--) + if (bitmap[i]) + break; + + for (; i >= 0; i--) + len += snprintf(buf + len, max(buf_size - len, 0), + "%lx%s", bitmap[i], i > 0 ? " " : ""); + + if (add_cr) + len += snprintf(buf + len, max(buf_size - len, 0), "\n"); + + return len; +} + #define INPUT_DEV_CAP_ATTR(ev, bm) \ static ssize_t input_dev_show_cap_##bm(struct class_device *dev, char *buf) \ { \ struct input_dev *input_dev = to_input_dev(dev); \ - return input_print_bitmap(buf, PAGE_SIZE, input_dev->bm##bit, ev##_MAX);\ + int len = input_print_bitmap(buf, PAGE_SIZE, \ + input_dev->bm##bit, ev##_MAX, 1); \ + return min_t(int, len, PAGE_SIZE); \ } \ static CLASS_DEVICE_ATTR(bm, S_IRUGO, input_dev_show_cap_##bm, NULL); @@ -669,8 +739,8 @@ static void input_dev_release(struct class_device *class_dev) * device bitfields. */ static int input_add_uevent_bm_var(char **envp, int num_envp, int *cur_index, - char *buffer, int buffer_size, int *cur_len, - const char *name, unsigned long *bitmap, int max) + char *buffer, int buffer_size, int *cur_len, + const char *name, unsigned long *bitmap, int max) { if (*cur_index >= num_envp - 1) return -ENOMEM; @@ -678,12 +748,36 @@ static int input_add_uevent_bm_var(char **envp, int num_envp, int *cur_index, envp[*cur_index] = buffer + *cur_len; *cur_len += snprintf(buffer + *cur_len, max(buffer_size - *cur_len, 0), name); - if (*cur_len > buffer_size) + if (*cur_len >= buffer_size) return -ENOMEM; *cur_len += input_print_bitmap(buffer + *cur_len, max(buffer_size - *cur_len, 0), - bitmap, max) + 1; + bitmap, max, 0) + 1; + if (*cur_len > buffer_size) + return -ENOMEM; + + (*cur_index)++; + return 0; +} + +static int input_add_uevent_modalias_var(char **envp, int num_envp, int *cur_index, + char *buffer, int buffer_size, int *cur_len, + struct input_dev *dev) +{ + if (*cur_index >= num_envp - 1) + return -ENOMEM; + + envp[*cur_index] = buffer + *cur_len; + + *cur_len += snprintf(buffer + *cur_len, max(buffer_size - *cur_len, 0), + "MODALIAS="); + if (*cur_len >= buffer_size) + return -ENOMEM; + + *cur_len += input_print_modalias(buffer + *cur_len, + max(buffer_size - *cur_len, 0), + dev, 0) + 1; if (*cur_len > buffer_size) return -ENOMEM; @@ -693,7 +787,7 @@ static int input_add_uevent_bm_var(char **envp, int num_envp, int *cur_index, #define INPUT_ADD_HOTPLUG_VAR(fmt, val...) 
\ do { \ - int err = add_uevent_var(envp, num_envp, &i, \ + int err = add_uevent_var(envp, num_envp, &i, \ buffer, buffer_size, &len, \ fmt, val); \ if (err) \ @@ -709,6 +803,16 @@ static int input_add_uevent_bm_var(char **envp, int num_envp, int *cur_index, return err; \ } while (0) +#define INPUT_ADD_HOTPLUG_MODALIAS_VAR(dev) \ + do { \ + int err = input_add_uevent_modalias_var(envp, \ + num_envp, &i, \ + buffer, buffer_size, &len, \ + dev); \ + if (err) \ + return err; \ + } while (0) + static int input_dev_uevent(struct class_device *cdev, char **envp, int num_envp, char *buffer, int buffer_size) { @@ -744,9 +848,7 @@ static int input_dev_uevent(struct class_device *cdev, char **envp, if (test_bit(EV_SW, dev->evbit)) INPUT_ADD_HOTPLUG_BM_VAR("SW=", dev->swbit, SW_MAX); - envp[i++] = buffer + len; - len += snprintf(buffer + len, buffer_size - len, "MODALIAS="); - len += print_modalias(buffer + len, buffer_size - len, dev) + 1; + INPUT_ADD_HOTPLUG_MODALIAS_VAR(dev); envp[i] = NULL; return 0; @@ -790,7 +892,7 @@ int input_register_device(struct input_dev *dev) return -EINVAL; } - init_MUTEX(&dev->sem); + mutex_init(&dev->mutex); set_bit(EV_SYN, dev->evbit); /* diff --git a/trunk/drivers/input/joydev.c b/trunk/drivers/input/joydev.c index 20e2972b9204..949bdcef8c2b 100644 --- a/trunk/drivers/input/joydev.c +++ b/trunk/drivers/input/joydev.c @@ -171,9 +171,8 @@ static int joydev_open(struct inode *inode, struct file *file) if (i >= JOYDEV_MINORS || !joydev_table[i]) return -ENODEV; - if (!(list = kmalloc(sizeof(struct joydev_list), GFP_KERNEL))) + if (!(list = kzalloc(sizeof(struct joydev_list), GFP_KERNEL))) return -ENOMEM; - memset(list, 0, sizeof(struct joydev_list)); list->joydev = joydev_table[i]; list_add_tail(&list->node, &joydev_table[i]->list); @@ -457,9 +456,8 @@ static struct input_handle *joydev_connect(struct input_handler *handler, struct return NULL; } - if (!(joydev = kmalloc(sizeof(struct joydev), GFP_KERNEL))) + if (!(joydev = kzalloc(sizeof(struct joydev), GFP_KERNEL))) return NULL; - memset(joydev, 0, sizeof(struct joydev)); INIT_LIST_HEAD(&joydev->list); init_waitqueue_head(&joydev->wait); diff --git a/trunk/drivers/input/joystick/amijoy.c b/trunk/drivers/input/joystick/amijoy.c index ec55a29fc861..7249d324297b 100644 --- a/trunk/drivers/input/joystick/amijoy.c +++ b/trunk/drivers/input/joystick/amijoy.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -52,7 +53,7 @@ MODULE_PARM_DESC(map, "Map of attached joysticks in form of , (default is __obsolete_setup("amijoy="); static int amijoy_used; -static DECLARE_MUTEX(amijoy_sem); +static DEFINE_MUTEX(amijoy_mutex); static struct input_dev *amijoy_dev[2]; static char *amijoy_phys[2] = { "amijoy/input0", "amijoy/input1" }; @@ -85,7 +86,7 @@ static int amijoy_open(struct input_dev *dev) { int err; - err = down_interruptible(&amijoy_sem); + err = mutex_lock_interruptible(&amijoy_mutex); if (err) return err; @@ -97,16 +98,16 @@ static int amijoy_open(struct input_dev *dev) amijoy_used++; out: - up(&amijoy_sem); + mutex_unlock(&amijoy_mutex); return err; } static void amijoy_close(struct input_dev *dev) { - down(&amijoy_sem); + mutex_lock(&amijoy_mutex); if (!--amijoy_used) free_irq(IRQ_AMIGA_VERTB, amijoy_interrupt); - up(&amijoy_sem); + mutex_unlock(&amijoy_mutex); } static int __init amijoy_init(void) diff --git a/trunk/drivers/input/joystick/db9.c b/trunk/drivers/input/joystick/db9.c index dcffc34f30c3..e61894685cb1 100644 --- a/trunk/drivers/input/joystick/db9.c +++ 
b/trunk/drivers/input/joystick/db9.c @@ -38,6 +38,7 @@ #include #include #include +#include MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("Atari, Amstrad, Commodore, Amiga, Sega, etc. joystick driver"); @@ -111,7 +112,7 @@ struct db9 { struct pardevice *pd; int mode; int used; - struct semaphore sem; + struct mutex mutex; char phys[DB9_MAX_DEVICES][32]; }; @@ -525,7 +526,7 @@ static int db9_open(struct input_dev *dev) struct parport *port = db9->pd->port; int err; - err = down_interruptible(&db9->sem); + err = mutex_lock_interruptible(&db9->mutex); if (err) return err; @@ -539,7 +540,7 @@ static int db9_open(struct input_dev *dev) mod_timer(&db9->timer, jiffies + DB9_REFRESH_TIME); } - up(&db9->sem); + mutex_unlock(&db9->mutex); return 0; } @@ -548,14 +549,14 @@ static void db9_close(struct input_dev *dev) struct db9 *db9 = dev->private; struct parport *port = db9->pd->port; - down(&db9->sem); + mutex_lock(&db9->mutex); if (!--db9->used) { del_timer_sync(&db9->timer); parport_write_control(port, 0x00); parport_data_forward(port); parport_release(db9->pd); } - up(&db9->sem); + mutex_unlock(&db9->mutex); } static struct db9 __init *db9_probe(int parport, int mode) @@ -603,7 +604,7 @@ static struct db9 __init *db9_probe(int parport, int mode) goto err_unreg_pardev; } - init_MUTEX(&db9->sem); + mutex_init(&db9->mutex); db9->pd = pd; db9->mode = mode; init_timer(&db9->timer); diff --git a/trunk/drivers/input/joystick/gamecon.c b/trunk/drivers/input/joystick/gamecon.c index 900587acdb47..ecbdb6b9bbd6 100644 --- a/trunk/drivers/input/joystick/gamecon.c +++ b/trunk/drivers/input/joystick/gamecon.c @@ -7,6 +7,7 @@ * Based on the work of: * Andree Borrmann John Dahlstrom * David Kuder Nathan Hand + * Raphael Assenat */ /* @@ -36,6 +37,7 @@ #include #include #include +#include MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("NES, SNES, N64, MultiSystem, PSX gamepad driver"); @@ -72,8 +74,9 @@ __obsolete_setup("gc_3="); #define GC_N64 6 #define GC_PSX 7 #define GC_DDR 8 +#define GC_SNESMOUSE 9 -#define GC_MAX 8 +#define GC_MAX 9 #define GC_REFRESH_TIME HZ/100 @@ -83,7 +86,7 @@ struct gc { struct timer_list timer; unsigned char pads[GC_MAX + 1]; int used; - struct semaphore sem; + struct mutex mutex; char phys[GC_MAX_DEVICES][32]; }; @@ -93,7 +96,7 @@ static int gc_status_bit[] = { 0x40, 0x80, 0x20, 0x10, 0x08 }; static char *gc_names[] = { NULL, "SNES pad", "NES pad", "NES FourPort", "Multisystem joystick", "Multisystem 2-button joystick", "N64 controller", "PSX controller", - "PSX DDR controller" }; + "PSX DDR controller", "SNES mouse" }; /* * N64 support. */ @@ -205,9 +208,12 @@ static void gc_n64_process_packet(struct gc *gc) * NES/SNES support. 
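 *
 * A sketch of how the SNES mouse packet is decoded by
 * gc_nes_process_packet() below (the layout here is read off that code):
 * the mouse clocks out 32 bits instead of the pad's 8 or 12, bits 12-14
 * clear with bit 15 set identify a mouse, bit 8 is the right button and
 * bit 9 the left, and each axis is a sign bit followed by a 7-bit
 * magnitude, most significant bit first:
 *
 *	y_rel = 0;
 *	for (j = 0; j < 7; j++)
 *		y_rel = (y_rel << 1) | !!(data[17 + j] & s);
 *	if (data[16] & s)
 *		y_rel = -y_rel;
 *
 * Bits 24-31 encode X the same way, with data[24] as the sign bit; 's'
 * is the status bit of the pad being decoded.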
*/ -#define GC_NES_DELAY 6 /* Delay between bits - 6us */ -#define GC_NES_LENGTH 8 /* The NES pads use 8 bits of data */ -#define GC_SNES_LENGTH 12 /* The SNES true length is 16, but the last 4 bits are unused */ +#define GC_NES_DELAY 6 /* Delay between bits - 6us */ +#define GC_NES_LENGTH 8 /* The NES pads use 8 bits of data */ +#define GC_SNES_LENGTH 12 /* The SNES true length is 16, but the + last 4 bits are unused */ +#define GC_SNESMOUSE_LENGTH 32 /* The SNES mouse uses 32 bits, the first + 16 bits are equivalent to a gamepad */ #define GC_NES_POWER 0xfc #define GC_NES_CLOCK 0x01 @@ -242,11 +248,15 @@ static void gc_nes_read_packet(struct gc *gc, int length, unsigned char *data) static void gc_nes_process_packet(struct gc *gc) { - unsigned char data[GC_SNES_LENGTH]; + unsigned char data[GC_SNESMOUSE_LENGTH]; struct input_dev *dev; - int i, j, s; + int i, j, s, len; + char x_rel, y_rel; + + len = gc->pads[GC_SNESMOUSE] ? GC_SNESMOUSE_LENGTH : + (gc->pads[GC_SNES] ? GC_SNES_LENGTH : GC_NES_LENGTH); - gc_nes_read_packet(gc, gc->pads[GC_SNES] ? GC_SNES_LENGTH : GC_NES_LENGTH, data); + gc_nes_read_packet(gc, len, data); for (i = 0; i < GC_MAX_DEVICES; i++) { @@ -269,6 +279,44 @@ static void gc_nes_process_packet(struct gc *gc) for (j = 0; j < 8; j++) input_report_key(dev, gc_snes_btn[j], s & data[gc_snes_bytes[j]]); + if (s & gc->pads[GC_SNESMOUSE]) { + /* + * The 4 unused bits from SNES controllers appear to be ID bits + * so use them to make sure iwe are dealing with a mouse. + * gamepad is connected. This is important since + * my SNES gamepad sends 1's for bits 16-31, which + * cause the mouse pointer to quickly move to the + * upper left corner of the screen. + */ + if (!(s & data[12]) && !(s & data[13]) && + !(s & data[14]) && (s & data[15])) { + input_report_key(dev, BTN_LEFT, s & data[9]); + input_report_key(dev, BTN_RIGHT, s & data[8]); + + x_rel = y_rel = 0; + for (j = 0; j < 7; j++) { + x_rel <<= 1; + if (data[25 + j] & s) + x_rel |= 1; + + y_rel <<= 1; + if (data[17 + j] & s) + y_rel |= 1; + } + + if (x_rel) { + if (data[24] & s) + x_rel = -x_rel; + input_report_rel(dev, REL_X, x_rel); + } + + if (y_rel) { + if (data[16] & s) + y_rel = -y_rel; + input_report_rel(dev, REL_Y, y_rel); + } + } + } input_sync(dev); } } @@ -524,10 +572,10 @@ static void gc_timer(unsigned long private) gc_n64_process_packet(gc); /* - * NES and SNES pads + * NES and SNES pads or mouse */ - if (gc->pads[GC_NES] || gc->pads[GC_SNES]) + if (gc->pads[GC_NES] || gc->pads[GC_SNES] || gc->pads[GC_SNESMOUSE]) gc_nes_process_packet(gc); /* @@ -552,7 +600,7 @@ static int gc_open(struct input_dev *dev) struct gc *gc = dev->private; int err; - err = down_interruptible(&gc->sem); + err = mutex_lock_interruptible(&gc->mutex); if (err) return err; @@ -562,7 +610,7 @@ static int gc_open(struct input_dev *dev) mod_timer(&gc->timer, jiffies + GC_REFRESH_TIME); } - up(&gc->sem); + mutex_unlock(&gc->mutex); return 0; } @@ -570,13 +618,13 @@ static void gc_close(struct input_dev *dev) { struct gc *gc = dev->private; - down(&gc->sem); + mutex_lock(&gc->mutex); if (!--gc->used) { del_timer_sync(&gc->timer); parport_write_control(gc->pd->port, 0x00); parport_release(gc->pd); } - up(&gc->sem); + mutex_unlock(&gc->mutex); } static int __init gc_setup_pad(struct gc *gc, int idx, int pad_type) @@ -609,10 +657,13 @@ static int __init gc_setup_pad(struct gc *gc, int idx, int pad_type) input_dev->open = gc_open; input_dev->close = gc_close; - input_dev->evbit[0] = BIT(EV_KEY) | BIT(EV_ABS); + if (pad_type != GC_SNESMOUSE) { + 
input_dev->evbit[0] = BIT(EV_KEY) | BIT(EV_ABS); - for (i = 0; i < 2; i++) - input_set_abs_params(input_dev, ABS_X + i, -1, 1, 0, 0); + for (i = 0; i < 2; i++) + input_set_abs_params(input_dev, ABS_X + i, -1, 1, 0, 0); + } else + input_dev->evbit[0] = BIT(EV_KEY) | BIT(EV_REL); gc->pads[0] |= gc_status_bit[idx]; gc->pads[pad_type] |= gc_status_bit[idx]; @@ -630,6 +681,13 @@ static int __init gc_setup_pad(struct gc *gc, int idx, int pad_type) break; + case GC_SNESMOUSE: + set_bit(BTN_LEFT, input_dev->keybit); + set_bit(BTN_RIGHT, input_dev->keybit); + set_bit(REL_X, input_dev->relbit); + set_bit(REL_Y, input_dev->relbit); + break; + case GC_SNES: for (i = 4; i < 8; i++) set_bit(gc_snes_btn[i], input_dev->keybit); @@ -693,7 +751,7 @@ static struct gc __init *gc_probe(int parport, int *pads, int n_pads) goto err_unreg_pardev; } - init_MUTEX(&gc->sem); + mutex_init(&gc->mutex); gc->pd = pd; init_timer(&gc->timer); gc->timer.data = (long) gc; diff --git a/trunk/drivers/input/joystick/iforce/iforce-ff.c b/trunk/drivers/input/joystick/iforce/iforce-ff.c index 4678b6dab43b..2b8e8456c9fa 100644 --- a/trunk/drivers/input/joystick/iforce/iforce-ff.c +++ b/trunk/drivers/input/joystick/iforce/iforce-ff.c @@ -42,14 +42,14 @@ static int make_magnitude_modifier(struct iforce* iforce, unsigned char data[3]; if (!no_alloc) { - down(&iforce->mem_mutex); + mutex_lock(&iforce->mem_mutex); if (allocate_resource(&(iforce->device_memory), mod_chunk, 2, iforce->device_memory.start, iforce->device_memory.end, 2L, NULL, NULL)) { - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); return -ENOMEM; } - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); } data[0] = LO(mod_chunk->start); @@ -75,14 +75,14 @@ static int make_period_modifier(struct iforce* iforce, period = TIME_SCALE(period); if (!no_alloc) { - down(&iforce->mem_mutex); + mutex_lock(&iforce->mem_mutex); if (allocate_resource(&(iforce->device_memory), mod_chunk, 0x0c, iforce->device_memory.start, iforce->device_memory.end, 2L, NULL, NULL)) { - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); return -ENOMEM; } - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); } data[0] = LO(mod_chunk->start); @@ -115,14 +115,14 @@ static int make_envelope_modifier(struct iforce* iforce, fade_duration = TIME_SCALE(fade_duration); if (!no_alloc) { - down(&iforce->mem_mutex); + mutex_lock(&iforce->mem_mutex); if (allocate_resource(&(iforce->device_memory), mod_chunk, 0x0e, iforce->device_memory.start, iforce->device_memory.end, 2L, NULL, NULL)) { - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); return -ENOMEM; } - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); } data[0] = LO(mod_chunk->start); @@ -152,14 +152,14 @@ static int make_condition_modifier(struct iforce* iforce, unsigned char data[10]; if (!no_alloc) { - down(&iforce->mem_mutex); + mutex_lock(&iforce->mem_mutex); if (allocate_resource(&(iforce->device_memory), mod_chunk, 8, iforce->device_memory.start, iforce->device_memory.end, 2L, NULL, NULL)) { - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); return -ENOMEM; } - up(&iforce->mem_mutex); + mutex_unlock(&iforce->mem_mutex); } data[0] = LO(mod_chunk->start); diff --git a/trunk/drivers/input/joystick/iforce/iforce-main.c b/trunk/drivers/input/joystick/iforce/iforce-main.c index b6bc04998047..ab0a26b924ca 100644 --- a/trunk/drivers/input/joystick/iforce/iforce-main.c +++ b/trunk/drivers/input/joystick/iforce/iforce-main.c @@ -350,7 +350,7 @@ int iforce_init_device(struct iforce *iforce) 
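/*
 * A minimal sketch of the semaphore-to-mutex recipe applied to db9.c,
 * gamecon.c and the iforce driver above: struct semaphore becomes
 * struct mutex, init_MUTEX() becomes mutex_init(), down()/up() become
 * mutex_lock()/mutex_unlock(), and down_interruptible() becomes
 * mutex_lock_interruptible().  The type and function names below are
 * illustrative only; only the API mapping follows the patches.
 */
#include <linux/mutex.h>

struct example_port {
	struct mutex mutex;			/* was: struct semaphore sem */
	int used;
};

static void example_port_init(struct example_port *port)
{
	mutex_init(&port->mutex);		/* was: init_MUTEX(&port->sem) */
	port->used = 0;
}

static int example_open(struct example_port *port)
{
	int err;

	err = mutex_lock_interruptible(&port->mutex);	/* was: down_interruptible() */
	if (err)
		return err;

	if (!port->used++) {
		/* claim the hardware, start the polling timer, ... */
	}

	mutex_unlock(&port->mutex);		/* was: up(&port->sem) */
	return 0;
}

static void example_close(struct example_port *port)
{
	mutex_lock(&port->mutex);		/* was: down(&port->sem) */
	if (!--port->used) {
		/* stop the timer, release the hardware, ... */
	}
	mutex_unlock(&port->mutex);
}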
init_waitqueue_head(&iforce->wait); spin_lock_init(&iforce->xmit_lock); - init_MUTEX(&iforce->mem_mutex); + mutex_init(&iforce->mem_mutex); iforce->xmit.buf = iforce->xmit_data; iforce->dev = input_dev; diff --git a/trunk/drivers/input/joystick/iforce/iforce.h b/trunk/drivers/input/joystick/iforce/iforce.h index 146f406b8f8a..668f24535ba0 100644 --- a/trunk/drivers/input/joystick/iforce/iforce.h +++ b/trunk/drivers/input/joystick/iforce/iforce.h @@ -37,7 +37,7 @@ #include #include #include -#include +#include /* This module provides arbitrary resource management routines. * I use it to manage the device's memory. @@ -45,6 +45,7 @@ */ #include + #define IFORCE_MAX_LENGTH 16 /* iforce::bus */ @@ -146,7 +147,7 @@ struct iforce { wait_queue_head_t wait; struct resource device_memory; struct iforce_core_effect core_effects[FF_EFFECTS_MAX]; - struct semaphore mem_mutex; + struct mutex mem_mutex; }; /* Get hi and low bytes of a 16-bits int */ diff --git a/trunk/drivers/input/joystick/turbografx.c b/trunk/drivers/input/joystick/turbografx.c index b154938e88a4..5570fd5487c7 100644 --- a/trunk/drivers/input/joystick/turbografx.c +++ b/trunk/drivers/input/joystick/turbografx.c @@ -37,6 +37,7 @@ #include #include #include +#include MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("TurboGraFX parallel port interface driver"); @@ -86,7 +87,7 @@ static struct tgfx { char phys[TGFX_MAX_DEVICES][32]; int sticks; int used; - struct semaphore sem; + struct mutex sem; } *tgfx_base[TGFX_MAX_PORTS]; /* @@ -128,7 +129,7 @@ static int tgfx_open(struct input_dev *dev) struct tgfx *tgfx = dev->private; int err; - err = down_interruptible(&tgfx->sem); + err = mutex_lock_interruptible(&tgfx->sem); if (err) return err; @@ -138,7 +139,7 @@ static int tgfx_open(struct input_dev *dev) mod_timer(&tgfx->timer, jiffies + TGFX_REFRESH_TIME); } - up(&tgfx->sem); + mutex_unlock(&tgfx->sem); return 0; } @@ -146,13 +147,13 @@ static void tgfx_close(struct input_dev *dev) { struct tgfx *tgfx = dev->private; - down(&tgfx->sem); + mutex_lock(&tgfx->sem); if (!--tgfx->used) { del_timer_sync(&tgfx->timer); parport_write_control(tgfx->pd->port, 0x00); parport_release(tgfx->pd); } - up(&tgfx->sem); + mutex_unlock(&tgfx->sem); } @@ -191,7 +192,7 @@ static struct tgfx __init *tgfx_probe(int parport, int *n_buttons, int n_devs) goto err_unreg_pardev; } - init_MUTEX(&tgfx->sem); + mutex_init(&tgfx->sem); tgfx->pd = pd; init_timer(&tgfx->timer); tgfx->timer.data = (long) tgfx; diff --git a/trunk/drivers/input/keyboard/Kconfig b/trunk/drivers/input/keyboard/Kconfig index 3b0ac3b43c54..a9dda56f62c4 100644 --- a/trunk/drivers/input/keyboard/Kconfig +++ b/trunk/drivers/input/keyboard/Kconfig @@ -13,7 +13,7 @@ menuconfig INPUT_KEYBOARD if INPUT_KEYBOARD config KEYBOARD_ATKBD - tristate "AT keyboard" if !X86_PC + tristate "AT keyboard" if EMBEDDED || !X86_PC default y select SERIO select SERIO_LIBPS2 diff --git a/trunk/drivers/input/keyboard/atkbd.c b/trunk/drivers/input/keyboard/atkbd.c index ffacf6eca5f5..fad04b66d268 100644 --- a/trunk/drivers/input/keyboard/atkbd.c +++ b/trunk/drivers/input/keyboard/atkbd.c @@ -27,6 +27,7 @@ #include #include #include +#include #define DRIVER_DESC "AT and PS/2 keyboard driver" @@ -216,7 +217,7 @@ struct atkbd { unsigned long time; struct work_struct event_work; - struct semaphore event_sem; + struct mutex event_mutex; unsigned long event_mask; }; @@ -302,19 +303,19 @@ static irqreturn_t atkbd_interrupt(struct serio *serio, unsigned char data, if (atkbd->translated) { if (atkbd->emul || - !(code == 
ATKBD_RET_EMUL0 || code == ATKBD_RET_EMUL1 || - code == ATKBD_RET_HANGUEL || code == ATKBD_RET_HANJA || - (code == ATKBD_RET_ERR && !atkbd->err_xl) || - (code == ATKBD_RET_BAT && !atkbd->bat_xl))) { + (code != ATKBD_RET_EMUL0 && code != ATKBD_RET_EMUL1 && + code != ATKBD_RET_HANGUEL && code != ATKBD_RET_HANJA && + (code != ATKBD_RET_ERR || atkbd->err_xl) && + (code != ATKBD_RET_BAT || atkbd->bat_xl))) { atkbd->release = code >> 7; code &= 0x7f; } if (!atkbd->emul) { if ((code & 0x7f) == (ATKBD_RET_BAT & 0x7f)) - atkbd->bat_xl = !atkbd->release; + atkbd->bat_xl = !(data >> 7); if ((code & 0x7f) == (ATKBD_RET_ERR & 0x7f)) - atkbd->err_xl = !atkbd->release; + atkbd->err_xl = !(data >> 7); } } @@ -449,7 +450,7 @@ static void atkbd_event_work(void *data) unsigned char param[2]; int i, j; - down(&atkbd->event_sem); + mutex_lock(&atkbd->event_mutex); if (test_and_clear_bit(ATKBD_LED_EVENT_BIT, &atkbd->event_mask)) { param[0] = (test_bit(LED_SCROLLL, dev->led) ? 1 : 0) @@ -480,7 +481,7 @@ static void atkbd_event_work(void *data) ps2_command(&atkbd->ps2dev, param, ATKBD_CMD_SETREP); } - up(&atkbd->event_sem); + mutex_unlock(&atkbd->event_mutex); } /* @@ -846,7 +847,7 @@ static int atkbd_connect(struct serio *serio, struct serio_driver *drv) atkbd->dev = dev; ps2_init(&atkbd->ps2dev, serio); INIT_WORK(&atkbd->event_work, atkbd_event_work, atkbd); - init_MUTEX(&atkbd->event_sem); + mutex_init(&atkbd->event_mutex); switch (serio->id.type) { @@ -862,9 +863,6 @@ static int atkbd_connect(struct serio *serio, struct serio_driver *drv) atkbd->softrepeat = atkbd_softrepeat; atkbd->scroll = atkbd_scroll; - if (!atkbd->write) - atkbd->softrepeat = 1; - if (atkbd->softrepeat) atkbd->softraw = 1; diff --git a/trunk/drivers/input/keyboard/corgikbd.c b/trunk/drivers/input/keyboard/corgikbd.c index e301ee4ca264..96c6bf77248a 100644 --- a/trunk/drivers/input/keyboard/corgikbd.c +++ b/trunk/drivers/input/keyboard/corgikbd.c @@ -29,11 +29,11 @@ #define KB_COLS 12 #define KB_ROWMASK(r) (1 << (r)) #define SCANCODE(r,c) ( ((r)<<4) + (c) + 1 ) -/* zero code, 124 scancodes + 3 hinge combinations */ -#define NR_SCANCODES ( SCANCODE(KB_ROWS-1,KB_COLS-1) +1 +1 +3 ) -#define SCAN_INTERVAL (HZ/10) +/* zero code, 124 scancodes */ +#define NR_SCANCODES ( SCANCODE(KB_ROWS-1,KB_COLS-1) +1 +1 ) -#define HINGE_SCAN_INTERVAL (HZ/4) +#define SCAN_INTERVAL (50) /* ms */ +#define HINGE_SCAN_INTERVAL (250) /* ms */ #define CORGI_KEY_CALENDER KEY_F1 #define CORGI_KEY_ADDRESS KEY_F2 @@ -49,9 +49,6 @@ #define CORGI_KEY_MAIL KEY_F10 #define CORGI_KEY_OK KEY_F11 #define CORGI_KEY_MENU KEY_F12 -#define CORGI_HINGE_0 KEY_KP0 -#define CORGI_HINGE_1 KEY_KP1 -#define CORGI_HINGE_2 KEY_KP2 static unsigned char corgikbd_keycode[NR_SCANCODES] = { 0, /* 0 */ @@ -63,7 +60,6 @@ static unsigned char corgikbd_keycode[NR_SCANCODES] = { CORGI_KEY_MAIL, KEY_Z, KEY_X, KEY_MINUS, KEY_SPACE, KEY_COMMA, 0, KEY_UP, 0, 0, 0, CORGI_KEY_FN, 0, 0, 0, 0, /* 81-96 */ KEY_SYSRQ, CORGI_KEY_JAP1, CORGI_KEY_JAP2, CORGI_KEY_CANCEL, CORGI_KEY_OK, CORGI_KEY_MENU, KEY_LEFT, KEY_DOWN, KEY_RIGHT, 0, 0, 0, 0, 0, 0, 0, /* 97-112 */ CORGI_KEY_OFF, CORGI_KEY_EXOK, CORGI_KEY_EXCANCEL, CORGI_KEY_EXJOGDOWN, CORGI_KEY_EXJOGUP, 0, 0, 0, 0, 0, 0, 0, /* 113-124 */ - CORGI_HINGE_0, CORGI_HINGE_1, CORGI_HINGE_2 /* 125-127 */ }; @@ -187,7 +183,7 @@ static void corgikbd_scankeyboard(struct corgikbd *corgikbd_data, struct pt_regs /* if any keys are pressed, enable the timer */ if (num_pressed) - mod_timer(&corgikbd_data->timer, jiffies + SCAN_INTERVAL); + mod_timer(&corgikbd_data->timer, 
jiffies + msecs_to_jiffies(SCAN_INTERVAL)); spin_unlock_irqrestore(&corgikbd_data->lock, flags); } @@ -228,6 +224,7 @@ static void corgikbd_timer_callback(unsigned long data) * 0x0c - Keyboard and Screen Closed */ +#define READ_GPIO_BIT(x) (GPLR(x) & GPIO_bit(x)) #define HINGE_STABLE_COUNT 2 static int sharpsl_hinge_state; static int hinge_count; @@ -239,6 +236,7 @@ static void corgikbd_hinge_timer(unsigned long data) unsigned long flags; gprr = read_scoop_reg(&corgiscoop_device.dev, SCOOP_GPRR) & (CORGI_SCP_SWA | CORGI_SCP_SWB); + gprr |= (READ_GPIO_BIT(CORGI_GPIO_AK_INT) != 0); if (gprr != sharpsl_hinge_state) { hinge_count = 0; sharpsl_hinge_state = gprr; @@ -249,27 +247,38 @@ static void corgikbd_hinge_timer(unsigned long data) input_report_switch(corgikbd_data->input, SW_0, ((sharpsl_hinge_state & CORGI_SCP_SWA) != 0)); input_report_switch(corgikbd_data->input, SW_1, ((sharpsl_hinge_state & CORGI_SCP_SWB) != 0)); + input_report_switch(corgikbd_data->input, SW_2, (READ_GPIO_BIT(CORGI_GPIO_AK_INT) != 0)); input_sync(corgikbd_data->input); spin_unlock_irqrestore(&corgikbd_data->lock, flags); } } - mod_timer(&corgikbd_data->htimer, jiffies + HINGE_SCAN_INTERVAL); + mod_timer(&corgikbd_data->htimer, jiffies + msecs_to_jiffies(HINGE_SCAN_INTERVAL)); } #ifdef CONFIG_PM static int corgikbd_suspend(struct platform_device *dev, pm_message_t state) { + int i; struct corgikbd *corgikbd = platform_get_drvdata(dev); + corgikbd->suspended = 1; + /* strobe 0 is the power key so this can't be made an input for + powersaving therefore i = 1 */ + for (i = 1; i < CORGI_KEY_STROBE_NUM; i++) + pxa_gpio_mode(CORGI_GPIO_KEY_STROBE(i) | GPIO_IN); return 0; } static int corgikbd_resume(struct platform_device *dev) { + int i; struct corgikbd *corgikbd = platform_get_drvdata(dev); + for (i = 1; i < CORGI_KEY_STROBE_NUM; i++) + pxa_gpio_mode(CORGI_GPIO_KEY_STROBE(i) | GPIO_OUT | GPIO_DFLT_HIGH); + /* Upon resume, ignore the suspend key for a short while */ corgikbd->suspend_jiffies=jiffies; corgikbd->suspended = 0; @@ -333,10 +342,11 @@ static int __init corgikbd_probe(struct platform_device *pdev) clear_bit(0, input_dev->keybit); set_bit(SW_0, input_dev->swbit); set_bit(SW_1, input_dev->swbit); + set_bit(SW_2, input_dev->swbit); input_register_device(corgikbd->input); - mod_timer(&corgikbd->htimer, jiffies + HINGE_SCAN_INTERVAL); + mod_timer(&corgikbd->htimer, jiffies + msecs_to_jiffies(HINGE_SCAN_INTERVAL)); /* Setup sense interrupts - RisingEdge Detect, sense lines as inputs */ for (i = 0; i < CORGI_KEY_SENSE_NUM; i++) { @@ -351,6 +361,9 @@ static int __init corgikbd_probe(struct platform_device *pdev) for (i = 0; i < CORGI_KEY_STROBE_NUM; i++) pxa_gpio_mode(CORGI_GPIO_KEY_STROBE(i) | GPIO_OUT | GPIO_DFLT_HIGH); + /* Setup the headphone jack as an input */ + pxa_gpio_mode(CORGI_GPIO_AK_INT | GPIO_IN); + return 0; } diff --git a/trunk/drivers/input/keyboard/hil_kbd.c b/trunk/drivers/input/keyboard/hil_kbd.c index 63f387e4b783..1dca3cf42a54 100644 --- a/trunk/drivers/input/keyboard/hil_kbd.c +++ b/trunk/drivers/input/keyboard/hil_kbd.c @@ -250,16 +250,19 @@ static int hil_kbd_connect(struct serio *serio, struct serio_driver *drv) struct hil_kbd *kbd; uint8_t did, *idd; int i; - + kbd = kzalloc(sizeof(*kbd), GFP_KERNEL); if (!kbd) return -ENOMEM; kbd->dev = input_allocate_device(); - if (!kbd->dev) goto bail1; + if (!kbd->dev) + goto bail0; + kbd->dev->private = kbd; - if (serio_open(serio, drv)) goto bail0; + if (serio_open(serio, drv)) + goto bail1; serio_set_drvdata(serio, kbd); kbd->serio = serio; diff --git 
a/trunk/drivers/input/keyboard/spitzkbd.c b/trunk/drivers/input/keyboard/spitzkbd.c index 83999d583122..bc61cf8cfc65 100644 --- a/trunk/drivers/input/keyboard/spitzkbd.c +++ b/trunk/drivers/input/keyboard/spitzkbd.c @@ -30,6 +30,7 @@ #define SCANCODE(r,c) (((r)<<4) + (c) + 1) #define NR_SCANCODES ((KB_ROWS<<4) + 1) +#define SCAN_INTERVAL (50) /* ms */ #define HINGE_SCAN_INTERVAL (150) /* ms */ #define SPITZ_KEY_CALENDER KEY_F1 @@ -230,7 +231,7 @@ static void spitzkbd_scankeyboard(struct spitzkbd *spitzkbd_data, struct pt_regs /* if any keys are pressed, enable the timer */ if (num_pressed) - mod_timer(&spitzkbd_data->timer, jiffies + msecs_to_jiffies(100)); + mod_timer(&spitzkbd_data->timer, jiffies + msecs_to_jiffies(SCAN_INTERVAL)); spin_unlock_irqrestore(&spitzkbd_data->lock, flags); } @@ -287,6 +288,7 @@ static void spitzkbd_hinge_timer(unsigned long data) unsigned long flags; state = GPLR(SPITZ_GPIO_SWA) & (GPIO_bit(SPITZ_GPIO_SWA)|GPIO_bit(SPITZ_GPIO_SWB)); + state |= (GPLR(SPITZ_GPIO_AK_INT) & GPIO_bit(SPITZ_GPIO_AK_INT)); if (state != sharpsl_hinge_state) { hinge_count = 0; sharpsl_hinge_state = state; @@ -299,6 +301,7 @@ static void spitzkbd_hinge_timer(unsigned long data) input_report_switch(spitzkbd_data->input, SW_0, ((GPLR(SPITZ_GPIO_SWA) & GPIO_bit(SPITZ_GPIO_SWA)) != 0)); input_report_switch(spitzkbd_data->input, SW_1, ((GPLR(SPITZ_GPIO_SWB) & GPIO_bit(SPITZ_GPIO_SWB)) != 0)); + input_report_switch(spitzkbd_data->input, SW_2, ((GPLR(SPITZ_GPIO_AK_INT) & GPIO_bit(SPITZ_GPIO_AK_INT)) != 0)); input_sync(spitzkbd_data->input); spin_unlock_irqrestore(&spitzkbd_data->lock, flags); @@ -397,6 +400,7 @@ static int __init spitzkbd_probe(struct platform_device *dev) clear_bit(0, input_dev->keybit); set_bit(SW_0, input_dev->swbit); set_bit(SW_1, input_dev->swbit); + set_bit(SW_2, input_dev->swbit); input_register_device(input_dev); @@ -432,6 +436,9 @@ static int __init spitzkbd_probe(struct platform_device *dev) request_irq(SPITZ_IRQ_GPIO_SWB, spitzkbd_hinge_isr, SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING, "Spitzkbd SWB", spitzkbd); + request_irq(SPITZ_IRQ_GPIO_AK_INT, spitzkbd_hinge_isr, + SA_INTERRUPT | SA_TRIGGER_RISING | SA_TRIGGER_FALLING, + "Spitzkbd HP", spitzkbd); printk(KERN_INFO "input: Spitz Keyboard Registered\n"); @@ -450,6 +457,7 @@ static int spitzkbd_remove(struct platform_device *dev) free_irq(SPITZ_IRQ_GPIO_ON_KEY, spitzkbd); free_irq(SPITZ_IRQ_GPIO_SWA, spitzkbd); free_irq(SPITZ_IRQ_GPIO_SWB, spitzkbd); + free_irq(SPITZ_IRQ_GPIO_AK_INT, spitzkbd); del_timer_sync(&spitzkbd->htimer); del_timer_sync(&spitzkbd->timer); diff --git a/trunk/drivers/input/misc/pcspkr.c b/trunk/drivers/input/misc/pcspkr.c index 1ef477f4469c..afd322185bbf 100644 --- a/trunk/drivers/input/misc/pcspkr.c +++ b/trunk/drivers/input/misc/pcspkr.c @@ -24,7 +24,6 @@ MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("PC Speaker beeper driver"); MODULE_LICENSE("GPL"); -static struct platform_device *pcspkr_platform_device; static DEFINE_SPINLOCK(i8253_beep_lock); static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int code, int value) @@ -135,35 +134,11 @@ static struct platform_driver pcspkr_platform_driver = { static int __init pcspkr_init(void) { - int err; - - err = platform_driver_register(&pcspkr_platform_driver); - if (err) - return err; - - pcspkr_platform_device = platform_device_alloc("pcspkr", -1); - if (!pcspkr_platform_device) { - err = -ENOMEM; - goto err_unregister_driver; - } - - err = platform_device_add(pcspkr_platform_device); - if (err) - goto 
err_free_device; - - return 0; - - err_free_device: - platform_device_put(pcspkr_platform_device); - err_unregister_driver: - platform_driver_unregister(&pcspkr_platform_driver); - - return err; + return platform_driver_register(&pcspkr_platform_driver); } static void __exit pcspkr_exit(void) { - platform_device_unregister(pcspkr_platform_device); platform_driver_unregister(&pcspkr_platform_driver); } diff --git a/trunk/drivers/input/misc/uinput.c b/trunk/drivers/input/misc/uinput.c index 546ed9b4901d..d723e9ad7c41 100644 --- a/trunk/drivers/input/misc/uinput.c +++ b/trunk/drivers/input/misc/uinput.c @@ -194,7 +194,7 @@ static int uinput_open(struct inode *inode, struct file *file) if (!newdev) return -ENOMEM; - init_MUTEX(&newdev->sem); + mutex_init(&newdev->mutex); spin_lock_init(&newdev->requests_lock); init_waitqueue_head(&newdev->requests_waitq); init_waitqueue_head(&newdev->waitq); @@ -340,7 +340,7 @@ static ssize_t uinput_write(struct file *file, const char __user *buffer, size_t struct uinput_device *udev = file->private_data; int retval; - retval = down_interruptible(&udev->sem); + retval = mutex_lock_interruptible(&udev->mutex); if (retval) return retval; @@ -348,7 +348,7 @@ static ssize_t uinput_write(struct file *file, const char __user *buffer, size_t uinput_inject_event(udev, buffer, count) : uinput_setup_device(udev, buffer, count); - up(&udev->sem); + mutex_unlock(&udev->mutex); return retval; } @@ -369,7 +369,7 @@ static ssize_t uinput_read(struct file *file, char __user *buffer, size_t count, if (retval) return retval; - retval = down_interruptible(&udev->sem); + retval = mutex_lock_interruptible(&udev->mutex); if (retval) return retval; @@ -388,7 +388,7 @@ static ssize_t uinput_read(struct file *file, char __user *buffer, size_t count, } out: - up(&udev->sem); + mutex_unlock(&udev->mutex); return retval; } @@ -439,7 +439,7 @@ static long uinput_ioctl(struct file *file, unsigned int cmd, unsigned long arg) udev = file->private_data; - retval = down_interruptible(&udev->sem); + retval = mutex_lock_interruptible(&udev->mutex); if (retval) return retval; @@ -589,7 +589,7 @@ static long uinput_ioctl(struct file *file, unsigned int cmd, unsigned long arg) } out: - up(&udev->sem); + mutex_unlock(&udev->mutex); return retval; } diff --git a/trunk/drivers/input/mouse/hil_ptr.c b/trunk/drivers/input/mouse/hil_ptr.c index bfb564fd8fe2..69f02178c528 100644 --- a/trunk/drivers/input/mouse/hil_ptr.c +++ b/trunk/drivers/input/mouse/hil_ptr.c @@ -249,10 +249,13 @@ static int hil_ptr_connect(struct serio *serio, struct serio_driver *driver) return -ENOMEM; ptr->dev = input_allocate_device(); - if (!ptr->dev) goto bail0; + if (!ptr->dev) + goto bail0; + ptr->dev->private = ptr; - if (serio_open(serio, driver)) goto bail1; + if (serio_open(serio, driver)) + goto bail1; serio_set_drvdata(serio, ptr); ptr->serio = serio; diff --git a/trunk/drivers/input/mouse/psmouse-base.c b/trunk/drivers/input/mouse/psmouse-base.c index ad6217467676..32d70ed8f41d 100644 --- a/trunk/drivers/input/mouse/psmouse-base.c +++ b/trunk/drivers/input/mouse/psmouse-base.c @@ -20,6 +20,8 @@ #include #include #include +#include + #include "psmouse.h" #include "synaptics.h" #include "logips2pp.h" @@ -98,13 +100,13 @@ __obsolete_setup("psmouse_resetafter="); __obsolete_setup("psmouse_rate="); /* - * psmouse_sem protects all operations changing state of mouse + * psmouse_mutex protects all operations changing state of mouse * (connecting, disconnecting, changing rate or resolution via * sysfs). 
We could use a per-device semaphore but since there * rarely more than one PS/2 mouse connected and since semaphore * is taken in "slow" paths it is not worth it. */ -static DECLARE_MUTEX(psmouse_sem); +static DEFINE_MUTEX(psmouse_mutex); static struct workqueue_struct *kpsmoused_wq; @@ -868,7 +870,7 @@ static void psmouse_resync(void *p) int failed = 0, enabled = 0; int i; - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); if (psmouse->state != PSMOUSE_RESYNCING) goto out; @@ -948,7 +950,7 @@ static void psmouse_resync(void *p) if (parent) psmouse_activate(parent); out: - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); } /* @@ -974,14 +976,14 @@ static void psmouse_disconnect(struct serio *serio) sysfs_remove_group(&serio->dev.kobj, &psmouse_attribute_group); - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); psmouse_set_state(psmouse, PSMOUSE_CMD_MODE); /* make sure we don't have a resync in progress */ - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); flush_workqueue(kpsmoused_wq); - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); if (serio->parent && serio->id.type == SERIO_PS_PSTHRU) { parent = serio_get_drvdata(serio->parent); @@ -1004,7 +1006,7 @@ static void psmouse_disconnect(struct serio *serio) if (parent) psmouse_activate(parent); - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); } static int psmouse_switch_protocol(struct psmouse *psmouse, struct psmouse_protocol *proto) @@ -1076,7 +1078,7 @@ static int psmouse_connect(struct serio *serio, struct serio_driver *drv) struct input_dev *input_dev; int retval = -ENOMEM; - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); /* * If this is a pass-through port deactivate parent so the device @@ -1144,7 +1146,7 @@ static int psmouse_connect(struct serio *serio, struct serio_driver *drv) if (parent) psmouse_activate(parent); - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); return retval; } @@ -1161,7 +1163,7 @@ static int psmouse_reconnect(struct serio *serio) return -1; } - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); if (serio->parent && serio->id.type == SERIO_PS_PSTHRU) { parent = serio_get_drvdata(serio->parent); @@ -1195,7 +1197,7 @@ static int psmouse_reconnect(struct serio *serio) if (parent) psmouse_activate(parent); - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); return rc; } @@ -1273,7 +1275,7 @@ ssize_t psmouse_attr_set_helper(struct device *dev, struct device_attribute *dev goto out_unpin; } - retval = down_interruptible(&psmouse_sem); + retval = mutex_lock_interruptible(&psmouse_mutex); if (retval) goto out_unpin; @@ -1281,7 +1283,7 @@ ssize_t psmouse_attr_set_helper(struct device *dev, struct device_attribute *dev if (psmouse->state == PSMOUSE_IGNORE) { retval = -ENODEV; - goto out_up; + goto out_unlock; } if (serio->parent && serio->id.type == SERIO_PS_PSTHRU) { @@ -1299,8 +1301,8 @@ ssize_t psmouse_attr_set_helper(struct device *dev, struct device_attribute *dev if (parent) psmouse_activate(parent); - out_up: - up(&psmouse_sem); + out_unlock: + mutex_unlock(&psmouse_mutex); out_unpin: serio_unpin_driver(serio); return retval; @@ -1357,11 +1359,11 @@ static ssize_t psmouse_attr_set_protocol(struct psmouse *psmouse, void *data, co return -EIO; } - up(&psmouse_sem); + mutex_unlock(&psmouse_mutex); serio_unpin_driver(serio); serio_unregister_child_port(serio); serio_pin_driver_uninterruptible(serio); - down(&psmouse_sem); + mutex_lock(&psmouse_mutex); if (serio->drv != &psmouse_drv) { input_free_device(new_dev); diff --git a/trunk/drivers/input/mouse/synaptics.c b/trunk/drivers/input/mouse/synaptics.c 
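/*
 * A trimmed-down sketch of the locking shape psmouse_attr_set_helper()
 * has after the conversion above: take the subsystem mutex
 * interruptibly, bail out if the device went away, quiesce a
 * pass-through parent while the handler runs, then reactivate and
 * unlock.  struct example_dev and the example_* helpers are
 * placeholders, not psmouse APIs.
 */
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/mutex.h>

struct example_dev {
	struct example_dev *parent;
	int gone;
};

static DEFINE_MUTEX(example_mutex);

static void example_deactivate(struct example_dev *dev) { }
static void example_activate(struct example_dev *dev) { }

static ssize_t example_attr_set(struct example_dev *dev,
				const char *buf, size_t count)
{
	ssize_t retval;

	retval = mutex_lock_interruptible(&example_mutex);
	if (retval)
		return retval;		/* -EINTR if a signal arrived */

	if (dev->gone) {
		retval = -ENODEV;
		goto out_unlock;
	}

	if (dev->parent)
		example_deactivate(dev->parent);

	retval = count;			/* the real helper invokes the attribute's set() here */

	if (dev->parent)
		example_activate(dev->parent);

 out_unlock:
	mutex_unlock(&example_mutex);
	return retval;
}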
index 2051bec2c394..ad5d0a85e960 100644 --- a/trunk/drivers/input/mouse/synaptics.c +++ b/trunk/drivers/input/mouse/synaptics.c @@ -247,14 +247,12 @@ static void synaptics_pt_create(struct psmouse *psmouse) { struct serio *serio; - serio = kmalloc(sizeof(struct serio), GFP_KERNEL); + serio = kzalloc(sizeof(struct serio), GFP_KERNEL); if (!serio) { printk(KERN_ERR "synaptics: not enough memory to allocate pass-through port\n"); return; } - memset(serio, 0, sizeof(struct serio)); - serio->id.type = SERIO_PS_PSTHRU; strlcpy(serio->name, "Synaptics pass-through", sizeof(serio->name)); strlcpy(serio->phys, "synaptics-pt/serio0", sizeof(serio->name)); @@ -605,14 +603,21 @@ static struct dmi_system_id toshiba_dmi_table[] = { .ident = "Toshiba Satellite", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), - DMI_MATCH(DMI_PRODUCT_NAME , "Satellite"), + DMI_MATCH(DMI_PRODUCT_NAME, "Satellite"), }, }, { .ident = "Toshiba Dynabook", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), - DMI_MATCH(DMI_PRODUCT_NAME , "dynabook"), + DMI_MATCH(DMI_PRODUCT_NAME, "dynabook"), + }, + }, + { + .ident = "Toshiba Portege M300", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), + DMI_MATCH(DMI_PRODUCT_NAME, "PORTEGE M300"), }, }, { } @@ -623,10 +628,9 @@ int synaptics_init(struct psmouse *psmouse) { struct synaptics_data *priv; - psmouse->private = priv = kmalloc(sizeof(struct synaptics_data), GFP_KERNEL); + psmouse->private = priv = kzalloc(sizeof(struct synaptics_data), GFP_KERNEL); if (!priv) return -1; - memset(priv, 0, sizeof(struct synaptics_data)); if (synaptics_query_hardware(psmouse)) { printk(KERN_ERR "Unable to query Synaptics hardware.\n"); diff --git a/trunk/drivers/input/mousedev.c b/trunk/drivers/input/mousedev.c index 9abed18d2ecf..b685a507955d 100644 --- a/trunk/drivers/input/mousedev.c +++ b/trunk/drivers/input/mousedev.c @@ -412,9 +412,8 @@ static int mousedev_open(struct inode * inode, struct file * file) if (i >= MOUSEDEV_MINORS || !mousedev_table[i]) return -ENODEV; - if (!(list = kmalloc(sizeof(struct mousedev_list), GFP_KERNEL))) + if (!(list = kzalloc(sizeof(struct mousedev_list), GFP_KERNEL))) return -ENOMEM; - memset(list, 0, sizeof(struct mousedev_list)); spin_lock_init(&list->packet_lock); list->pos_x = xres / 2; @@ -626,9 +625,8 @@ static struct input_handle *mousedev_connect(struct input_handler *handler, stru return NULL; } - if (!(mousedev = kmalloc(sizeof(struct mousedev), GFP_KERNEL))) + if (!(mousedev = kzalloc(sizeof(struct mousedev), GFP_KERNEL))) return NULL; - memset(mousedev, 0, sizeof(struct mousedev)); INIT_LIST_HEAD(&mousedev->list); init_waitqueue_head(&mousedev->wait); diff --git a/trunk/drivers/input/power.c b/trunk/drivers/input/power.c index bfc5c63ebffe..526e6070600c 100644 --- a/trunk/drivers/input/power.c +++ b/trunk/drivers/input/power.c @@ -103,9 +103,8 @@ static struct input_handle *power_connect(struct input_handler *handler, { struct input_handle *handle; - if (!(handle = kmalloc(sizeof(struct input_handle), GFP_KERNEL))) + if (!(handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL))) return NULL; - memset(handle, 0, sizeof(struct input_handle)); handle->dev = dev; handle->handler = handler; diff --git a/trunk/drivers/input/serio/hil_mlc.c b/trunk/drivers/input/serio/hil_mlc.c index ea499783fb12..bbbe15e21904 100644 --- a/trunk/drivers/input/serio/hil_mlc.c +++ b/trunk/drivers/input/serio/hil_mlc.c @@ -872,9 +872,8 @@ int hil_mlc_register(hil_mlc *mlc) { for (i = 0; i < HIL_MLC_DEVMEM; i++) { struct serio *mlc_serio; hil_mlc_copy_di_scratch(mlc, i); 
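/*
 * A minimal sketch of the kzalloc() conversion used throughout this
 * series (evbug, evdev, joydev, mousedev, synaptics, hil_mlc, parkbd,
 * rpckbd): the kmalloc()+memset() pair collapses into a single zeroing
 * allocation.  struct example is a placeholder type.
 */
#include <linux/slab.h>

struct example {
	int value;
};

static struct example *example_alloc(void)
{
	struct example *ex;

	/* was: ex = kmalloc(sizeof(struct example), GFP_KERNEL);
	 *      memset(ex, 0, sizeof(struct example));
	 */
	ex = kzalloc(sizeof(struct example), GFP_KERNEL);
	if (!ex)
		return NULL;

	return ex;
}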
- mlc_serio = kmalloc(sizeof(*mlc_serio), GFP_KERNEL); + mlc_serio = kzalloc(sizeof(*mlc_serio), GFP_KERNEL); mlc->serio[i] = mlc_serio; - memset(mlc_serio, 0, sizeof(*mlc_serio)); mlc_serio->id = hil_mlc_serio_id; mlc_serio->write = hil_mlc_serio_write; mlc_serio->open = hil_mlc_serio_open; diff --git a/trunk/drivers/input/serio/i8042-x86ia64io.h b/trunk/drivers/input/serio/i8042-x86ia64io.h index a4c6f3522723..f606e96bc2f4 100644 --- a/trunk/drivers/input/serio/i8042-x86ia64io.h +++ b/trunk/drivers/input/serio/i8042-x86ia64io.h @@ -192,7 +192,9 @@ static struct dmi_system_id __initdata i8042_dmi_nomux_table[] = { #include static int i8042_pnp_kbd_registered; +static unsigned int i8042_pnp_kbd_devices; static int i8042_pnp_aux_registered; +static unsigned int i8042_pnp_aux_devices; static int i8042_pnp_command_reg; static int i8042_pnp_data_reg; @@ -219,6 +221,7 @@ static int i8042_pnp_kbd_probe(struct pnp_dev *dev, const struct pnp_device_id * strncat(i8042_pnp_kbd_name, pnp_dev_name(dev), sizeof(i8042_pnp_kbd_name)); } + i8042_pnp_kbd_devices++; return 0; } @@ -239,6 +242,7 @@ static int i8042_pnp_aux_probe(struct pnp_dev *dev, const struct pnp_device_id * strncat(i8042_pnp_aux_name, pnp_dev_name(dev), sizeof(i8042_pnp_aux_name)); } + i8042_pnp_aux_devices++; return 0; } @@ -287,21 +291,23 @@ static void i8042_pnp_exit(void) static int __init i8042_pnp_init(void) { - int result_kbd = 0, result_aux = 0; char kbd_irq_str[4] = { 0 }, aux_irq_str[4] = { 0 }; + int err; if (i8042_nopnp) { printk(KERN_INFO "i8042: PNP detection disabled\n"); return 0; } - if ((result_kbd = pnp_register_driver(&i8042_pnp_kbd_driver)) >= 0) + err = pnp_register_driver(&i8042_pnp_kbd_driver); + if (!err) i8042_pnp_kbd_registered = 1; - if ((result_aux = pnp_register_driver(&i8042_pnp_aux_driver)) >= 0) + err = pnp_register_driver(&i8042_pnp_aux_driver); + if (!err) i8042_pnp_aux_registered = 1; - if (result_kbd <= 0 && result_aux <= 0) { + if (!i8042_pnp_kbd_devices && !i8042_pnp_aux_devices) { i8042_pnp_exit(); #if defined(__ia64__) return -ENODEV; @@ -311,24 +317,24 @@ static int __init i8042_pnp_init(void) #endif } - if (result_kbd > 0) + if (i8042_pnp_kbd_devices) snprintf(kbd_irq_str, sizeof(kbd_irq_str), "%d", i8042_pnp_kbd_irq); - if (result_aux > 0) + if (i8042_pnp_aux_devices) snprintf(aux_irq_str, sizeof(aux_irq_str), "%d", i8042_pnp_aux_irq); printk(KERN_INFO "PNP: PS/2 Controller [%s%s%s] at %#x,%#x irq %s%s%s\n", - i8042_pnp_kbd_name, (result_kbd > 0 && result_aux > 0) ? "," : "", + i8042_pnp_kbd_name, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? "," : "", i8042_pnp_aux_name, i8042_pnp_data_reg, i8042_pnp_command_reg, - kbd_irq_str, (result_kbd > 0 && result_aux > 0) ? "," : "", + kbd_irq_str, (i8042_pnp_kbd_devices && i8042_pnp_aux_devices) ? 
"," : "", aux_irq_str); #if defined(__ia64__) - if (result_kbd <= 0) + if (!i8042_pnp_kbd_devices) i8042_nokbd = 1; - if (result_aux <= 0) + if (!i8042_pnp_aux_devices) i8042_noaux = 1; #endif diff --git a/trunk/drivers/input/serio/libps2.c b/trunk/drivers/input/serio/libps2.c index d4c990f7c85e..79c97f94bcbd 100644 --- a/trunk/drivers/input/serio/libps2.c +++ b/trunk/drivers/input/serio/libps2.c @@ -84,7 +84,7 @@ void ps2_drain(struct ps2dev *ps2dev, int maxbytes, int timeout) maxbytes = sizeof(ps2dev->cmdbuf); } - down(&ps2dev->cmd_sem); + mutex_lock(&ps2dev->cmd_mutex); serio_pause_rx(ps2dev->serio); ps2dev->flags = PS2_FLAG_CMD; @@ -94,7 +94,7 @@ void ps2_drain(struct ps2dev *ps2dev, int maxbytes, int timeout) wait_event_timeout(ps2dev->wait, !(ps2dev->flags & PS2_FLAG_CMD), msecs_to_jiffies(timeout)); - up(&ps2dev->cmd_sem); + mutex_unlock(&ps2dev->cmd_mutex); } /* @@ -177,7 +177,7 @@ int ps2_command(struct ps2dev *ps2dev, unsigned char *param, int command) return -1; } - down(&ps2dev->cmd_sem); + mutex_lock(&ps2dev->cmd_mutex); serio_pause_rx(ps2dev->serio); ps2dev->flags = command == PS2_CMD_GETID ? PS2_FLAG_WAITID : 0; @@ -229,7 +229,7 @@ int ps2_command(struct ps2dev *ps2dev, unsigned char *param, int command) ps2dev->flags = 0; serio_continue_rx(ps2dev->serio); - up(&ps2dev->cmd_sem); + mutex_unlock(&ps2dev->cmd_mutex); return rc; } @@ -281,7 +281,7 @@ int ps2_schedule_command(struct ps2dev *ps2dev, unsigned char *param, int comman void ps2_init(struct ps2dev *ps2dev, struct serio *serio) { - init_MUTEX(&ps2dev->cmd_sem); + mutex_init(&ps2dev->cmd_mutex); init_waitqueue_head(&ps2dev->wait); ps2dev->serio = serio; } diff --git a/trunk/drivers/input/serio/parkbd.c b/trunk/drivers/input/serio/parkbd.c index 1d15c2819818..a5c1fb3a4a51 100644 --- a/trunk/drivers/input/serio/parkbd.c +++ b/trunk/drivers/input/serio/parkbd.c @@ -171,9 +171,8 @@ static struct serio * __init parkbd_allocate_serio(void) { struct serio *serio; - serio = kmalloc(sizeof(struct serio), GFP_KERNEL); + serio = kzalloc(sizeof(struct serio), GFP_KERNEL); if (serio) { - memset(serio, 0, sizeof(struct serio)); serio->id.type = parkbd_mode; serio->write = parkbd_write, strlcpy(serio->name, "PARKBD AT/XT keyboard adapter", sizeof(serio->name)); diff --git a/trunk/drivers/input/serio/rpckbd.c b/trunk/drivers/input/serio/rpckbd.c index a3bd11589bc3..513d37fc1acf 100644 --- a/trunk/drivers/input/serio/rpckbd.c +++ b/trunk/drivers/input/serio/rpckbd.c @@ -111,11 +111,10 @@ static int __devinit rpckbd_probe(struct platform_device *dev) { struct serio *serio; - serio = kmalloc(sizeof(struct serio), GFP_KERNEL); + serio = kzalloc(sizeof(struct serio), GFP_KERNEL); if (!serio) return -ENOMEM; - memset(serio, 0, sizeof(struct serio)); serio->id.type = SERIO_8042; serio->write = rpckbd_write; serio->open = rpckbd_open; diff --git a/trunk/drivers/input/serio/serio.c b/trunk/drivers/input/serio/serio.c index 2f76813c3a64..6521034bc933 100644 --- a/trunk/drivers/input/serio/serio.c +++ b/trunk/drivers/input/serio/serio.c @@ -34,6 +34,7 @@ #include #include #include +#include MODULE_AUTHOR("Vojtech Pavlik "); MODULE_DESCRIPTION("Serio abstraction core"); @@ -52,10 +53,10 @@ EXPORT_SYMBOL(serio_rescan); EXPORT_SYMBOL(serio_reconnect); /* - * serio_sem protects entire serio subsystem and is taken every time + * serio_mutex protects entire serio subsystem and is taken every time * serio port or driver registrered or unregistered. 
*/ -static DECLARE_MUTEX(serio_sem); +static DEFINE_MUTEX(serio_mutex); static LIST_HEAD(serio_list); @@ -70,9 +71,9 @@ static int serio_connect_driver(struct serio *serio, struct serio_driver *drv) { int retval; - down(&serio->drv_sem); + mutex_lock(&serio->drv_mutex); retval = drv->connect(serio, drv); - up(&serio->drv_sem); + mutex_unlock(&serio->drv_mutex); return retval; } @@ -81,20 +82,20 @@ static int serio_reconnect_driver(struct serio *serio) { int retval = -1; - down(&serio->drv_sem); + mutex_lock(&serio->drv_mutex); if (serio->drv && serio->drv->reconnect) retval = serio->drv->reconnect(serio); - up(&serio->drv_sem); + mutex_unlock(&serio->drv_mutex); return retval; } static void serio_disconnect_driver(struct serio *serio) { - down(&serio->drv_sem); + mutex_lock(&serio->drv_mutex); if (serio->drv) serio->drv->disconnect(serio); - up(&serio->drv_sem); + mutex_unlock(&serio->drv_mutex); } static int serio_match_port(const struct serio_device_id *ids, struct serio *serio) @@ -195,6 +196,7 @@ static void serio_queue_event(void *object, struct module *owner, if ((event = kmalloc(sizeof(struct serio_event), GFP_ATOMIC))) { if (!try_module_get(owner)) { printk(KERN_WARNING "serio: Can't get module reference, dropping event %d\n", event_type); + kfree(event); goto out; } @@ -272,7 +274,7 @@ static void serio_handle_event(void) struct serio_event *event; struct serio_driver *serio_drv; - down(&serio_sem); + mutex_lock(&serio_mutex); /* * Note that we handle only one event here to give swsusp @@ -314,7 +316,7 @@ static void serio_handle_event(void) serio_free_event(event); } - up(&serio_sem); + mutex_unlock(&serio_mutex); } /* @@ -449,7 +451,7 @@ static ssize_t serio_rebind_driver(struct device *dev, struct device_attribute * struct device_driver *drv; int retval; - retval = down_interruptible(&serio_sem); + retval = mutex_lock_interruptible(&serio_mutex); if (retval) return retval; @@ -469,7 +471,7 @@ static ssize_t serio_rebind_driver(struct device *dev, struct device_attribute * retval = -EINVAL; } - up(&serio_sem); + mutex_unlock(&serio_mutex); return retval; } @@ -524,7 +526,7 @@ static void serio_init_port(struct serio *serio) __module_get(THIS_MODULE); spin_lock_init(&serio->lock); - init_MUTEX(&serio->drv_sem); + mutex_init(&serio->drv_mutex); device_initialize(&serio->dev); snprintf(serio->dev.bus_id, sizeof(serio->dev.bus_id), "serio%ld", (long)atomic_inc_return(&serio_no) - 1); @@ -661,10 +663,10 @@ void __serio_register_port(struct serio *serio, struct module *owner) */ void serio_unregister_port(struct serio *serio) { - down(&serio_sem); + mutex_lock(&serio_mutex); serio_disconnect_port(serio); serio_destroy_port(serio); - up(&serio_sem); + mutex_unlock(&serio_mutex); } /* @@ -672,17 +674,17 @@ void serio_unregister_port(struct serio *serio) */ void serio_unregister_child_port(struct serio *serio) { - down(&serio_sem); + mutex_lock(&serio_mutex); if (serio->child) { serio_disconnect_port(serio->child); serio_destroy_port(serio->child); } - up(&serio_sem); + mutex_unlock(&serio_mutex); } /* * Submits register request to kseriod for subsequent execution. - * Can be used when it is not obvious whether the serio_sem is + * Can be used when it is not obvious whether the serio_mutex is * taken or not and when delayed execution is feasible. 
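 *
 * (Illustrative aside, not part of the patch: the semaphore-to-mutex
 * conversions in this file, libps2.c and serio_raw.c follow the usual
 * mapping of the old API onto the new one:
 *
 *	static DECLARE_MUTEX(m);      ->  static DEFINE_MUTEX(m);
 *	init_MUTEX(&m);               ->  mutex_init(&m);
 *	down(&m);                     ->  mutex_lock(&m);
 *	down_interruptible(&m);       ->  mutex_lock_interruptible(&m);
 *	up(&m);                       ->  mutex_unlock(&m);
 *
 * "m" is a placeholder name.)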
*/ void __serio_unregister_port_delayed(struct serio *serio, struct module *owner) @@ -765,7 +767,7 @@ void serio_unregister_driver(struct serio_driver *drv) { struct serio *serio; - down(&serio_sem); + mutex_lock(&serio_mutex); drv->manual_bind = 1; /* so serio_find_driver ignores it */ start_over: @@ -779,7 +781,7 @@ void serio_unregister_driver(struct serio_driver *drv) } driver_unregister(&drv->driver); - up(&serio_sem); + mutex_unlock(&serio_mutex); } static void serio_set_drv(struct serio *serio, struct serio_driver *drv) @@ -858,7 +860,7 @@ static int serio_resume(struct device *dev) return 0; } -/* called from serio_driver->connect/disconnect methods under serio_sem */ +/* called from serio_driver->connect/disconnect methods under serio_mutex */ int serio_open(struct serio *serio, struct serio_driver *drv) { serio_set_drv(serio, drv); @@ -870,7 +872,7 @@ int serio_open(struct serio *serio, struct serio_driver *drv) return 0; } -/* called from serio_driver->connect/disconnect methods under serio_sem */ +/* called from serio_driver->connect/disconnect methods under serio_mutex */ void serio_close(struct serio *serio) { if (serio->close) @@ -923,5 +925,5 @@ static void __exit serio_exit(void) kthread_stop(serio_task); } -module_init(serio_init); +subsys_initcall(serio_init); module_exit(serio_exit); diff --git a/trunk/drivers/input/serio/serio_raw.c b/trunk/drivers/input/serio/serio_raw.c index 47e08de18d07..5a2703b536dc 100644 --- a/trunk/drivers/input/serio/serio_raw.c +++ b/trunk/drivers/input/serio/serio_raw.c @@ -19,6 +19,7 @@ #include #include #include +#include #define DRIVER_DESC "Raw serio driver" @@ -46,7 +47,7 @@ struct serio_raw_list { struct list_head node; }; -static DECLARE_MUTEX(serio_raw_sem); +static DEFINE_MUTEX(serio_raw_mutex); static LIST_HEAD(serio_raw_list); static unsigned int serio_raw_no; @@ -81,7 +82,7 @@ static int serio_raw_open(struct inode *inode, struct file *file) struct serio_raw_list *list; int retval = 0; - retval = down_interruptible(&serio_raw_sem); + retval = mutex_lock_interruptible(&serio_raw_mutex); if (retval) return retval; @@ -95,12 +96,11 @@ static int serio_raw_open(struct inode *inode, struct file *file) goto out; } - if (!(list = kmalloc(sizeof(struct serio_raw_list), GFP_KERNEL))) { + if (!(list = kzalloc(sizeof(struct serio_raw_list), GFP_KERNEL))) { retval = -ENOMEM; goto out; } - memset(list, 0, sizeof(struct serio_raw_list)); list->serio_raw = serio_raw; file->private_data = list; @@ -108,7 +108,7 @@ static int serio_raw_open(struct inode *inode, struct file *file) list_add_tail(&list->node, &serio_raw->list); out: - up(&serio_raw_sem); + mutex_unlock(&serio_raw_mutex); return retval; } @@ -130,12 +130,12 @@ static int serio_raw_release(struct inode *inode, struct file *file) struct serio_raw_list *list = file->private_data; struct serio_raw *serio_raw = list->serio_raw; - down(&serio_raw_sem); + mutex_lock(&serio_raw_mutex); serio_raw_fasync(-1, file, 0); serio_raw_cleanup(serio_raw); - up(&serio_raw_sem); + mutex_unlock(&serio_raw_mutex); return 0; } @@ -194,7 +194,7 @@ static ssize_t serio_raw_write(struct file *file, const char __user *buffer, siz int retval; unsigned char c; - retval = down_interruptible(&serio_raw_sem); + retval = mutex_lock_interruptible(&serio_raw_mutex); if (retval) return retval; @@ -219,7 +219,7 @@ static ssize_t serio_raw_write(struct file *file, const char __user *buffer, siz }; out: - up(&serio_raw_sem); + mutex_unlock(&serio_raw_mutex); return written; } @@ -275,14 +275,13 @@ static int 
serio_raw_connect(struct serio *serio, struct serio_driver *drv) struct serio_raw *serio_raw; int err; - if (!(serio_raw = kmalloc(sizeof(struct serio_raw), GFP_KERNEL))) { + if (!(serio_raw = kzalloc(sizeof(struct serio_raw), GFP_KERNEL))) { printk(KERN_ERR "serio_raw.c: can't allocate memory for a device\n"); return -ENOMEM; } - down(&serio_raw_sem); + mutex_lock(&serio_raw_mutex); - memset(serio_raw, 0, sizeof(struct serio_raw)); snprintf(serio_raw->name, sizeof(serio_raw->name), "serio_raw%d", serio_raw_no++); serio_raw->refcnt = 1; serio_raw->serio = serio; @@ -325,7 +324,7 @@ static int serio_raw_connect(struct serio *serio, struct serio_driver *drv) serio_set_drvdata(serio, NULL); kfree(serio_raw); out: - up(&serio_raw_sem); + mutex_unlock(&serio_raw_mutex); return err; } @@ -350,7 +349,7 @@ static void serio_raw_disconnect(struct serio *serio) { struct serio_raw *serio_raw; - down(&serio_raw_sem); + mutex_lock(&serio_raw_mutex); serio_raw = serio_get_drvdata(serio); @@ -361,7 +360,7 @@ static void serio_raw_disconnect(struct serio *serio) if (!serio_raw_cleanup(serio_raw)) wake_up_interruptible(&serio_raw->wait); - up(&serio_raw_sem); + mutex_unlock(&serio_raw_mutex); } static struct serio_device_id serio_raw_serio_ids[] = { diff --git a/trunk/drivers/input/tsdev.c b/trunk/drivers/input/tsdev.c index ca1547929d62..d678d144bbf8 100644 --- a/trunk/drivers/input/tsdev.c +++ b/trunk/drivers/input/tsdev.c @@ -157,9 +157,8 @@ static int tsdev_open(struct inode *inode, struct file *file) if (i >= TSDEV_MINORS || !tsdev_table[i & TSDEV_MINOR_MASK]) return -ENODEV; - if (!(list = kmalloc(sizeof(struct tsdev_list), GFP_KERNEL))) + if (!(list = kzalloc(sizeof(struct tsdev_list), GFP_KERNEL))) return -ENOMEM; - memset(list, 0, sizeof(struct tsdev_list)); list->raw = (i >= TSDEV_MINORS/2) ? 1 : 0; @@ -379,9 +378,8 @@ static struct input_handle *tsdev_connect(struct input_handler *handler, return NULL; } - if (!(tsdev = kmalloc(sizeof(struct tsdev), GFP_KERNEL))) + if (!(tsdev = kzalloc(sizeof(struct tsdev), GFP_KERNEL))) return NULL; - memset(tsdev, 0, sizeof(struct tsdev)); INIT_LIST_HEAD(&tsdev->list); init_waitqueue_head(&tsdev->wait); diff --git a/trunk/drivers/isdn/sc/ioctl.c b/trunk/drivers/isdn/sc/ioctl.c index 94c9afb7017c..f4f71226a078 100644 --- a/trunk/drivers/isdn/sc/ioctl.c +++ b/trunk/drivers/isdn/sc/ioctl.c @@ -46,7 +46,8 @@ int sc_ioctl(int card, scs_ioctl *data) pr_debug("%s: SCIOCRESET: ioctl received\n", sc_adapter[card]->devicename); sc_adapter[card]->StartOnReset = 0; - return (reset(card)); + kfree(rcvmsg); + return reset(card); } case SCIOCLOAD: @@ -183,7 +184,7 @@ int sc_ioctl(int card, scs_ioctl *data) sc_adapter[card]->devicename); spid = kmalloc(SCIOC_SPIDSIZE, GFP_KERNEL); - if(!spid) { + if (!spid) { kfree(rcvmsg); return -ENOMEM; } @@ -195,10 +196,10 @@ int sc_ioctl(int card, scs_ioctl *data) if (!status) { pr_debug("%s: SCIOCGETSPID: command successful\n", sc_adapter[card]->devicename); - } - else { + } else { pr_debug("%s: SCIOCGETSPID: command failed (status = %d)\n", sc_adapter[card]->devicename, status); + kfree(spid); kfree(rcvmsg); return status; } diff --git a/trunk/drivers/leds/Kconfig b/trunk/drivers/leds/Kconfig new file mode 100644 index 000000000000..2c4f20b7f021 --- /dev/null +++ b/trunk/drivers/leds/Kconfig @@ -0,0 +1,77 @@ + +menu "LED devices" + +config NEW_LEDS + bool "LED Support" + help + Say Y to enable Linux LED support. This is not related to standard + keyboard LEDs which are controlled via the input system. 
+ +config LEDS_CLASS + tristate "LED Class Support" + depends NEW_LEDS + help + This option enables the led sysfs class in /sys/class/leds. You'll + need this to do anything useful with LEDs. If unsure, say N. + +config LEDS_TRIGGERS + bool "LED Trigger support" + depends NEW_LEDS + help + This option enables trigger support for the leds class. + These triggers allow kernel events to drive the LEDs and can + be configured via sysfs. If unsure, say Y. + +config LEDS_CORGI + tristate "LED Support for the Sharp SL-C7x0 series" + depends LEDS_CLASS && PXA_SHARP_C7xx + help + This option enables support for the LEDs on Sharp Zaurus + SL-C7x0 series (C700, C750, C760, C860). + +config LEDS_LOCOMO + tristate "LED Support for Locomo device" + depends LEDS_CLASS && SHARP_LOCOMO + help + This option enables support for the LEDs on Sharp Locomo. + Zaurus models SL-5500 and SL-5600. + +config LEDS_SPITZ + tristate "LED Support for the Sharp SL-Cxx00 series" + depends LEDS_CLASS && PXA_SHARP_Cxx00 + help + This option enables support for the LEDs on Sharp Zaurus + SL-Cxx00 series (C1000, C3000, C3100). + +config LEDS_IXP4XX + tristate "LED Support for GPIO connected LEDs on IXP4XX processors" + depends LEDS_CLASS && ARCH_IXP4XX + help + This option enables support for the LEDs connected to GPIO + outputs of the Intel IXP4XX processors. To be useful the + particular board must have LEDs and they must be connected + to the GPIO lines. If unsure, say Y. + +config LEDS_TOSA + tristate "LED Support for the Sharp SL-6000 series" + depends LEDS_CLASS && PXA_SHARPSL + help + This option enables support for the LEDs on Sharp Zaurus + SL-6000 series. + +config LEDS_TRIGGER_TIMER + tristate "LED Timer Trigger" + depends LEDS_TRIGGERS + help + This allows LEDs to be controlled by a programmable timer + via sysfs. If unsure, say Y. + +config LEDS_TRIGGER_IDE_DISK + bool "LED Timer Trigger" + depends LEDS_TRIGGERS && BLK_DEV_IDEDISK + help + This allows LEDs to be controlled by IDE disk activity. + If unsure, say Y. + +endmenu + diff --git a/trunk/drivers/leds/Makefile b/trunk/drivers/leds/Makefile new file mode 100644 index 000000000000..40699d3cabbf --- /dev/null +++ b/trunk/drivers/leds/Makefile @@ -0,0 +1,16 @@ + +# LED Core +obj-$(CONFIG_NEW_LEDS) += led-core.o +obj-$(CONFIG_LEDS_CLASS) += led-class.o +obj-$(CONFIG_LEDS_TRIGGERS) += led-triggers.o + +# LED Platform Drivers +obj-$(CONFIG_LEDS_CORGI) += leds-corgi.o +obj-$(CONFIG_LEDS_LOCOMO) += leds-locomo.o +obj-$(CONFIG_LEDS_SPITZ) += leds-spitz.o +obj-$(CONFIG_LEDS_IXP4XX) += leds-ixp4xx-gpio.o +obj-$(CONFIG_LEDS_TOSA) += leds-tosa.o + +# LED Triggers +obj-$(CONFIG_LEDS_TRIGGER_TIMER) += ledtrig-timer.o +obj-$(CONFIG_LEDS_TRIGGER_IDE_DISK) += ledtrig-ide-disk.o diff --git a/trunk/drivers/leds/led-class.c b/trunk/drivers/leds/led-class.c new file mode 100644 index 000000000000..b0b5d05fadd6 --- /dev/null +++ b/trunk/drivers/leds/led-class.c @@ -0,0 +1,167 @@ +/* + * LED Class Core + * + * Copyright (C) 2005 John Lenz + * Copyright (C) 2005-2006 Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "leds.h" + +static struct class *leds_class; + +static ssize_t led_brightness_show(struct class_device *dev, char *buf) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + ssize_t ret = 0; + + /* no lock needed for this */ + sprintf(buf, "%u\n", led_cdev->brightness); + ret = strlen(buf) + 1; + + return ret; +} + +static ssize_t led_brightness_store(struct class_device *dev, + const char *buf, size_t size) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + ssize_t ret = -EINVAL; + char *after; + unsigned long state = simple_strtoul(buf, &after, 10); + + if (after - buf > 0) { + ret = after - buf; + led_set_brightness(led_cdev, state); + } + + return ret; +} + +static CLASS_DEVICE_ATTR(brightness, 0644, led_brightness_show, + led_brightness_store); +#ifdef CONFIG_LEDS_TRIGGERS +static CLASS_DEVICE_ATTR(trigger, 0644, led_trigger_show, led_trigger_store); +#endif + +/** + * led_classdev_suspend - suspend an led_classdev. + * @led_cdev: the led_classdev to suspend. + */ +void led_classdev_suspend(struct led_classdev *led_cdev) +{ + led_cdev->flags |= LED_SUSPENDED; + led_cdev->brightness_set(led_cdev, 0); +} +EXPORT_SYMBOL_GPL(led_classdev_suspend); + +/** + * led_classdev_resume - resume an led_classdev. + * @led_cdev: the led_classdev to resume. + */ +void led_classdev_resume(struct led_classdev *led_cdev) +{ + led_cdev->brightness_set(led_cdev, led_cdev->brightness); + led_cdev->flags &= ~LED_SUSPENDED; +} +EXPORT_SYMBOL_GPL(led_classdev_resume); + +/** + * led_classdev_register - register a new object of led_classdev class. + * @dev: The device to register. + * @led_cdev: the led_classdev structure for this device. + */ +int led_classdev_register(struct device *parent, struct led_classdev *led_cdev) +{ + led_cdev->class_dev = class_device_create(leds_class, NULL, 0, + parent, "%s", led_cdev->name); + if (unlikely(IS_ERR(led_cdev->class_dev))) + return PTR_ERR(led_cdev->class_dev); + + class_set_devdata(led_cdev->class_dev, led_cdev); + + /* register the attributes */ + class_device_create_file(led_cdev->class_dev, + &class_device_attr_brightness); + + /* add to the list of leds */ + write_lock(&leds_list_lock); + list_add_tail(&led_cdev->node, &leds_list); + write_unlock(&leds_list_lock); + +#ifdef CONFIG_LEDS_TRIGGERS + rwlock_init(&led_cdev->trigger_lock); + + led_trigger_set_default(led_cdev); + + class_device_create_file(led_cdev->class_dev, + &class_device_attr_trigger); +#endif + + printk(KERN_INFO "Registered led device: %s\n", + led_cdev->class_dev->class_id); + + return 0; +} +EXPORT_SYMBOL_GPL(led_classdev_register); + +/** + * led_classdev_unregister - unregisters a object of led_properties class. + * @led_cdev: the led device to unreigister + * + * Unregisters a previously registered via led_classdev_register object. 
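 *
 * (Illustrative sketch, not taken from this patch: a typical LED driver
 * pairs the two calls around a static led_classdev, roughly
 *
 *	static struct led_classdev my_led = {
 *		.name           = "myboard:green",
 *		.brightness_set = my_led_set,
 *	};
 *
 *	err = led_classdev_register(&pdev->dev, &my_led);
 *	...
 *	led_classdev_unregister(&my_led);
 *
 * where "my_led", "myboard" and "my_led_set" are made-up names; the
 * platform drivers added later in this series follow the same shape.)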
+ */ +void led_classdev_unregister(struct led_classdev *led_cdev) +{ + class_device_remove_file(led_cdev->class_dev, + &class_device_attr_brightness); +#ifdef CONFIG_LEDS_TRIGGERS + class_device_remove_file(led_cdev->class_dev, + &class_device_attr_trigger); + write_lock(&led_cdev->trigger_lock); + if (led_cdev->trigger) + led_trigger_set(led_cdev, NULL); + write_unlock(&led_cdev->trigger_lock); +#endif + + class_device_unregister(led_cdev->class_dev); + + write_lock(&leds_list_lock); + list_del(&led_cdev->node); + write_unlock(&leds_list_lock); +} +EXPORT_SYMBOL_GPL(led_classdev_unregister); + +static int __init leds_init(void) +{ + leds_class = class_create(THIS_MODULE, "leds"); + if (IS_ERR(leds_class)) + return PTR_ERR(leds_class); + return 0; +} + +static void __exit leds_exit(void) +{ + class_destroy(leds_class); +} + +subsys_initcall(leds_init); +module_exit(leds_exit); + +MODULE_AUTHOR("John Lenz, Richard Purdie"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("LED Class Interface"); diff --git a/trunk/drivers/leds/led-core.c b/trunk/drivers/leds/led-core.c new file mode 100644 index 000000000000..fe6541326c71 --- /dev/null +++ b/trunk/drivers/leds/led-core.c @@ -0,0 +1,25 @@ +/* + * LED Class Core + * + * Copyright 2005-2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include +#include +#include +#include "leds.h" + +rwlock_t leds_list_lock = RW_LOCK_UNLOCKED; +LIST_HEAD(leds_list); + +EXPORT_SYMBOL_GPL(leds_list); +EXPORT_SYMBOL_GPL(leds_list_lock); diff --git a/trunk/drivers/leds/led-triggers.c b/trunk/drivers/leds/led-triggers.c new file mode 100644 index 000000000000..5e2cd8be1191 --- /dev/null +++ b/trunk/drivers/leds/led-triggers.c @@ -0,0 +1,239 @@ +/* + * LED Triggers Core + * + * Copyright 2005-2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "leds.h" + +/* + * Nests outside led_cdev->trigger_lock + */ +static rwlock_t triggers_list_lock = RW_LOCK_UNLOCKED; +static LIST_HEAD(trigger_list); + +ssize_t led_trigger_store(struct class_device *dev, const char *buf, + size_t count) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + char trigger_name[TRIG_NAME_MAX]; + struct led_trigger *trig; + size_t len; + + trigger_name[sizeof(trigger_name) - 1] = '\0'; + strncpy(trigger_name, buf, sizeof(trigger_name) - 1); + len = strlen(trigger_name); + + if (len && trigger_name[len - 1] == '\n') + trigger_name[len - 1] = '\0'; + + if (!strcmp(trigger_name, "none")) { + write_lock(&led_cdev->trigger_lock); + led_trigger_set(led_cdev, NULL); + write_unlock(&led_cdev->trigger_lock); + return count; + } + + read_lock(&triggers_list_lock); + list_for_each_entry(trig, &trigger_list, next_trig) { + if (!strcmp(trigger_name, trig->name)) { + write_lock(&led_cdev->trigger_lock); + led_trigger_set(led_cdev, trig); + write_unlock(&led_cdev->trigger_lock); + + read_unlock(&triggers_list_lock); + return count; + } + } + read_unlock(&triggers_list_lock); + + return -EINVAL; +} + + +ssize_t led_trigger_show(struct class_device *dev, char *buf) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + struct led_trigger *trig; + int len = 0; + + read_lock(&triggers_list_lock); + read_lock(&led_cdev->trigger_lock); + + if (!led_cdev->trigger) + len += sprintf(buf+len, "[none] "); + else + len += sprintf(buf+len, "none "); + + list_for_each_entry(trig, &trigger_list, next_trig) { + if (led_cdev->trigger && !strcmp(led_cdev->trigger->name, + trig->name)) + len += sprintf(buf+len, "[%s] ", trig->name); + else + len += sprintf(buf+len, "%s ", trig->name); + } + read_unlock(&led_cdev->trigger_lock); + read_unlock(&triggers_list_lock); + + len += sprintf(len+buf, "\n"); + return len; +} + +void led_trigger_event(struct led_trigger *trigger, + enum led_brightness brightness) +{ + struct list_head *entry; + + if (!trigger) + return; + + read_lock(&trigger->leddev_list_lock); + list_for_each(entry, &trigger->led_cdevs) { + struct led_classdev *led_cdev; + + led_cdev = list_entry(entry, struct led_classdev, trig_list); + led_set_brightness(led_cdev, brightness); + } + read_unlock(&trigger->leddev_list_lock); +} + +/* Caller must ensure led_cdev->trigger_lock held */ +void led_trigger_set(struct led_classdev *led_cdev, struct led_trigger *trigger) +{ + unsigned long flags; + + /* Remove any existing trigger */ + if (led_cdev->trigger) { + write_lock_irqsave(&led_cdev->trigger->leddev_list_lock, flags); + list_del(&led_cdev->trig_list); + write_unlock_irqrestore(&led_cdev->trigger->leddev_list_lock, flags); + if (led_cdev->trigger->deactivate) + led_cdev->trigger->deactivate(led_cdev); + } + if (trigger) { + write_lock_irqsave(&trigger->leddev_list_lock, flags); + list_add_tail(&led_cdev->trig_list, &trigger->led_cdevs); + write_unlock_irqrestore(&trigger->leddev_list_lock, flags); + if (trigger->activate) + trigger->activate(led_cdev); + } + led_cdev->trigger = trigger; +} + +void led_trigger_set_default(struct led_classdev *led_cdev) +{ + struct led_trigger *trig; + + if (!led_cdev->default_trigger) + return; + + read_lock(&triggers_list_lock); + write_lock(&led_cdev->trigger_lock); + list_for_each_entry(trig, &trigger_list, next_trig) { + if (!strcmp(led_cdev->default_trigger, trig->name)) + led_trigger_set(led_cdev, trig); + } + 
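	/*
	 * (Illustrative aside, not part of the patch: a subsystem that uses
	 * the simple-trigger interface exported by this file typically does
	 *
	 *	DEFINE_LED_TRIGGER(my_trig);
	 *	...
	 *	led_trigger_register_simple("my-event", &my_trig);
	 *	led_trigger_event(my_trig, LED_FULL);
	 *
	 * with "my_trig" and "my-event" as placeholder names; the ide-disk
	 * trigger added later in this series is a real example.)
	 */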
write_unlock(&led_cdev->trigger_lock); + read_unlock(&triggers_list_lock); +} + +int led_trigger_register(struct led_trigger *trigger) +{ + struct led_classdev *led_cdev; + + rwlock_init(&trigger->leddev_list_lock); + INIT_LIST_HEAD(&trigger->led_cdevs); + + /* Add to the list of led triggers */ + write_lock(&triggers_list_lock); + list_add_tail(&trigger->next_trig, &trigger_list); + write_unlock(&triggers_list_lock); + + /* Register with any LEDs that have this as a default trigger */ + read_lock(&leds_list_lock); + list_for_each_entry(led_cdev, &leds_list, node) { + write_lock(&led_cdev->trigger_lock); + if (!led_cdev->trigger && led_cdev->default_trigger && + !strcmp(led_cdev->default_trigger, trigger->name)) + led_trigger_set(led_cdev, trigger); + write_unlock(&led_cdev->trigger_lock); + } + read_unlock(&leds_list_lock); + + return 0; +} + +void led_trigger_register_simple(const char *name, struct led_trigger **tp) +{ + struct led_trigger *trigger; + + trigger = kzalloc(sizeof(struct led_trigger), GFP_KERNEL); + + if (trigger) { + trigger->name = name; + led_trigger_register(trigger); + } + *tp = trigger; +} + +void led_trigger_unregister(struct led_trigger *trigger) +{ + struct led_classdev *led_cdev; + + /* Remove from the list of led triggers */ + write_lock(&triggers_list_lock); + list_del(&trigger->next_trig); + write_unlock(&triggers_list_lock); + + /* Remove anyone actively using this trigger */ + read_lock(&leds_list_lock); + list_for_each_entry(led_cdev, &leds_list, node) { + write_lock(&led_cdev->trigger_lock); + if (led_cdev->trigger == trigger) + led_trigger_set(led_cdev, NULL); + write_unlock(&led_cdev->trigger_lock); + } + read_unlock(&leds_list_lock); +} + +void led_trigger_unregister_simple(struct led_trigger *trigger) +{ + led_trigger_unregister(trigger); + kfree(trigger); +} + +/* Used by LED Class */ +EXPORT_SYMBOL_GPL(led_trigger_set); +EXPORT_SYMBOL_GPL(led_trigger_set_default); +EXPORT_SYMBOL_GPL(led_trigger_show); +EXPORT_SYMBOL_GPL(led_trigger_store); + +/* LED Trigger Interface */ +EXPORT_SYMBOL_GPL(led_trigger_register); +EXPORT_SYMBOL_GPL(led_trigger_unregister); + +/* Simple LED Tigger Interface */ +EXPORT_SYMBOL_GPL(led_trigger_register_simple); +EXPORT_SYMBOL_GPL(led_trigger_unregister_simple); +EXPORT_SYMBOL_GPL(led_trigger_event); + +MODULE_AUTHOR("Richard Purdie"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("LED Triggers Core"); diff --git a/trunk/drivers/leds/leds-corgi.c b/trunk/drivers/leds/leds-corgi.c new file mode 100644 index 000000000000..bb7d84df0121 --- /dev/null +++ b/trunk/drivers/leds/leds-corgi.c @@ -0,0 +1,121 @@ +/* + * LED Triggers Core + * + * Copyright 2005-2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static void corgiled_amber_set(struct led_classdev *led_cdev, enum led_brightness value) +{ + if (value) + GPSR0 = GPIO_bit(CORGI_GPIO_LED_ORANGE); + else + GPCR0 = GPIO_bit(CORGI_GPIO_LED_ORANGE); +} + +static void corgiled_green_set(struct led_classdev *led_cdev, enum led_brightness value) +{ + if (value) + set_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_LED_GREEN); + else + reset_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_LED_GREEN); +} + +static struct led_classdev corgi_amber_led = { + .name = "corgi:amber", + .default_trigger = "sharpsl-charge", + .brightness_set = corgiled_amber_set, +}; + +static struct led_classdev corgi_green_led = { + .name = "corgi:green", + .default_trigger = "nand-disk", + .brightness_set = corgiled_green_set, +}; + +#ifdef CONFIG_PM +static int corgiled_suspend(struct platform_device *dev, pm_message_t state) +{ +#ifdef CONFIG_LEDS_TRIGGERS + if (corgi_amber_led.trigger && strcmp(corgi_amber_led.trigger->name, "sharpsl-charge")) +#endif + led_classdev_suspend(&corgi_amber_led); + led_classdev_suspend(&corgi_green_led); + return 0; +} + +static int corgiled_resume(struct platform_device *dev) +{ + led_classdev_resume(&corgi_amber_led); + led_classdev_resume(&corgi_green_led); + return 0; +} +#endif + +static int corgiled_probe(struct platform_device *pdev) +{ + int ret; + + ret = led_classdev_register(&pdev->dev, &corgi_amber_led); + if (ret < 0) + return ret; + + ret = led_classdev_register(&pdev->dev, &corgi_green_led); + if (ret < 0) + led_classdev_unregister(&corgi_amber_led); + + return ret; +} + +static int corgiled_remove(struct platform_device *pdev) +{ + led_classdev_unregister(&corgi_amber_led); + led_classdev_unregister(&corgi_green_led); + return 0; +} + +static struct platform_driver corgiled_driver = { + .probe = corgiled_probe, + .remove = corgiled_remove, +#ifdef CONFIG_PM + .suspend = corgiled_suspend, + .resume = corgiled_resume, +#endif + .driver = { + .name = "corgi-led", + }, +}; + +static int __init corgiled_init(void) +{ + return platform_driver_register(&corgiled_driver); +} + +static void __exit corgiled_exit(void) +{ + platform_driver_unregister(&corgiled_driver); +} + +module_init(corgiled_init); +module_exit(corgiled_exit); + +MODULE_AUTHOR("Richard Purdie "); +MODULE_DESCRIPTION("Corgi LED driver"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/leds/leds-ixp4xx-gpio.c b/trunk/drivers/leds/leds-ixp4xx-gpio.c new file mode 100644 index 000000000000..30ced150e4cf --- /dev/null +++ b/trunk/drivers/leds/leds-ixp4xx-gpio.c @@ -0,0 +1,215 @@ +/* + * IXP4XX GPIO driver LED driver + * + * Author: John Bowler + * + * Copyright (c) 2006 John Bowler + * + * Permission is hereby granted, free of charge, to any + * person obtaining a copy of this software and associated + * documentation files (the "Software"), to deal in the + * Software without restriction, including without + * limitation the rights to use, copy, modify, merge, + * publish, distribute, sublicense, and/or sell copies of + * the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the + * following conditions: + * + * The above copyright notice and this permission notice + * shall be included in all copies or substantial portions + * of the Software. 
+ * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF + * ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED + * TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A + * PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT + * SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR + * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + */ + +#include +#include +#include +#include +#include +#include +#include + +extern spinlock_t gpio_lock; + +/* Up to 16 gpio lines are possible. */ +#define GPIO_MAX 16 +static struct ixp4xxgpioled_device { + struct led_classdev ancestor; + int flags; +} ixp4xxgpioled_devices[GPIO_MAX]; + +void ixp4xxgpioled_brightness_set(struct led_classdev *pled, + enum led_brightness value) +{ + const struct ixp4xxgpioled_device *const ixp4xx_dev = + container_of(pled, struct ixp4xxgpioled_device, ancestor); + const u32 gpio_pin = ixp4xx_dev - ixp4xxgpioled_devices; + + if (gpio_pin < GPIO_MAX && ixp4xx_dev->ancestor.name != 0) { + /* Set or clear the 'gpio_pin' bit according to the style + * and the required setting (value > 0 == on) + */ + const int gpio_value = + (value > 0) == (ixp4xx_dev->flags != IXP4XX_GPIO_LOW) ? + IXP4XX_GPIO_HIGH : IXP4XX_GPIO_LOW; + + { + unsigned long flags; + spin_lock_irqsave(&gpio_lock, flags); + gpio_line_set(gpio_pin, gpio_value); + spin_unlock_irqrestore(&gpio_lock, flags); + } + } +} + +/* LEDs are described in resources, the following iterates over the valid + * LED resources. + */ +#define for_all_leds(i, pdev) \ + for (i=0; inum_resources; ++i) \ + if (pdev->resource[i].start < GPIO_MAX && \ + pdev->resource[i].name != 0) + +/* The following applies 'operation' to each LED from the given platform, + * the function always returns 0 to allow tail call elimination. + */ +static int apply_to_all_leds(struct platform_device *pdev, + void (*operation)(struct led_classdev *pled)) +{ + int i; + + for_all_leds(i, pdev) + operation(&ixp4xxgpioled_devices[pdev->resource[i].start].ancestor); + return 0; +} + +#ifdef CONFIG_PM +static int ixp4xxgpioled_suspend(struct platform_device *pdev, + pm_message_t state) +{ + return apply_to_all_leds(pdev, led_classdev_suspend); +} + +static int ixp4xxgpioled_resume(struct platform_device *pdev) +{ + return apply_to_all_leds(pdev, led_classdev_resume); +} +#endif + +static void ixp4xxgpioled_remove_one_led(struct led_classdev *pled) +{ + led_classdev_unregister(pled); + pled->name = 0; +} + +static int ixp4xxgpioled_remove(struct platform_device *pdev) +{ + return apply_to_all_leds(pdev, ixp4xxgpioled_remove_one_led); +} + +static int ixp4xxgpioled_probe(struct platform_device *pdev) +{ + /* The board level has to tell the driver where the + * LEDs are connected - there is no way to find out + * electrically. It must also say whether the GPIO + * lines are active high or active low. + * + * To do this read the num_resources (the number of + * LEDs) and the struct resource (the data for each + * LED). The name comes from the resource, and it + * isn't copied. 
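 *
 * (Illustrative only, inferred from the loop below rather than taken
 * from a real board file: the platform data might look like
 *
 *	static struct resource my_led_resources[] = {
 *		{ .start = 2, .name = "myboard:green", .flags = IXP4XX_GPIO_HIGH },
 *		{ .start = 3, .name = "myboard:red",   .flags = IXP4XX_GPIO_LOW  },
 *	};
 *
 * i.e. the GPIO line number in .start, the LED name in .name, and the
 * active level encoded in the IORESOURCE_BITS portion of .flags.  The
 * pin numbers and names here are invented.)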
+ */ + int i; + + for_all_leds(i, pdev) { + const u8 gpio_pin = pdev->resource[i].start; + int rc; + + if (ixp4xxgpioled_devices[gpio_pin].ancestor.name == 0) { + unsigned long flags; + + spin_lock_irqsave(&gpio_lock, flags); + gpio_line_config(gpio_pin, IXP4XX_GPIO_OUT); + /* The config can, apparently, reset the state, + * I suspect the gpio line may be an input and + * the config may cause the line to be latched, + * so the setting depends on how the LED is + * connected to the line (which affects how it + * floats if not driven). + */ + gpio_line_set(gpio_pin, IXP4XX_GPIO_HIGH); + spin_unlock_irqrestore(&gpio_lock, flags); + + ixp4xxgpioled_devices[gpio_pin].flags = + pdev->resource[i].flags & IORESOURCE_BITS; + + ixp4xxgpioled_devices[gpio_pin].ancestor.name = + pdev->resource[i].name; + + /* This is how a board manufacturer makes the LED + * come on on reset - the GPIO line will be high, so + * make the LED light when the line is low... + */ + if (ixp4xxgpioled_devices[gpio_pin].flags != IXP4XX_GPIO_LOW) + ixp4xxgpioled_devices[gpio_pin].ancestor.brightness = 100; + else + ixp4xxgpioled_devices[gpio_pin].ancestor.brightness = 0; + + ixp4xxgpioled_devices[gpio_pin].ancestor.flags = 0; + + ixp4xxgpioled_devices[gpio_pin].ancestor.brightness_set = + ixp4xxgpioled_brightness_set; + + ixp4xxgpioled_devices[gpio_pin].ancestor.default_trigger = 0; + } + + rc = led_classdev_register(&pdev->dev, + &ixp4xxgpioled_devices[gpio_pin].ancestor); + if (rc < 0) { + ixp4xxgpioled_devices[gpio_pin].ancestor.name = 0; + ixp4xxgpioled_remove(pdev); + return rc; + } + } + + return 0; +} + +static struct platform_driver ixp4xxgpioled_driver = { + .probe = ixp4xxgpioled_probe, + .remove = ixp4xxgpioled_remove, +#ifdef CONFIG_PM + .suspend = ixp4xxgpioled_suspend, + .resume = ixp4xxgpioled_resume, +#endif + .driver = { + .name = "IXP4XX-GPIO-LED", + }, +}; + +static int __init ixp4xxgpioled_init(void) +{ + return platform_driver_register(&ixp4xxgpioled_driver); +} + +static void __exit ixp4xxgpioled_exit(void) +{ + platform_driver_unregister(&ixp4xxgpioled_driver); +} + +module_init(ixp4xxgpioled_init); +module_exit(ixp4xxgpioled_exit); + +MODULE_AUTHOR("John Bowler "); +MODULE_DESCRIPTION("IXP4XX GPIO LED driver"); +MODULE_LICENSE("Dual MIT/GPL"); diff --git a/trunk/drivers/leds/leds-locomo.c b/trunk/drivers/leds/leds-locomo.c new file mode 100644 index 000000000000..749a86c2adb6 --- /dev/null +++ b/trunk/drivers/leds/leds-locomo.c @@ -0,0 +1,95 @@ +/* + * linux/drivers/leds/locomo.c + * + * Copyright (C) 2005 John Lenz + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include +#include +#include +#include + +#include +#include + +static void locomoled_brightness_set(struct led_classdev *led_cdev, + enum led_brightness value, int offset) +{ + struct locomo_dev *locomo_dev = LOCOMO_DEV(led_cdev->class_dev->dev); + unsigned long flags; + + local_irq_save(flags); + if (value) + locomo_writel(LOCOMO_LPT_TOFH, locomo_dev->mapbase + offset); + else + locomo_writel(LOCOMO_LPT_TOFL, locomo_dev->mapbase + offset); + local_irq_restore(flags); +} + +static void locomoled_brightness_set0(struct led_classdev *led_cdev, + enum led_brightness value) +{ + locomoled_brightness_set(led_cdev, value, LOCOMO_LPT0); +} + +static void locomoled_brightness_set1(struct led_classdev *led_cdev, + enum led_brightness value) +{ + locomoled_brightness_set(led_cdev, value, LOCOMO_LPT1); +} + +static struct led_classdev locomo_led0 = { + .name = "locomo:amber", + .brightness_set = locomoled_brightness_set0, +}; + +static struct led_classdev locomo_led1 = { + .name = "locomo:green", + .brightness_set = locomoled_brightness_set1, +}; + +static int locomoled_probe(struct locomo_dev *ldev) +{ + int ret; + + ret = led_classdev_register(&ldev->dev, &locomo_led0); + if (ret < 0) + return ret; + + ret = led_classdev_register(&ldev->dev, &locomo_led1); + if (ret < 0) + led_classdev_unregister(&locomo_led0); + + return ret; +} + +static int locomoled_remove(struct locomo_dev *dev) +{ + led_classdev_unregister(&locomo_led0); + led_classdev_unregister(&locomo_led1); + return 0; +} + +static struct locomo_driver locomoled_driver = { + .drv = { + .name = "locomoled" + }, + .devid = LOCOMO_DEVID_LED, + .probe = locomoled_probe, + .remove = locomoled_remove, +}; + +static int __init locomoled_init(void) +{ + return locomo_driver_register(&locomoled_driver); +} +module_init(locomoled_init); + +MODULE_AUTHOR("John Lenz "); +MODULE_DESCRIPTION("Locomo LED driver"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/leds/leds-spitz.c b/trunk/drivers/leds/leds-spitz.c new file mode 100644 index 000000000000..65bbef4a5e09 --- /dev/null +++ b/trunk/drivers/leds/leds-spitz.c @@ -0,0 +1,125 @@ +/* + * LED Triggers Core + * + * Copyright 2005-2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static void spitzled_amber_set(struct led_classdev *led_cdev, enum led_brightness value) +{ + if (value) + set_scoop_gpio(&spitzscoop_device.dev, SPITZ_SCP_LED_ORANGE); + else + reset_scoop_gpio(&spitzscoop_device.dev, SPITZ_SCP_LED_ORANGE); +} + +static void spitzled_green_set(struct led_classdev *led_cdev, enum led_brightness value) +{ + if (value) + set_scoop_gpio(&spitzscoop_device.dev, SPITZ_SCP_LED_GREEN); + else + reset_scoop_gpio(&spitzscoop_device.dev, SPITZ_SCP_LED_GREEN); +} + +static struct led_classdev spitz_amber_led = { + .name = "spitz:amber", + .default_trigger = "sharpsl-charge", + .brightness_set = spitzled_amber_set, +}; + +static struct led_classdev spitz_green_led = { + .name = "spitz:green", + .default_trigger = "ide-disk", + .brightness_set = spitzled_green_set, +}; + +#ifdef CONFIG_PM +static int spitzled_suspend(struct platform_device *dev, pm_message_t state) +{ +#ifdef CONFIG_LEDS_TRIGGERS + if (spitz_amber_led.trigger && strcmp(spitz_amber_led.trigger->name, "sharpsl-charge")) +#endif + led_classdev_suspend(&spitz_amber_led); + led_classdev_suspend(&spitz_green_led); + return 0; +} + +static int spitzled_resume(struct platform_device *dev) +{ + led_classdev_resume(&spitz_amber_led); + led_classdev_resume(&spitz_green_led); + return 0; +} +#endif + +static int spitzled_probe(struct platform_device *pdev) +{ + int ret; + + if (machine_is_akita()) + spitz_green_led.default_trigger = "nand-disk"; + + ret = led_classdev_register(&pdev->dev, &spitz_amber_led); + if (ret < 0) + return ret; + + ret = led_classdev_register(&pdev->dev, &spitz_green_led); + if (ret < 0) + led_classdev_unregister(&spitz_amber_led); + + return ret; +} + +static int spitzled_remove(struct platform_device *pdev) +{ + led_classdev_unregister(&spitz_amber_led); + led_classdev_unregister(&spitz_green_led); + + return 0; +} + +static struct platform_driver spitzled_driver = { + .probe = spitzled_probe, + .remove = spitzled_remove, +#ifdef CONFIG_PM + .suspend = spitzled_suspend, + .resume = spitzled_resume, +#endif + .driver = { + .name = "spitz-led", + }, +}; + +static int __init spitzled_init(void) +{ + return platform_driver_register(&spitzled_driver); +} + +static void __exit spitzled_exit(void) +{ + platform_driver_unregister(&spitzled_driver); +} + +module_init(spitzled_init); +module_exit(spitzled_exit); + +MODULE_AUTHOR("Richard Purdie "); +MODULE_DESCRIPTION("Spitz LED driver"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/leds/leds-tosa.c b/trunk/drivers/leds/leds-tosa.c new file mode 100644 index 000000000000..c9e8cc1ec481 --- /dev/null +++ b/trunk/drivers/leds/leds-tosa.c @@ -0,0 +1,131 @@ +/* + * LED Triggers Core + * + * Copyright 2005 Dirk Opfer + * + * Author: Dirk Opfer + * based on spitz.c + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static void tosaled_amber_set(struct led_classdev *led_cdev, + enum led_brightness value) +{ + if (value) + set_scoop_gpio(&tosascoop_jc_device.dev, + TOSA_SCOOP_JC_CHRG_ERR_LED); + else + reset_scoop_gpio(&tosascoop_jc_device.dev, + TOSA_SCOOP_JC_CHRG_ERR_LED); +} + +static void tosaled_green_set(struct led_classdev *led_cdev, + enum led_brightness value) +{ + if (value) + set_scoop_gpio(&tosascoop_jc_device.dev, + TOSA_SCOOP_JC_NOTE_LED); + else + reset_scoop_gpio(&tosascoop_jc_device.dev, + TOSA_SCOOP_JC_NOTE_LED); +} + +static struct led_classdev tosa_amber_led = { + .name = "tosa:amber", + .default_trigger = "sharpsl-charge", + .brightness_set = tosaled_amber_set, +}; + +static struct led_classdev tosa_green_led = { + .name = "tosa:green", + .default_trigger = "nand-disk", + .brightness_set = tosaled_green_set, +}; + +#ifdef CONFIG_PM +static int tosaled_suspend(struct platform_device *dev, pm_message_t state) +{ +#ifdef CONFIG_LEDS_TRIGGERS + if (tosa_amber_led.trigger && strcmp(tosa_amber_led.trigger->name, + "sharpsl-charge")) +#endif + led_classdev_suspend(&tosa_amber_led); + led_classdev_suspend(&tosa_green_led); + return 0; +} + +static int tosaled_resume(struct platform_device *dev) +{ + led_classdev_resume(&tosa_amber_led); + led_classdev_resume(&tosa_green_led); + return 0; +} +#else +#define tosaled_suspend NULL +#define tosaled_resume NULL +#endif + +static int tosaled_probe(struct platform_device *pdev) +{ + int ret; + + ret = led_classdev_register(&pdev->dev, &tosa_amber_led); + if (ret < 0) + return ret; + + ret = led_classdev_register(&pdev->dev, &tosa_green_led); + if (ret < 0) + led_classdev_unregister(&tosa_amber_led); + + return ret; +} + +static int tosaled_remove(struct platform_device *pdev) +{ + led_classdev_unregister(&tosa_amber_led); + led_classdev_unregister(&tosa_green_led); + + return 0; +} + +static struct platform_driver tosaled_driver = { + .probe = tosaled_probe, + .remove = tosaled_remove, + .suspend = tosaled_suspend, + .resume = tosaled_resume, + .driver = { + .name = "tosa-led", + }, +}; + +static int __init tosaled_init(void) +{ + return platform_driver_register(&tosaled_driver); +} + +static void __exit tosaled_exit(void) +{ + platform_driver_unregister(&tosaled_driver); +} + +module_init(tosaled_init); +module_exit(tosaled_exit); + +MODULE_AUTHOR("Dirk Opfer "); +MODULE_DESCRIPTION("Tosa LED driver"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/leds/leds.h b/trunk/drivers/leds/leds.h new file mode 100644 index 000000000000..a715c4ed93ff --- /dev/null +++ b/trunk/drivers/leds/leds.h @@ -0,0 +1,44 @@ +/* + * LED Core + * + * Copyright 2005 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ +#ifndef __LEDS_H_INCLUDED +#define __LEDS_H_INCLUDED + +#include + +static inline void led_set_brightness(struct led_classdev *led_cdev, + enum led_brightness value) +{ + if (value > LED_FULL) + value = LED_FULL; + led_cdev->brightness = value; + if (!(led_cdev->flags & LED_SUSPENDED)) + led_cdev->brightness_set(led_cdev, value); +} + +extern rwlock_t leds_list_lock; +extern struct list_head leds_list; + +#ifdef CONFIG_LEDS_TRIGGERS +void led_trigger_set_default(struct led_classdev *led_cdev); +void led_trigger_set(struct led_classdev *led_cdev, + struct led_trigger *trigger); +#else +#define led_trigger_set_default(x) do {} while(0) +#define led_trigger_set(x, y) do {} while(0) +#endif + +ssize_t led_trigger_store(struct class_device *dev, const char *buf, + size_t count); +ssize_t led_trigger_show(struct class_device *dev, char *buf); + +#endif /* __LEDS_H_INCLUDED */ diff --git a/trunk/drivers/leds/ledtrig-ide-disk.c b/trunk/drivers/leds/ledtrig-ide-disk.c new file mode 100644 index 000000000000..fa651886ab4f --- /dev/null +++ b/trunk/drivers/leds/ledtrig-ide-disk.c @@ -0,0 +1,62 @@ +/* + * LED IDE-Disk Activity Trigger + * + * Copyright 2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include +#include +#include + +static void ledtrig_ide_timerfunc(unsigned long data); + +DEFINE_LED_TRIGGER(ledtrig_ide); +static DEFINE_TIMER(ledtrig_ide_timer, ledtrig_ide_timerfunc, 0, 0); +static int ide_activity; +static int ide_lastactivity; + +void ledtrig_ide_activity(void) +{ + ide_activity++; + if (!timer_pending(&ledtrig_ide_timer)) + mod_timer(&ledtrig_ide_timer, jiffies + msecs_to_jiffies(10)); +} +EXPORT_SYMBOL(ledtrig_ide_activity); + +static void ledtrig_ide_timerfunc(unsigned long data) +{ + if (ide_lastactivity != ide_activity) { + ide_lastactivity = ide_activity; + led_trigger_event(ledtrig_ide, LED_FULL); + mod_timer(&ledtrig_ide_timer, jiffies + msecs_to_jiffies(10)); + } else { + led_trigger_event(ledtrig_ide, LED_OFF); + } +} + +static int __init ledtrig_ide_init(void) +{ + led_trigger_register_simple("ide-disk", &ledtrig_ide); + return 0; +} + +static void __exit ledtrig_ide_exit(void) +{ + led_trigger_unregister_simple(ledtrig_ide); +} + +module_init(ledtrig_ide_init); +module_exit(ledtrig_ide_exit); + +MODULE_AUTHOR("Richard Purdie "); +MODULE_DESCRIPTION("LED IDE Disk Activity Trigger"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/leds/ledtrig-timer.c b/trunk/drivers/leds/ledtrig-timer.c new file mode 100644 index 000000000000..f484b5d6dbf8 --- /dev/null +++ b/trunk/drivers/leds/ledtrig-timer.c @@ -0,0 +1,170 @@ +/* + * LED Kernel Timer Trigger + * + * Copyright 2005-2006 Openedhand Ltd. + * + * Author: Richard Purdie + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "leds.h" + +struct timer_trig_data { + unsigned long delay_on; /* milliseconds on */ + unsigned long delay_off; /* milliseconds off */ + struct timer_list timer; +}; + +static void led_timer_function(unsigned long data) +{ + struct led_classdev *led_cdev = (struct led_classdev *) data; + struct timer_trig_data *timer_data = led_cdev->trigger_data; + unsigned long brightness = LED_OFF; + unsigned long delay = timer_data->delay_off; + + if (!timer_data->delay_on || !timer_data->delay_off) { + led_set_brightness(led_cdev, LED_OFF); + return; + } + + if (!led_cdev->brightness) { + brightness = LED_FULL; + delay = timer_data->delay_on; + } + + led_set_brightness(led_cdev, brightness); + + mod_timer(&timer_data->timer, jiffies + msecs_to_jiffies(delay)); +} + +static ssize_t led_delay_on_show(struct class_device *dev, char *buf) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + struct timer_trig_data *timer_data = led_cdev->trigger_data; + + sprintf(buf, "%lu\n", timer_data->delay_on); + + return strlen(buf) + 1; +} + +static ssize_t led_delay_on_store(struct class_device *dev, const char *buf, + size_t size) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + struct timer_trig_data *timer_data = led_cdev->trigger_data; + int ret = -EINVAL; + char *after; + unsigned long state = simple_strtoul(buf, &after, 10); + + if (after - buf > 0) { + timer_data->delay_on = state; + mod_timer(&timer_data->timer, jiffies + 1); + ret = after - buf; + } + + return ret; +} + +static ssize_t led_delay_off_show(struct class_device *dev, char *buf) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + struct timer_trig_data *timer_data = led_cdev->trigger_data; + + sprintf(buf, "%lu\n", timer_data->delay_off); + + return strlen(buf) + 1; +} + +static ssize_t led_delay_off_store(struct class_device *dev, const char *buf, + size_t size) +{ + struct led_classdev *led_cdev = class_get_devdata(dev); + struct timer_trig_data *timer_data = led_cdev->trigger_data; + int ret = -EINVAL; + char *after; + unsigned long state = simple_strtoul(buf, &after, 10); + + if (after - buf > 0) { + timer_data->delay_off = state; + mod_timer(&timer_data->timer, jiffies + 1); + ret = after - buf; + } + + return ret; +} + +static CLASS_DEVICE_ATTR(delay_on, 0644, led_delay_on_show, + led_delay_on_store); +static CLASS_DEVICE_ATTR(delay_off, 0644, led_delay_off_show, + led_delay_off_store); + +static void timer_trig_activate(struct led_classdev *led_cdev) +{ + struct timer_trig_data *timer_data; + + timer_data = kzalloc(sizeof(struct timer_trig_data), GFP_KERNEL); + if (!timer_data) + return; + + led_cdev->trigger_data = timer_data; + + init_timer(&timer_data->timer); + timer_data->timer.function = led_timer_function; + timer_data->timer.data = (unsigned long) led_cdev; + + class_device_create_file(led_cdev->class_dev, + &class_device_attr_delay_on); + class_device_create_file(led_cdev->class_dev, + &class_device_attr_delay_off); +} + +static void timer_trig_deactivate(struct led_classdev *led_cdev) +{ + struct timer_trig_data *timer_data = led_cdev->trigger_data; + + if (timer_data) { + class_device_remove_file(led_cdev->class_dev, + &class_device_attr_delay_on); + class_device_remove_file(led_cdev->class_dev, + &class_device_attr_delay_off); + del_timer_sync(&timer_data->timer); + kfree(timer_data); + } +} + +static struct led_trigger timer_led_trigger = { + .name = "timer", + .activate 
= timer_trig_activate, + .deactivate = timer_trig_deactivate, +}; + +static int __init timer_trig_init(void) +{ + return led_trigger_register(&timer_led_trigger); +} + +static void __exit timer_trig_exit(void) +{ + led_trigger_unregister(&timer_led_trigger); +} + +module_init(timer_trig_init); +module_exit(timer_trig_exit); + +MODULE_AUTHOR("Richard Purdie "); +MODULE_DESCRIPTION("Timer LED trigger"); +MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/md/md.c b/trunk/drivers/md/md.c index 039e071c1007..1ed5152db450 100644 --- a/trunk/drivers/md/md.c +++ b/trunk/drivers/md/md.c @@ -215,13 +215,11 @@ static void mddev_put(mddev_t *mddev) return; if (!mddev->raid_disks && list_empty(&mddev->disks)) { list_del(&mddev->all_mddevs); - /* that blocks */ + spin_unlock(&all_mddevs_lock); blk_cleanup_queue(mddev->queue); - /* that also blocks */ kobject_unregister(&mddev->kobj); - /* result blows... */ - } - spin_unlock(&all_mddevs_lock); + } else + spin_unlock(&all_mddevs_lock); } static mddev_t * mddev_find(dev_t unit) diff --git a/trunk/drivers/md/raid1.c b/trunk/drivers/md/raid1.c index 3cb0872a845d..9b374c91db66 100644 --- a/trunk/drivers/md/raid1.c +++ b/trunk/drivers/md/raid1.c @@ -1135,8 +1135,19 @@ static int end_sync_write(struct bio *bio, unsigned int bytes_done, int error) mirror = i; break; } - if (!uptodate) + if (!uptodate) { + int sync_blocks = 0; + sector_t s = r1_bio->sector; + long sectors_to_go = r1_bio->sectors; + /* make sure these bits doesn't get cleared. */ + do { + bitmap_end_sync(mddev->bitmap, r1_bio->sector, + &sync_blocks, 1); + s += sync_blocks; + sectors_to_go -= sync_blocks; + } while (sectors_to_go > 0); md_error(mddev, conf->mirrors[mirror].rdev); + } update_head_pos(mirror, r1_bio); diff --git a/trunk/drivers/md/raid6main.c b/trunk/drivers/md/raid6main.c index 6df4930fddec..ab64b37e4996 100644 --- a/trunk/drivers/md/raid6main.c +++ b/trunk/drivers/md/raid6main.c @@ -2151,6 +2151,8 @@ static int run(mddev_t *mddev) } /* Ok, everything is just fine now */ + sysfs_create_group(&mddev->kobj, &raid6_attrs_group); + mddev->array_size = mddev->size * (mddev->raid_disks - 2); mddev->queue->unplug_fn = raid6_unplug_device; diff --git a/trunk/drivers/media/video/cpia_pp.c b/trunk/drivers/media/video/cpia_pp.c index 3021f21aae36..0b00e6027dfb 100644 --- a/trunk/drivers/media/video/cpia_pp.c +++ b/trunk/drivers/media/video/cpia_pp.c @@ -873,7 +873,7 @@ static int __init cpia_pp_setup(char *str) parport_nr[parport_ptr++] = PPCPIA_PARPORT_NONE; } - return 0; + return 1; } __setup("cpia_pp=", cpia_pp_setup); diff --git a/trunk/drivers/mmc/Kconfig b/trunk/drivers/mmc/Kconfig index 3f5d77f633fa..7cc162e8978b 100644 --- a/trunk/drivers/mmc/Kconfig +++ b/trunk/drivers/mmc/Kconfig @@ -60,6 +60,17 @@ config MMC_SDHCI If unsure, say N. +config MMC_OMAP + tristate "TI OMAP Multimedia Card Interface support" + depends on ARCH_OMAP && MMC + select TPS65010 if MACH_OMAP_H2 + help + This selects the TI OMAP Multimedia card Interface. + If you have an OMAP board with a Multimedia Card slot, + say Y or M here. + + If unsure, say N. 
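(Illustrative note, not part of the patch: the MMC Makefile and driver
hunks that follow drop the subsystem's private DBG()/DEBUG() wrappers in
favour of the generic pr_debug(), with "-DDEBUG" added to EXTRA_CFLAGS
when CONFIG_MMC_DEBUG=y.  pr_debug() in this era of the kernel is roughly

	#ifdef DEBUG
	#define pr_debug(fmt, arg...)	printk(KERN_DEBUG fmt, ##arg)
	#else
	#define pr_debug(fmt, arg...)	do { } while (0)
	#endif

so defining DEBUG per directory gives the same conditional output as the
old hand-rolled macros.)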
+ config MMC_WBSD tristate "Winbond W83L51xD SD/MMC Card Interface support" depends on MMC && ISA_DMA_API diff --git a/trunk/drivers/mmc/Makefile b/trunk/drivers/mmc/Makefile index 769d545284a4..c7c34aadfc92 100644 --- a/trunk/drivers/mmc/Makefile +++ b/trunk/drivers/mmc/Makefile @@ -20,5 +20,10 @@ obj-$(CONFIG_MMC_PXA) += pxamci.o obj-$(CONFIG_MMC_SDHCI) += sdhci.o obj-$(CONFIG_MMC_WBSD) += wbsd.o obj-$(CONFIG_MMC_AU1X) += au1xmmc.o +obj-$(CONFIG_MMC_OMAP) += omap.o mmc_core-y := mmc.o mmc_queue.o mmc_sysfs.o + +ifeq ($(CONFIG_MMC_DEBUG),y) +EXTRA_CFLAGS += -DDEBUG +endif diff --git a/trunk/drivers/mmc/au1xmmc.c b/trunk/drivers/mmc/au1xmmc.c index 85e89c77bdea..c0326bbc5f28 100644 --- a/trunk/drivers/mmc/au1xmmc.c +++ b/trunk/drivers/mmc/au1xmmc.c @@ -56,12 +56,11 @@ #define DRIVER_NAME "au1xxx-mmc" /* Set this to enable special debugging macros */ -/* #define MMC_DEBUG */ -#ifdef MMC_DEBUG -#define DEBUG(fmt, idx, args...) printk("au1xx(%d): DEBUG: " fmt, idx, ##args) +#ifdef DEBUG +#define DBG(fmt, idx, args...) printk("au1xx(%d): DEBUG: " fmt, idx, ##args) #else -#define DEBUG(fmt, idx, args...) +#define DBG(fmt, idx, args...) #endif const struct { @@ -424,18 +423,18 @@ static void au1xmmc_receive_pio(struct au1xmmc_host *host) break; if (status & SD_STATUS_RC) { - DEBUG("RX CRC Error [%d + %d].\n", host->id, + DBG("RX CRC Error [%d + %d].\n", host->id, host->pio.len, count); break; } if (status & SD_STATUS_RO) { - DEBUG("RX Overrun [%d + %d]\n", host->id, + DBG("RX Overrun [%d + %d]\n", host->id, host->pio.len, count); break; } else if (status & SD_STATUS_RU) { - DEBUG("RX Underrun [%d + %d]\n", host->id, + DBG("RX Underrun [%d + %d]\n", host->id, host->pio.len, count); break; } @@ -721,7 +720,7 @@ static void au1xmmc_set_ios(struct mmc_host* mmc, struct mmc_ios* ios) { struct au1xmmc_host *host = mmc_priv(mmc); - DEBUG("set_ios (power=%u, clock=%uHz, vdd=%u, mode=%u)\n", + DBG("set_ios (power=%u, clock=%uHz, vdd=%u, mode=%u)\n", host->id, ios->power_mode, ios->clock, ios->vdd, ios->bus_mode); @@ -810,7 +809,7 @@ static irqreturn_t au1xmmc_irq(int irq, void *dev_id, struct pt_regs *regs) au1xmmc_receive_pio(host); } else if (status & 0x203FBC70) { - DEBUG("Unhandled status %8.8x\n", host->id, status); + DBG("Unhandled status %8.8x\n", host->id, status); handled = 0; } @@ -839,7 +838,7 @@ static void au1xmmc_poll_event(unsigned long arg) if (host->mrq != NULL) { u32 status = au_readl(HOST_STATUS(host)); - DEBUG("PENDING - %8.8x\n", host->id, status); + DBG("PENDING - %8.8x\n", host->id, status); } mod_timer(&host->timer, jiffies + AU1XMMC_DETECT_TIMEOUT); diff --git a/trunk/drivers/mmc/mmc.c b/trunk/drivers/mmc/mmc.c index 1888060c5e0c..da6ddd910fc5 100644 --- a/trunk/drivers/mmc/mmc.c +++ b/trunk/drivers/mmc/mmc.c @@ -27,12 +27,6 @@ #include "mmc.h" -#ifdef CONFIG_MMC_DEBUG -#define DBG(x...) printk(KERN_DEBUG x) -#else -#define DBG(x...) 
do { } while (0) -#endif - #define CMD_RETRIES 3 /* @@ -77,8 +71,9 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq) { struct mmc_command *cmd = mrq->cmd; int err = mrq->cmd->error; - DBG("MMC: req done (%02x): %d: %08x %08x %08x %08x\n", cmd->opcode, - err, cmd->resp[0], cmd->resp[1], cmd->resp[2], cmd->resp[3]); + pr_debug("MMC: req done (%02x): %d: %08x %08x %08x %08x\n", + cmd->opcode, err, cmd->resp[0], cmd->resp[1], + cmd->resp[2], cmd->resp[3]); if (err && cmd->retries) { cmd->retries--; @@ -102,8 +97,8 @@ EXPORT_SYMBOL(mmc_request_done); void mmc_start_request(struct mmc_host *host, struct mmc_request *mrq) { - DBG("MMC: starting cmd %02x arg %08x flags %08x\n", - mrq->cmd->opcode, mrq->cmd->arg, mrq->cmd->flags); + pr_debug("MMC: starting cmd %02x arg %08x flags %08x\n", + mrq->cmd->opcode, mrq->cmd->arg, mrq->cmd->flags); WARN_ON(host->card_busy == NULL); @@ -976,8 +971,8 @@ static unsigned int mmc_calculate_clock(struct mmc_host *host) if (!mmc_card_dead(card) && max_dtr > card->csd.max_dtr) max_dtr = card->csd.max_dtr; - DBG("MMC: selected %d.%03dMHz transfer rate\n", - max_dtr / 1000000, (max_dtr / 1000) % 1000); + pr_debug("MMC: selected %d.%03dMHz transfer rate\n", + max_dtr / 1000000, (max_dtr / 1000) % 1000); return max_dtr; } diff --git a/trunk/drivers/mmc/mmci.c b/trunk/drivers/mmc/mmci.c index 9fef29d978b5..df7e861e2fc7 100644 --- a/trunk/drivers/mmc/mmci.c +++ b/trunk/drivers/mmc/mmci.c @@ -33,12 +33,8 @@ #define DRIVER_NAME "mmci-pl18x" -#ifdef CONFIG_MMC_DEBUG #define DBG(host,fmt,args...) \ pr_debug("%s: %s: " fmt, mmc_hostname(host->mmc), __func__ , args) -#else -#define DBG(host,fmt,args...) do { } while (0) -#endif static unsigned int fmax = 515633; diff --git a/trunk/drivers/mmc/omap.c b/trunk/drivers/mmc/omap.c new file mode 100644 index 000000000000..becb3c68c34d --- /dev/null +++ b/trunk/drivers/mmc/omap.c @@ -0,0 +1,1226 @@ +/* + * linux/drivers/media/mmc/omap.c + * + * Copyright (C) 2004 Nokia Corporation + * Written by Tuukka Tikkanen and Juha Yrjölä + * Misc hacks here and there by Tony Lindgren + * Other hacks (DMA, SD, etc) by David Brownell + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "omap.h" + +#define DRIVER_NAME "mmci-omap" +#define RSP_TYPE(x) ((x) & ~(MMC_RSP_BUSY|MMC_RSP_OPCODE)) + +/* Specifies how often in millisecs to poll for card status changes + * when the cover switch is open */ +#define OMAP_MMC_SWITCH_POLL_DELAY 500 + +static int mmc_omap_enable_poll = 1; + +struct mmc_omap_host { + int initialized; + int suspended; + struct mmc_request * mrq; + struct mmc_command * cmd; + struct mmc_data * data; + struct mmc_host * mmc; + struct device * dev; + unsigned char id; /* 16xx chips have 2 MMC blocks */ + struct clk * iclk; + struct clk * fclk; + void __iomem *base; + int irq; + unsigned char bus_mode; + unsigned char hw_bus_mode; + + unsigned int sg_len; + int sg_idx; + u16 * buffer; + u32 buffer_bytes_left; + u32 total_bytes_left; + + unsigned use_dma:1; + unsigned brs_received:1, dma_done:1; + unsigned dma_is_read:1; + unsigned dma_in_use:1; + int dma_ch; + spinlock_t dma_lock; + struct timer_list dma_timer; + unsigned dma_len; + + short power_pin; + short wp_pin; + + int switch_pin; + struct work_struct switch_work; + struct timer_list switch_timer; + int switch_last_state; +}; + +static inline int +mmc_omap_cover_is_open(struct mmc_omap_host *host) +{ + if (host->switch_pin < 0) + return 0; + return omap_get_gpio_datain(host->switch_pin); +} + +static ssize_t +mmc_omap_show_cover_switch(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct mmc_omap_host *host = dev_get_drvdata(dev); + + return sprintf(buf, "%s\n", mmc_omap_cover_is_open(host) ? 
"open" : + "closed"); +} + +static DEVICE_ATTR(cover_switch, S_IRUGO, mmc_omap_show_cover_switch, NULL); + +static ssize_t +mmc_omap_show_enable_poll(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%d\n", mmc_omap_enable_poll); +} + +static ssize_t +mmc_omap_store_enable_poll(struct device *dev, + struct device_attribute *attr, const char *buf, + size_t size) +{ + int enable_poll; + + if (sscanf(buf, "%10d", &enable_poll) != 1) + return -EINVAL; + + if (enable_poll != mmc_omap_enable_poll) { + struct mmc_omap_host *host = dev_get_drvdata(dev); + + mmc_omap_enable_poll = enable_poll; + if (enable_poll && host->switch_pin >= 0) + schedule_work(&host->switch_work); + } + return size; +} + +static DEVICE_ATTR(enable_poll, 0664, + mmc_omap_show_enable_poll, mmc_omap_store_enable_poll); + +static void +mmc_omap_start_command(struct mmc_omap_host *host, struct mmc_command *cmd) +{ + u32 cmdreg; + u32 resptype; + u32 cmdtype; + + host->cmd = cmd; + + resptype = 0; + cmdtype = 0; + + /* Our hardware needs to know exact type */ + switch (RSP_TYPE(mmc_resp_type(cmd))) { + case RSP_TYPE(MMC_RSP_R1): + /* resp 1, resp 1b */ + resptype = 1; + break; + case RSP_TYPE(MMC_RSP_R2): + resptype = 2; + break; + case RSP_TYPE(MMC_RSP_R3): + resptype = 3; + break; + default: + break; + } + + if (mmc_cmd_type(cmd) == MMC_CMD_ADTC) { + cmdtype = OMAP_MMC_CMDTYPE_ADTC; + } else if (mmc_cmd_type(cmd) == MMC_CMD_BC) { + cmdtype = OMAP_MMC_CMDTYPE_BC; + } else if (mmc_cmd_type(cmd) == MMC_CMD_BCR) { + cmdtype = OMAP_MMC_CMDTYPE_BCR; + } else { + cmdtype = OMAP_MMC_CMDTYPE_AC; + } + + cmdreg = cmd->opcode | (resptype << 8) | (cmdtype << 12); + + if (host->bus_mode == MMC_BUSMODE_OPENDRAIN) + cmdreg |= 1 << 6; + + if (cmd->flags & MMC_RSP_BUSY) + cmdreg |= 1 << 11; + + if (host->data && !(host->data->flags & MMC_DATA_WRITE)) + cmdreg |= 1 << 15; + + clk_enable(host->fclk); + + OMAP_MMC_WRITE(host->base, CTO, 200); + OMAP_MMC_WRITE(host->base, ARGL, cmd->arg & 0xffff); + OMAP_MMC_WRITE(host->base, ARGH, cmd->arg >> 16); + OMAP_MMC_WRITE(host->base, IE, + OMAP_MMC_STAT_A_EMPTY | OMAP_MMC_STAT_A_FULL | + OMAP_MMC_STAT_CMD_CRC | OMAP_MMC_STAT_CMD_TOUT | + OMAP_MMC_STAT_DATA_CRC | OMAP_MMC_STAT_DATA_TOUT | + OMAP_MMC_STAT_END_OF_CMD | OMAP_MMC_STAT_CARD_ERR | + OMAP_MMC_STAT_END_OF_DATA); + OMAP_MMC_WRITE(host->base, CMD, cmdreg); +} + +static void +mmc_omap_xfer_done(struct mmc_omap_host *host, struct mmc_data *data) +{ + if (host->dma_in_use) { + enum dma_data_direction dma_data_dir; + + BUG_ON(host->dma_ch < 0); + if (data->error != MMC_ERR_NONE) + omap_stop_dma(host->dma_ch); + /* Release DMA channel lazily */ + mod_timer(&host->dma_timer, jiffies + HZ); + if (data->flags & MMC_DATA_WRITE) + dma_data_dir = DMA_TO_DEVICE; + else + dma_data_dir = DMA_FROM_DEVICE; + dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->sg_len, + dma_data_dir); + } + host->data = NULL; + host->sg_len = 0; + clk_disable(host->fclk); + + /* NOTE: MMC layer will sometimes poll-wait CMD13 next, issuing + * dozens of requests until the card finishes writing data. + * It'd be cheaper to just wait till an EOFB interrupt arrives... 
+ */ + + if (!data->stop) { + host->mrq = NULL; + mmc_request_done(host->mmc, data->mrq); + return; + } + + mmc_omap_start_command(host, data->stop); +} + +static void +mmc_omap_end_of_data(struct mmc_omap_host *host, struct mmc_data *data) +{ + unsigned long flags; + int done; + + if (!host->dma_in_use) { + mmc_omap_xfer_done(host, data); + return; + } + done = 0; + spin_lock_irqsave(&host->dma_lock, flags); + if (host->dma_done) + done = 1; + else + host->brs_received = 1; + spin_unlock_irqrestore(&host->dma_lock, flags); + if (done) + mmc_omap_xfer_done(host, data); +} + +static void +mmc_omap_dma_timer(unsigned long data) +{ + struct mmc_omap_host *host = (struct mmc_omap_host *) data; + + BUG_ON(host->dma_ch < 0); + omap_free_dma(host->dma_ch); + host->dma_ch = -1; +} + +static void +mmc_omap_dma_done(struct mmc_omap_host *host, struct mmc_data *data) +{ + unsigned long flags; + int done; + + done = 0; + spin_lock_irqsave(&host->dma_lock, flags); + if (host->brs_received) + done = 1; + else + host->dma_done = 1; + spin_unlock_irqrestore(&host->dma_lock, flags); + if (done) + mmc_omap_xfer_done(host, data); +} + +static void +mmc_omap_cmd_done(struct mmc_omap_host *host, struct mmc_command *cmd) +{ + host->cmd = NULL; + + if (cmd->flags & MMC_RSP_PRESENT) { + if (cmd->flags & MMC_RSP_136) { + /* response type 2 */ + cmd->resp[3] = + OMAP_MMC_READ(host->base, RSP0) | + (OMAP_MMC_READ(host->base, RSP1) << 16); + cmd->resp[2] = + OMAP_MMC_READ(host->base, RSP2) | + (OMAP_MMC_READ(host->base, RSP3) << 16); + cmd->resp[1] = + OMAP_MMC_READ(host->base, RSP4) | + (OMAP_MMC_READ(host->base, RSP5) << 16); + cmd->resp[0] = + OMAP_MMC_READ(host->base, RSP6) | + (OMAP_MMC_READ(host->base, RSP7) << 16); + } else { + /* response types 1, 1b, 3, 4, 5, 6 */ + cmd->resp[0] = + OMAP_MMC_READ(host->base, RSP6) | + (OMAP_MMC_READ(host->base, RSP7) << 16); + } + } + + if (host->data == NULL || cmd->error != MMC_ERR_NONE) { + host->mrq = NULL; + clk_disable(host->fclk); + mmc_request_done(host->mmc, cmd->mrq); + } +} + +/* PIO only */ +static void +mmc_omap_sg_to_buf(struct mmc_omap_host *host) +{ + struct scatterlist *sg; + + sg = host->data->sg + host->sg_idx; + host->buffer_bytes_left = sg->length; + host->buffer = page_address(sg->page) + sg->offset; + if (host->buffer_bytes_left > host->total_bytes_left) + host->buffer_bytes_left = host->total_bytes_left; +} + +/* PIO only */ +static void +mmc_omap_xfer_data(struct mmc_omap_host *host, int write) +{ + int n; + void __iomem *reg; + u16 *p; + + if (host->buffer_bytes_left == 0) { + host->sg_idx++; + BUG_ON(host->sg_idx == host->sg_len); + mmc_omap_sg_to_buf(host); + } + n = 64; + if (n > host->buffer_bytes_left) + n = host->buffer_bytes_left; + host->buffer_bytes_left -= n; + host->total_bytes_left -= n; + host->data->bytes_xfered += n; + + if (write) { + __raw_writesw(host->base + OMAP_MMC_REG_DATA, host->buffer, n); + } else { + __raw_readsw(host->base + OMAP_MMC_REG_DATA, host->buffer, n); + } +} + +static inline void mmc_omap_report_irq(u16 status) +{ + static const char *mmc_omap_status_bits[] = { + "EOC", "CD", "CB", "BRS", "EOFB", "DTO", "DCRC", "CTO", + "CCRC", "CRW", "AF", "AE", "OCRB", "CIRQ", "CERR" + }; + int i, c = 0; + + for (i = 0; i < ARRAY_SIZE(mmc_omap_status_bits); i++) + if (status & (1 << i)) { + if (c) + printk(" "); + printk("%s", mmc_omap_status_bits[i]); + c++; + } +} + +static irqreturn_t mmc_omap_irq(int irq, void *dev_id, struct pt_regs *regs) +{ + struct mmc_omap_host * host = (struct mmc_omap_host *)dev_id; + u16 status; + 
int end_command; + int end_transfer; + int transfer_error; + + if (host->cmd == NULL && host->data == NULL) { + status = OMAP_MMC_READ(host->base, STAT); + dev_info(mmc_dev(host->mmc),"spurious irq 0x%04x\n", status); + if (status != 0) { + OMAP_MMC_WRITE(host->base, STAT, status); + OMAP_MMC_WRITE(host->base, IE, 0); + } + return IRQ_HANDLED; + } + + end_command = 0; + end_transfer = 0; + transfer_error = 0; + + while ((status = OMAP_MMC_READ(host->base, STAT)) != 0) { + OMAP_MMC_WRITE(host->base, STAT, status); +#ifdef CONFIG_MMC_DEBUG + dev_dbg(mmc_dev(host->mmc), "MMC IRQ %04x (CMD %d): ", + status, host->cmd != NULL ? host->cmd->opcode : -1); + mmc_omap_report_irq(status); + printk("\n"); +#endif + if (host->total_bytes_left) { + if ((status & OMAP_MMC_STAT_A_FULL) || + (status & OMAP_MMC_STAT_END_OF_DATA)) + mmc_omap_xfer_data(host, 0); + if (status & OMAP_MMC_STAT_A_EMPTY) + mmc_omap_xfer_data(host, 1); + } + + if (status & OMAP_MMC_STAT_END_OF_DATA) { + end_transfer = 1; + } + + if (status & OMAP_MMC_STAT_DATA_TOUT) { + dev_dbg(mmc_dev(host->mmc), "data timeout\n"); + if (host->data) { + host->data->error |= MMC_ERR_TIMEOUT; + transfer_error = 1; + } + } + + if (status & OMAP_MMC_STAT_DATA_CRC) { + if (host->data) { + host->data->error |= MMC_ERR_BADCRC; + dev_dbg(mmc_dev(host->mmc), + "data CRC error, bytes left %d\n", + host->total_bytes_left); + transfer_error = 1; + } else { + dev_dbg(mmc_dev(host->mmc), "data CRC error\n"); + } + } + + if (status & OMAP_MMC_STAT_CMD_TOUT) { + /* Timeouts are routine with some commands */ + if (host->cmd) { + if (host->cmd->opcode != MMC_ALL_SEND_CID && + host->cmd->opcode != + MMC_SEND_OP_COND && + host->cmd->opcode != + MMC_APP_CMD && + !mmc_omap_cover_is_open(host)) + dev_err(mmc_dev(host->mmc), + "command timeout, CMD %d\n", + host->cmd->opcode); + host->cmd->error = MMC_ERR_TIMEOUT; + end_command = 1; + } + } + + if (status & OMAP_MMC_STAT_CMD_CRC) { + if (host->cmd) { + dev_err(mmc_dev(host->mmc), + "command CRC error (CMD%d, arg 0x%08x)\n", + host->cmd->opcode, host->cmd->arg); + host->cmd->error = MMC_ERR_BADCRC; + end_command = 1; + } else + dev_err(mmc_dev(host->mmc), + "command CRC error without cmd?\n"); + } + + if (status & OMAP_MMC_STAT_CARD_ERR) { + if (host->cmd && host->cmd->opcode == MMC_STOP_TRANSMISSION) { + u32 response = OMAP_MMC_READ(host->base, RSP6) + | (OMAP_MMC_READ(host->base, RSP7) << 16); + /* STOP sometimes sets must-ignore bits */ + if (!(response & (R1_CC_ERROR + | R1_ILLEGAL_COMMAND + | R1_COM_CRC_ERROR))) { + end_command = 1; + continue; + } + } + + dev_dbg(mmc_dev(host->mmc), "card status error (CMD%d)\n", + host->cmd->opcode); + if (host->cmd) { + host->cmd->error = MMC_ERR_FAILED; + end_command = 1; + } + if (host->data) { + host->data->error = MMC_ERR_FAILED; + transfer_error = 1; + } + } + + /* + * NOTE: On 1610 the END_OF_CMD may come too early when + * starting a write + */ + if ((status & OMAP_MMC_STAT_END_OF_CMD) && + (!(status & OMAP_MMC_STAT_A_EMPTY))) { + end_command = 1; + } + } + + if (end_command) { + mmc_omap_cmd_done(host, host->cmd); + } + if (transfer_error) + mmc_omap_xfer_done(host, host->data); + else if (end_transfer) + mmc_omap_end_of_data(host, host->data); + + return IRQ_HANDLED; +} + +static irqreturn_t mmc_omap_switch_irq(int irq, void *dev_id, struct pt_regs *regs) +{ + struct mmc_omap_host *host = (struct mmc_omap_host *) dev_id; + + schedule_work(&host->switch_work); + + return IRQ_HANDLED; +} + +static void mmc_omap_switch_timer(unsigned long arg) +{ + struct mmc_omap_host *host 
= (struct mmc_omap_host *) arg; + + schedule_work(&host->switch_work); +} + +/* FIXME: Handle card insertion and removal properly. Maybe use a mask + * for MMC state? */ +static void mmc_omap_switch_callback(unsigned long data, u8 mmc_mask) +{ +} + +static void mmc_omap_switch_handler(void *data) +{ + struct mmc_omap_host *host = (struct mmc_omap_host *) data; + struct mmc_card *card; + static int complained = 0; + int cards = 0, cover_open; + + if (host->switch_pin == -1) + return; + cover_open = mmc_omap_cover_is_open(host); + if (cover_open != host->switch_last_state) { + kobject_uevent(&host->dev->kobj, KOBJ_CHANGE); + host->switch_last_state = cover_open; + } + mmc_detect_change(host->mmc, 0); + list_for_each_entry(card, &host->mmc->cards, node) { + if (mmc_card_present(card)) + cards++; + } + if (mmc_omap_cover_is_open(host)) { + if (!complained) { + dev_info(mmc_dev(host->mmc), "cover is open"); + complained = 1; + } + if (mmc_omap_enable_poll) + mod_timer(&host->switch_timer, jiffies + + msecs_to_jiffies(OMAP_MMC_SWITCH_POLL_DELAY)); + } else { + complained = 0; + } +} + +/* Prepare to transfer the next segment of a scatterlist */ +static void +mmc_omap_prepare_dma(struct mmc_omap_host *host, struct mmc_data *data) +{ + int dma_ch = host->dma_ch; + unsigned long data_addr; + u16 buf, frame; + u32 count; + struct scatterlist *sg = &data->sg[host->sg_idx]; + int src_port = 0; + int dst_port = 0; + int sync_dev = 0; + + data_addr = io_v2p((u32) host->base) + OMAP_MMC_REG_DATA; + frame = 1 << data->blksz_bits; + count = sg_dma_len(sg); + + if ((data->blocks == 1) && (count > (1 << data->blksz_bits))) + count = frame; + + host->dma_len = count; + + /* FIFO is 16x2 bytes on 15xx, and 32x2 bytes on 16xx and 24xx. + * Use 16 or 32 word frames when the blocksize is at least that large. + * Blocksize is usually 512 bytes; but not for some SD reads. 
+ */ + if (cpu_is_omap15xx() && frame > 32) + frame = 32; + else if (frame > 64) + frame = 64; + count /= frame; + frame >>= 1; + + if (!(data->flags & MMC_DATA_WRITE)) { + buf = 0x800f | ((frame - 1) << 8); + + if (cpu_class_is_omap1()) { + src_port = OMAP_DMA_PORT_TIPB; + dst_port = OMAP_DMA_PORT_EMIFF; + } + if (cpu_is_omap24xx()) + sync_dev = OMAP24XX_DMA_MMC1_RX; + + omap_set_dma_src_params(dma_ch, src_port, + OMAP_DMA_AMODE_CONSTANT, + data_addr, 0, 0); + omap_set_dma_dest_params(dma_ch, dst_port, + OMAP_DMA_AMODE_POST_INC, + sg_dma_address(sg), 0, 0); + omap_set_dma_dest_data_pack(dma_ch, 1); + omap_set_dma_dest_burst_mode(dma_ch, OMAP_DMA_DATA_BURST_4); + } else { + buf = 0x0f80 | ((frame - 1) << 0); + + if (cpu_class_is_omap1()) { + src_port = OMAP_DMA_PORT_EMIFF; + dst_port = OMAP_DMA_PORT_TIPB; + } + if (cpu_is_omap24xx()) + sync_dev = OMAP24XX_DMA_MMC1_TX; + + omap_set_dma_dest_params(dma_ch, dst_port, + OMAP_DMA_AMODE_CONSTANT, + data_addr, 0, 0); + omap_set_dma_src_params(dma_ch, src_port, + OMAP_DMA_AMODE_POST_INC, + sg_dma_address(sg), 0, 0); + omap_set_dma_src_data_pack(dma_ch, 1); + omap_set_dma_src_burst_mode(dma_ch, OMAP_DMA_DATA_BURST_4); + } + + /* Max limit for DMA frame count is 0xffff */ + if (unlikely(count > 0xffff)) + BUG(); + + OMAP_MMC_WRITE(host->base, BUF, buf); + omap_set_dma_transfer_params(dma_ch, OMAP_DMA_DATA_TYPE_S16, + frame, count, OMAP_DMA_SYNC_FRAME, + sync_dev, 0); +} + +/* A scatterlist segment completed */ +static void mmc_omap_dma_cb(int lch, u16 ch_status, void *data) +{ + struct mmc_omap_host *host = (struct mmc_omap_host *) data; + struct mmc_data *mmcdat = host->data; + + if (unlikely(host->dma_ch < 0)) { + dev_err(mmc_dev(host->mmc), "DMA callback while DMA not enabled\n"); + return; + } + /* FIXME: We really should do something to _handle_ the errors */ + if (ch_status & OMAP_DMA_TOUT_IRQ) { + dev_err(mmc_dev(host->mmc),"DMA timeout\n"); + return; + } + if (ch_status & OMAP_DMA_DROP_IRQ) { + dev_err(mmc_dev(host->mmc), "DMA sync error\n"); + return; + } + if (!(ch_status & OMAP_DMA_BLOCK_IRQ)) { + return; + } + mmcdat->bytes_xfered += host->dma_len; + host->sg_idx++; + if (host->sg_idx < host->sg_len) { + mmc_omap_prepare_dma(host, host->data); + omap_start_dma(host->dma_ch); + } else + mmc_omap_dma_done(host, host->data); +} + +static int mmc_omap_get_dma_channel(struct mmc_omap_host *host, struct mmc_data *data) +{ + const char *dev_name; + int sync_dev, dma_ch, is_read, r; + + is_read = !(data->flags & MMC_DATA_WRITE); + del_timer_sync(&host->dma_timer); + if (host->dma_ch >= 0) { + if (is_read == host->dma_is_read) + return 0; + omap_free_dma(host->dma_ch); + host->dma_ch = -1; + } + + if (is_read) { + if (host->id == 1) { + sync_dev = OMAP_DMA_MMC_RX; + dev_name = "MMC1 read"; + } else { + sync_dev = OMAP_DMA_MMC2_RX; + dev_name = "MMC2 read"; + } + } else { + if (host->id == 1) { + sync_dev = OMAP_DMA_MMC_TX; + dev_name = "MMC1 write"; + } else { + sync_dev = OMAP_DMA_MMC2_TX; + dev_name = "MMC2 write"; + } + } + r = omap_request_dma(sync_dev, dev_name, mmc_omap_dma_cb, + host, &dma_ch); + if (r != 0) { + dev_dbg(mmc_dev(host->mmc), "omap_request_dma() failed with %d\n", r); + return r; + } + host->dma_ch = dma_ch; + host->dma_is_read = is_read; + + return 0; +} + +static inline void set_cmd_timeout(struct mmc_omap_host *host, struct mmc_request *req) +{ + u16 reg; + + reg = OMAP_MMC_READ(host->base, SDIO); + reg &= ~(1 << 5); + OMAP_MMC_WRITE(host->base, SDIO, reg); + /* Set maximum timeout */ + OMAP_MMC_WRITE(host->base, CTO, 
0xff); +} + +static inline void set_data_timeout(struct mmc_omap_host *host, struct mmc_request *req) +{ + int timeout; + u16 reg; + + /* Convert ns to clock cycles by assuming 20MHz frequency + * 1 cycle at 20MHz = 500 ns + */ + timeout = req->data->timeout_clks + req->data->timeout_ns / 500; + + /* Check if we need to use timeout multiplier register */ + reg = OMAP_MMC_READ(host->base, SDIO); + if (timeout > 0xffff) { + reg |= (1 << 5); + timeout /= 1024; + } else + reg &= ~(1 << 5); + OMAP_MMC_WRITE(host->base, SDIO, reg); + OMAP_MMC_WRITE(host->base, DTO, timeout); +} + +static void +mmc_omap_prepare_data(struct mmc_omap_host *host, struct mmc_request *req) +{ + struct mmc_data *data = req->data; + int i, use_dma, block_size; + unsigned sg_len; + + host->data = data; + if (data == NULL) { + OMAP_MMC_WRITE(host->base, BLEN, 0); + OMAP_MMC_WRITE(host->base, NBLK, 0); + OMAP_MMC_WRITE(host->base, BUF, 0); + host->dma_in_use = 0; + set_cmd_timeout(host, req); + return; + } + + + block_size = 1 << data->blksz_bits; + + OMAP_MMC_WRITE(host->base, NBLK, data->blocks - 1); + OMAP_MMC_WRITE(host->base, BLEN, block_size - 1); + set_data_timeout(host, req); + + /* cope with calling layer confusion; it issues "single + * block" writes using multi-block scatterlists. + */ + sg_len = (data->blocks == 1) ? 1 : data->sg_len; + + /* Only do DMA for entire blocks */ + use_dma = host->use_dma; + if (use_dma) { + for (i = 0; i < sg_len; i++) { + if ((data->sg[i].length % block_size) != 0) { + use_dma = 0; + break; + } + } + } + + host->sg_idx = 0; + if (use_dma) { + if (mmc_omap_get_dma_channel(host, data) == 0) { + enum dma_data_direction dma_data_dir; + + if (data->flags & MMC_DATA_WRITE) + dma_data_dir = DMA_TO_DEVICE; + else + dma_data_dir = DMA_FROM_DEVICE; + + host->sg_len = dma_map_sg(mmc_dev(host->mmc), data->sg, + sg_len, dma_data_dir); + host->total_bytes_left = 0; + mmc_omap_prepare_dma(host, req->data); + host->brs_received = 0; + host->dma_done = 0; + host->dma_in_use = 1; + } else + use_dma = 0; + } + + /* Revert to PIO? */ + if (!use_dma) { + OMAP_MMC_WRITE(host->base, BUF, 0x1f1f); + host->total_bytes_left = data->blocks * block_size; + host->sg_len = sg_len; + mmc_omap_sg_to_buf(host); + host->dma_in_use = 0; + } +} + +static void mmc_omap_request(struct mmc_host *mmc, struct mmc_request *req) +{ + struct mmc_omap_host *host = mmc_priv(mmc); + + WARN_ON(host->mrq != NULL); + + host->mrq = req; + + /* only touch fifo AFTER the controller readies it */ + mmc_omap_prepare_data(host, req); + mmc_omap_start_command(host, req->cmd); + if (host->dma_in_use) + omap_start_dma(host->dma_ch); +} + +static void innovator_fpga_socket_power(int on) +{ +#if defined(CONFIG_MACH_OMAP_INNOVATOR) && defined(CONFIG_ARCH_OMAP15XX) + + if (on) { + fpga_write(fpga_read(OMAP1510_FPGA_POWER) | (1 << 3), + OMAP1510_FPGA_POWER); + } else { + fpga_write(fpga_read(OMAP1510_FPGA_POWER) & ~(1 << 3), + OMAP1510_FPGA_POWER); + } +#endif +} + +/* + * Turn the socket power on/off. Innovator uses FPGA, most boards + * probably use GPIO. 
+ */ +static void mmc_omap_power(struct mmc_omap_host *host, int on) +{ + if (on) { + if (machine_is_omap_innovator()) + innovator_fpga_socket_power(1); + else if (machine_is_omap_h2()) + tps65010_set_gpio_out_value(GPIO3, HIGH); + else if (machine_is_omap_h3()) + /* GPIO 4 of TPS65010 sends SD_EN signal */ + tps65010_set_gpio_out_value(GPIO4, HIGH); + else if (cpu_is_omap24xx()) { + u16 reg = OMAP_MMC_READ(host->base, CON); + OMAP_MMC_WRITE(host->base, CON, reg | (1 << 11)); + } else + if (host->power_pin >= 0) + omap_set_gpio_dataout(host->power_pin, 1); + } else { + if (machine_is_omap_innovator()) + innovator_fpga_socket_power(0); + else if (machine_is_omap_h2()) + tps65010_set_gpio_out_value(GPIO3, LOW); + else if (machine_is_omap_h3()) + tps65010_set_gpio_out_value(GPIO4, LOW); + else if (cpu_is_omap24xx()) { + u16 reg = OMAP_MMC_READ(host->base, CON); + OMAP_MMC_WRITE(host->base, CON, reg & ~(1 << 11)); + } else + if (host->power_pin >= 0) + omap_set_gpio_dataout(host->power_pin, 0); + } +} + +static void mmc_omap_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) +{ + struct mmc_omap_host *host = mmc_priv(mmc); + int dsor; + int realclock, i; + + realclock = ios->clock; + + if (ios->clock == 0) + dsor = 0; + else { + int func_clk_rate = clk_get_rate(host->fclk); + + dsor = func_clk_rate / realclock; + if (dsor < 1) + dsor = 1; + + if (func_clk_rate / dsor > realclock) + dsor++; + + if (dsor > 250) + dsor = 250; + dsor++; + + if (ios->bus_width == MMC_BUS_WIDTH_4) + dsor |= 1 << 15; + } + + switch (ios->power_mode) { + case MMC_POWER_OFF: + mmc_omap_power(host, 0); + break; + case MMC_POWER_UP: + case MMC_POWER_ON: + mmc_omap_power(host, 1); + dsor |= 1<<11; + break; + } + + host->bus_mode = ios->bus_mode; + host->hw_bus_mode = host->bus_mode; + + clk_enable(host->fclk); + + /* On insanely high arm_per frequencies something sometimes + * goes somehow out of sync, and the POW bit is not being set, + * which results in the while loop below getting stuck. + * Writing to the CON register twice seems to do the trick. 
*/ + for (i = 0; i < 2; i++) + OMAP_MMC_WRITE(host->base, CON, dsor); + if (ios->power_mode == MMC_POWER_UP) { + /* Send clock cycles, poll completion */ + OMAP_MMC_WRITE(host->base, IE, 0); + OMAP_MMC_WRITE(host->base, STAT, 0xffff); + OMAP_MMC_WRITE(host->base, CMD, 1<<7); + while (0 == (OMAP_MMC_READ(host->base, STAT) & 1)); + OMAP_MMC_WRITE(host->base, STAT, 1); + } + clk_disable(host->fclk); +} + +static int mmc_omap_get_ro(struct mmc_host *mmc) +{ + struct mmc_omap_host *host = mmc_priv(mmc); + + return host->wp_pin && omap_get_gpio_datain(host->wp_pin); +} + +static struct mmc_host_ops mmc_omap_ops = { + .request = mmc_omap_request, + .set_ios = mmc_omap_set_ios, + .get_ro = mmc_omap_get_ro, +}; + +static int __init mmc_omap_probe(struct platform_device *pdev) +{ + struct omap_mmc_conf *minfo = pdev->dev.platform_data; + struct mmc_host *mmc; + struct mmc_omap_host *host = NULL; + int ret = 0; + + if (platform_get_resource(pdev, IORESOURCE_MEM, 0) || + platform_get_irq(pdev, IORESOURCE_IRQ, 0)) { + dev_err(&pdev->dev, "mmc_omap_probe: invalid resource type\n"); + return -ENODEV; + } + + if (!request_mem_region(pdev->resource[0].start, + pdev->resource[0].end - pdev->resource[0].start + 1, + pdev->name)) { + dev_dbg(&pdev->dev, "request_mem_region failed\n"); + return -EBUSY; + } + + mmc = mmc_alloc_host(sizeof(struct mmc_omap_host), &pdev->dev); + if (!mmc) { + ret = -ENOMEM; + goto out; + } + + host = mmc_priv(mmc); + host->mmc = mmc; + + spin_lock_init(&host->dma_lock); + init_timer(&host->dma_timer); + host->dma_timer.function = mmc_omap_dma_timer; + host->dma_timer.data = (unsigned long) host; + + host->id = pdev->id; + + if (cpu_is_omap24xx()) { + host->iclk = clk_get(&pdev->dev, "mmc_ick"); + if (IS_ERR(host->iclk)) + goto out; + clk_enable(host->iclk); + } + + if (!cpu_is_omap24xx()) + host->fclk = clk_get(&pdev->dev, "mmc_ck"); + else + host->fclk = clk_get(&pdev->dev, "mmc_fck"); + + if (IS_ERR(host->fclk)) { + ret = PTR_ERR(host->fclk); + goto out; + } + + /* REVISIT: + * Also, use minfo->cover to decide how to manage + * the card detect sensing. + */ + host->power_pin = minfo->power_pin; + host->switch_pin = minfo->switch_pin; + host->wp_pin = minfo->wp_pin; + host->use_dma = 1; + host->dma_ch = -1; + + host->irq = pdev->resource[1].start; + host->base = ioremap(pdev->res.start, SZ_4K); + if (!host->base) { + ret = -ENOMEM; + goto out; + } + + if (minfo->wire4) + mmc->caps |= MMC_CAP_4_BIT_DATA; + + mmc->ops = &mmc_omap_ops; + mmc->f_min = 400000; + mmc->f_max = 24000000; + mmc->ocr_avail = MMC_VDD_32_33|MMC_VDD_33_34; + + /* Use scatterlist DMA to reduce per-transfer costs. + * NOTE max_seg_size assumption that small blocks aren't + * normally used (except e.g. for reading SD registers). 
+ */ + mmc->max_phys_segs = 32; + mmc->max_hw_segs = 32; + mmc->max_sectors = 256; /* NBLK max 11-bits, OMAP also limited by DMA */ + mmc->max_seg_size = mmc->max_sectors * 512; + + if (host->power_pin >= 0) { + if ((ret = omap_request_gpio(host->power_pin)) != 0) { + dev_err(mmc_dev(host->mmc), "Unable to get GPIO pin for MMC power\n"); + goto out; + } + omap_set_gpio_direction(host->power_pin, 0); + } + + ret = request_irq(host->irq, mmc_omap_irq, 0, DRIVER_NAME, host); + if (ret) + goto out; + + host->dev = &pdev->dev; + platform_set_drvdata(pdev, host); + + mmc_add_host(mmc); + + if (host->switch_pin >= 0) { + INIT_WORK(&host->switch_work, mmc_omap_switch_handler, host); + init_timer(&host->switch_timer); + host->switch_timer.function = mmc_omap_switch_timer; + host->switch_timer.data = (unsigned long) host; + if (omap_request_gpio(host->switch_pin) != 0) { + dev_warn(mmc_dev(host->mmc), "Unable to get GPIO pin for MMC cover switch\n"); + host->switch_pin = -1; + goto no_switch; + } + + omap_set_gpio_direction(host->switch_pin, 1); + ret = request_irq(OMAP_GPIO_IRQ(host->switch_pin), + mmc_omap_switch_irq, SA_TRIGGER_RISING, DRIVER_NAME, host); + if (ret) { + dev_warn(mmc_dev(host->mmc), "Unable to get IRQ for MMC cover switch\n"); + omap_free_gpio(host->switch_pin); + host->switch_pin = -1; + goto no_switch; + } + ret = device_create_file(&pdev->dev, &dev_attr_cover_switch); + if (ret == 0) { + ret = device_create_file(&pdev->dev, &dev_attr_enable_poll); + if (ret != 0) + device_remove_file(&pdev->dev, &dev_attr_cover_switch); + } + if (ret) { + dev_warn(mmc_dev(host->mmc), "Unable to create sysfs attributes\n"); + free_irq(OMAP_GPIO_IRQ(host->switch_pin), host); + omap_free_gpio(host->switch_pin); + host->switch_pin = -1; + goto no_switch; + } + if (mmc_omap_enable_poll && mmc_omap_cover_is_open(host)) + schedule_work(&host->switch_work); + } + +no_switch: + return 0; + +out: + /* FIXME: Free other resources too. 
*/ + if (host) { + if (host->iclk && !IS_ERR(host->iclk)) + clk_put(host->iclk); + if (host->fclk && !IS_ERR(host->fclk)) + clk_put(host->fclk); + mmc_free_host(host->mmc); + } + return ret; +} + +static int mmc_omap_remove(struct platform_device *pdev) +{ + struct mmc_omap_host *host = platform_get_drvdata(pdev); + + platform_set_drvdata(pdev, NULL); + + if (host) { + mmc_remove_host(host->mmc); + free_irq(host->irq, host); + + if (host->power_pin >= 0) + omap_free_gpio(host->power_pin); + if (host->switch_pin >= 0) { + device_remove_file(&pdev->dev, &dev_attr_enable_poll); + device_remove_file(&pdev->dev, &dev_attr_cover_switch); + free_irq(OMAP_GPIO_IRQ(host->switch_pin), host); + omap_free_gpio(host->switch_pin); + host->switch_pin = -1; + del_timer_sync(&host->switch_timer); + flush_scheduled_work(); + } + if (host->iclk && !IS_ERR(host->iclk)) + clk_put(host->iclk); + if (host->fclk && !IS_ERR(host->fclk)) + clk_put(host->fclk); + mmc_free_host(host->mmc); + } + + release_mem_region(pdev->resource[0].start, + pdev->resource[0].end - pdev->resource[0].start + 1); + + return 0; +} + +#ifdef CONFIG_PM +static int mmc_omap_suspend(struct platform_device *pdev, pm_message_t mesg) +{ + int ret = 0; + struct mmc_omap_host *host = platform_get_drvdata(pdev); + + if (host && host->suspended) + return 0; + + if (host) { + ret = mmc_suspend_host(host->mmc, mesg); + if (ret == 0) + host->suspended = 1; + } + return ret; +} + +static int mmc_omap_resume(struct platform_device *pdev) +{ + int ret = 0; + struct mmc_omap_host *host = platform_get_drvdata(pdev); + + if (host && !host->suspended) + return 0; + + if (host) { + ret = mmc_resume_host(host->mmc); + if (ret == 0) + host->suspended = 0; + } + + return ret; +} +#else +#define mmc_omap_suspend NULL +#define mmc_omap_resume NULL +#endif + +static struct platform_driver mmc_omap_driver = { + .probe = mmc_omap_probe, + .remove = mmc_omap_remove, + .suspend = mmc_omap_suspend, + .resume = mmc_omap_resume, + .driver = { + .name = DRIVER_NAME, + }, +}; + +static int __init mmc_omap_init(void) +{ + return platform_driver_register(&mmc_omap_driver); +} + +static void __exit mmc_omap_exit(void) +{ + platform_driver_unregister(&mmc_omap_driver); +} + +module_init(mmc_omap_init); +module_exit(mmc_omap_exit); + +MODULE_DESCRIPTION("OMAP Multimedia Card driver"); +MODULE_LICENSE("GPL"); +MODULE_ALIAS(DRIVER_NAME); +MODULE_AUTHOR("Juha Yrjölä"); diff --git a/trunk/drivers/mmc/omap.h b/trunk/drivers/mmc/omap.h new file mode 100644 index 000000000000..c954d355a5e3 --- /dev/null +++ b/trunk/drivers/mmc/omap.h @@ -0,0 +1,55 @@ +#ifndef DRIVERS_MEDIA_MMC_OMAP_H +#define DRIVERS_MEDIA_MMC_OMAP_H + +#define OMAP_MMC_REG_CMD 0x00 +#define OMAP_MMC_REG_ARGL 0x04 +#define OMAP_MMC_REG_ARGH 0x08 +#define OMAP_MMC_REG_CON 0x0c +#define OMAP_MMC_REG_STAT 0x10 +#define OMAP_MMC_REG_IE 0x14 +#define OMAP_MMC_REG_CTO 0x18 +#define OMAP_MMC_REG_DTO 0x1c +#define OMAP_MMC_REG_DATA 0x20 +#define OMAP_MMC_REG_BLEN 0x24 +#define OMAP_MMC_REG_NBLK 0x28 +#define OMAP_MMC_REG_BUF 0x2c +#define OMAP_MMC_REG_SDIO 0x34 +#define OMAP_MMC_REG_REV 0x3c +#define OMAP_MMC_REG_RSP0 0x40 +#define OMAP_MMC_REG_RSP1 0x44 +#define OMAP_MMC_REG_RSP2 0x48 +#define OMAP_MMC_REG_RSP3 0x4c +#define OMAP_MMC_REG_RSP4 0x50 +#define OMAP_MMC_REG_RSP5 0x54 +#define OMAP_MMC_REG_RSP6 0x58 +#define OMAP_MMC_REG_RSP7 0x5c +#define OMAP_MMC_REG_IOSR 0x60 +#define OMAP_MMC_REG_SYSC 0x64 +#define OMAP_MMC_REG_SYSS 0x68 + +#define OMAP_MMC_STAT_CARD_ERR (1 << 14) +#define OMAP_MMC_STAT_CARD_IRQ (1 << 13) 
+#define OMAP_MMC_STAT_OCR_BUSY (1 << 12) +#define OMAP_MMC_STAT_A_EMPTY (1 << 11) +#define OMAP_MMC_STAT_A_FULL (1 << 10) +#define OMAP_MMC_STAT_CMD_CRC (1 << 8) +#define OMAP_MMC_STAT_CMD_TOUT (1 << 7) +#define OMAP_MMC_STAT_DATA_CRC (1 << 6) +#define OMAP_MMC_STAT_DATA_TOUT (1 << 5) +#define OMAP_MMC_STAT_END_BUSY (1 << 4) +#define OMAP_MMC_STAT_END_OF_DATA (1 << 3) +#define OMAP_MMC_STAT_CARD_BUSY (1 << 2) +#define OMAP_MMC_STAT_END_OF_CMD (1 << 0) + +#define OMAP_MMC_READ(base, reg) __raw_readw((base) + OMAP_MMC_REG_##reg) +#define OMAP_MMC_WRITE(base, reg, val) __raw_writew((val), (base) + OMAP_MMC_REG_##reg) + +/* + * Command types + */ +#define OMAP_MMC_CMDTYPE_BC 0 +#define OMAP_MMC_CMDTYPE_BCR 1 +#define OMAP_MMC_CMDTYPE_AC 2 +#define OMAP_MMC_CMDTYPE_ADTC 3 + +#endif diff --git a/trunk/drivers/mmc/pxamci.c b/trunk/drivers/mmc/pxamci.c index c32fad1ce51c..eb9a8826e9b5 100644 --- a/trunk/drivers/mmc/pxamci.c +++ b/trunk/drivers/mmc/pxamci.c @@ -37,12 +37,6 @@ #include "pxamci.h" -#ifdef CONFIG_MMC_DEBUG -#define DBG(x...) printk(KERN_DEBUG x) -#else -#define DBG(x...) do { } while (0) -#endif - #define DRIVER_NAME "pxa2xx-mci" #define NR_SG 1 @@ -206,7 +200,7 @@ static void pxamci_start_cmd(struct pxamci_host *host, struct mmc_command *cmd, static void pxamci_finish_request(struct pxamci_host *host, struct mmc_request *mrq) { - DBG("PXAMCI: request done\n"); + pr_debug("PXAMCI: request done\n"); host->mrq = NULL; host->cmd = NULL; host->data = NULL; @@ -252,7 +246,7 @@ static int pxamci_cmd_done(struct pxamci_host *host, unsigned int stat) if ((cmd->resp[0] & 0x80000000) == 0) cmd->error = MMC_ERR_BADCRC; } else { - DBG("ignoring CRC from command %d - *risky*\n",cmd->opcode); + pr_debug("ignoring CRC from command %d - *risky*\n",cmd->opcode); } #else cmd->error = MMC_ERR_BADCRC; @@ -317,12 +311,12 @@ static irqreturn_t pxamci_irq(int irq, void *devid, struct pt_regs *regs) ireg = readl(host->base + MMC_I_REG); - DBG("PXAMCI: irq %08x\n", ireg); + pr_debug("PXAMCI: irq %08x\n", ireg); if (ireg) { unsigned stat = readl(host->base + MMC_STAT); - DBG("PXAMCI: stat %08x\n", stat); + pr_debug("PXAMCI: stat %08x\n", stat); if (ireg & END_CMD_RES) handled |= pxamci_cmd_done(host, stat); @@ -376,9 +370,9 @@ static void pxamci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) { struct pxamci_host *host = mmc_priv(mmc); - DBG("pxamci_set_ios: clock %u power %u vdd %u.%02u\n", - ios->clock, ios->power_mode, ios->vdd / 100, - ios->vdd % 100); + pr_debug("pxamci_set_ios: clock %u power %u vdd %u.%02u\n", + ios->clock, ios->power_mode, ios->vdd / 100, + ios->vdd % 100); if (ios->clock) { unsigned int clk = CLOCKRATE / ios->clock; @@ -405,8 +399,8 @@ static void pxamci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) host->cmdat |= CMDAT_INIT; } - DBG("pxamci_set_ios: clkrt = %x cmdat = %x\n", - host->clkrt, host->cmdat); + pr_debug("pxamci_set_ios: clkrt = %x cmdat = %x\n", + host->clkrt, host->cmdat); } static struct mmc_host_ops pxamci_ops = { diff --git a/trunk/drivers/mmc/sdhci.c b/trunk/drivers/mmc/sdhci.c index 8b811d94371c..bdbfca050029 100644 --- a/trunk/drivers/mmc/sdhci.c +++ b/trunk/drivers/mmc/sdhci.c @@ -31,12 +31,8 @@ #define BUGMAIL "" -#ifdef CONFIG_MMC_DEBUG #define DBG(f, x...) \ - printk(KERN_DEBUG DRIVER_NAME " [%s()]: " f, __func__,## x) -#else -#define DBG(f, x...) 
do { } while (0) -#endif + pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x) static const struct pci_device_id pci_ids[] __devinitdata = { /* handle any SD host controller */ diff --git a/trunk/drivers/mmc/wbsd.c b/trunk/drivers/mmc/wbsd.c index 3be397d436fa..511f7b0b31d2 100644 --- a/trunk/drivers/mmc/wbsd.c +++ b/trunk/drivers/mmc/wbsd.c @@ -44,15 +44,10 @@ #define DRIVER_NAME "wbsd" #define DRIVER_VERSION "1.5" -#ifdef CONFIG_MMC_DEBUG #define DBG(x...) \ - printk(KERN_DEBUG DRIVER_NAME ": " x) + pr_debug(DRIVER_NAME ": " x) #define DBGF(f, x...) \ - printk(KERN_DEBUG DRIVER_NAME " [%s()]: " f, __func__ , ##x) -#else -#define DBG(x...) do { } while (0) -#define DBGF(x...) do { } while (0) -#endif + pr_debug(DRIVER_NAME " [%s()]: " f, __func__ , ##x) /* * Device resources diff --git a/trunk/drivers/mtd/chips/amd_flash.c b/trunk/drivers/mtd/chips/amd_flash.c index fdb91b6f1d97..57115618c496 100644 --- a/trunk/drivers/mtd/chips/amd_flash.c +++ b/trunk/drivers/mtd/chips/amd_flash.c @@ -664,7 +664,7 @@ static struct mtd_info *amd_flash_probe(struct map_info *map) printk("%s: Probing for AMD compatible flash...\n", map->name); if ((table_pos[0] = probe_new_chip(mtd, 0, NULL, &temp, table, - sizeof(table)/sizeof(table[0]))) + ARRAY_SIZE(table))) == -1) { printk(KERN_WARNING "%s: Found no AMD compatible device at location zero\n", @@ -696,7 +696,7 @@ static struct mtd_info *amd_flash_probe(struct map_info *map) base += (1 << temp.chipshift)) { int numchips = temp.numchips; table_pos[numchips] = probe_new_chip(mtd, base, chips, - &temp, table, sizeof(table)/sizeof(table[0])); + &temp, table, ARRAY_SIZE(table)); } mtd->eraseregions = kmalloc(sizeof(struct mtd_erase_region_info) * diff --git a/trunk/drivers/mtd/chips/jedec_probe.c b/trunk/drivers/mtd/chips/jedec_probe.c index edb306c03c0a..517ea33e7260 100644 --- a/trunk/drivers/mtd/chips/jedec_probe.c +++ b/trunk/drivers/mtd/chips/jedec_probe.c @@ -34,6 +34,7 @@ #define MANUFACTURER_MACRONIX 0x00C2 #define MANUFACTURER_NEC 0x0010 #define MANUFACTURER_PMC 0x009D +#define MANUFACTURER_SHARP 0x00b0 #define MANUFACTURER_SST 0x00BF #define MANUFACTURER_ST 0x0020 #define MANUFACTURER_TOSHIBA 0x0098 @@ -124,6 +125,9 @@ #define PM49FL004 0x006E #define PM49FL008 0x006A +/* Sharp */ +#define LH28F640BF 0x00b0 + /* ST - www.st.com */ #define M29W800DT 0x00D7 #define M29W800DB 0x005B @@ -1267,6 +1271,19 @@ static const struct amd_flash_info jedec_table[] = { .regions = { ERASEINFO( 0x01000, 256 ) } + }, { + .mfr_id = MANUFACTURER_SHARP, + .dev_id = LH28F640BF, + .name = "LH28F640BF", + .uaddr = { + [0] = MTD_UADDR_UNNECESSARY, /* x8 */ + }, + .DevSize = SIZE_4MiB, + .CmdSet = P_ID_INTEL_STD, + .NumEraseRegions= 1, + .regions = { + ERASEINFO(0x40000,16), + } }, { .mfr_id = MANUFACTURER_SST, .dev_id = SST39LF512, @@ -2035,7 +2052,7 @@ static int jedec_probe_chip(struct map_info *map, __u32 base, DEBUG(MTD_DEBUG_LEVEL3, "Search for id:(%02x %02x) interleave(%d) type(%d)\n", cfi->mfr, cfi->id, cfi_interleave(cfi), cfi->device_type); - for (i=0; i\n"); diff --git a/trunk/drivers/mtd/cmdlinepart.c b/trunk/drivers/mtd/cmdlinepart.c index 6b8bb2e4dcfd..a7a7bfe33879 100644 --- a/trunk/drivers/mtd/cmdlinepart.c +++ b/trunk/drivers/mtd/cmdlinepart.c @@ -42,7 +42,8 @@ /* special size referring to all the remaining space in a partition */ -#define SIZE_REMAINING 0xffffffff +#define SIZE_REMAINING UINT_MAX +#define OFFSET_CONTINUOUS UINT_MAX struct cmdline_mtd_partition { struct cmdline_mtd_partition *next; @@ -75,7 +76,7 @@ static struct mtd_partition * newpart(char 
*s, { struct mtd_partition *parts; unsigned long size; - unsigned long offset = 0; + unsigned long offset = OFFSET_CONTINUOUS; char *name; int name_len; unsigned char *extra_mem; @@ -314,7 +315,7 @@ static int parse_cmdline_partitions(struct mtd_info *master, { for(i = 0, offset = 0; i < part->num_parts; i++) { - if (!part->parts[i].offset) + if (part->parts[i].offset == OFFSET_CONTINUOUS) part->parts[i].offset = offset; else offset = part->parts[i].offset; diff --git a/trunk/drivers/mtd/devices/blkmtd.c b/trunk/drivers/mtd/devices/blkmtd.c index 04f864d238db..79f2e1f23ebd 100644 --- a/trunk/drivers/mtd/devices/blkmtd.c +++ b/trunk/drivers/mtd/devices/blkmtd.c @@ -28,8 +28,9 @@ #include #include #include +#include #include - +#include #define err(format, arg...) printk(KERN_ERR "blkmtd: " format "\n" , ## arg) #define info(format, arg...) printk(KERN_INFO "blkmtd: " format "\n" , ## arg) @@ -46,7 +47,7 @@ struct blkmtd_dev { struct list_head list; struct block_device *blkdev; struct mtd_info mtd_info; - struct semaphore wrbuf_mutex; + struct mutex wrbuf_mutex; }; @@ -268,7 +269,7 @@ static int write_pages(struct blkmtd_dev *dev, const u_char *buf, loff_t to, if(end_len) pagecnt++; - down(&dev->wrbuf_mutex); + mutex_lock(&dev->wrbuf_mutex); DEBUG(3, "blkmtd: write: start_len = %zd len = %zd end_len = %zd pagecnt = %d\n", start_len, len, end_len, pagecnt); @@ -376,7 +377,7 @@ static int write_pages(struct blkmtd_dev *dev, const u_char *buf, loff_t to, blkmtd_write_out(bio); DEBUG(2, "blkmtd: write: end, retlen = %zd, err = %d\n", *retlen, err); - up(&dev->wrbuf_mutex); + mutex_unlock(&dev->wrbuf_mutex); if(retlen) *retlen = thislen; @@ -614,8 +615,6 @@ static struct mtd_erase_region_info *calc_erase_regions( } -extern dev_t __init name_to_dev_t(const char *line); - static struct blkmtd_dev *add_device(char *devname, int readonly, int erase_size) { struct block_device *bdev; @@ -659,7 +658,7 @@ static struct blkmtd_dev *add_device(char *devname, int readonly, int erase_size memset(dev, 0, sizeof(struct blkmtd_dev)); dev->blkdev = bdev; if(!readonly) { - init_MUTEX(&dev->wrbuf_mutex); + mutex_init(&dev->wrbuf_mutex); } dev->mtd_info.size = dev->blkdev->bd_inode->i_size & PAGE_MASK; diff --git a/trunk/drivers/mtd/devices/block2mtd.c b/trunk/drivers/mtd/devices/block2mtd.c index 7ff403b2a0a0..4160b8334c53 100644 --- a/trunk/drivers/mtd/devices/block2mtd.c +++ b/trunk/drivers/mtd/devices/block2mtd.c @@ -18,6 +18,7 @@ #include #include #include +#include #define VERSION "$Revision: 1.30 $" @@ -31,7 +32,7 @@ struct block2mtd_dev { struct list_head list; struct block_device *blkdev; struct mtd_info mtd; - struct semaphore write_mutex; + struct mutex write_mutex; }; @@ -134,9 +135,9 @@ static int block2mtd_erase(struct mtd_info *mtd, struct erase_info *instr) int err; instr->state = MTD_ERASING; - down(&dev->write_mutex); + mutex_lock(&dev->write_mutex); err = _block2mtd_erase(dev, from, len); - up(&dev->write_mutex); + mutex_unlock(&dev->write_mutex); if (err) { ERROR("erase failed err = %d", err); instr->state = MTD_ERASE_FAILED; @@ -249,9 +250,9 @@ static int block2mtd_write(struct mtd_info *mtd, loff_t to, size_t len, if (to + len > mtd->size) len = mtd->size - to; - down(&dev->write_mutex); + mutex_lock(&dev->write_mutex); err = _block2mtd_write(dev, buf, to, len, retlen); - up(&dev->write_mutex); + mutex_unlock(&dev->write_mutex); if (err > 0) err = 0; return err; @@ -310,7 +311,7 @@ static struct block2mtd_dev *add_device(char *devname, int erase_size) goto devinit_err; } - 
init_MUTEX(&dev->write_mutex); + mutex_init(&dev->write_mutex); /* Setup the MTD structure */ /* make the name contain the block device in */ diff --git a/trunk/drivers/mtd/devices/doc2000.c b/trunk/drivers/mtd/devices/doc2000.c index e4345cf744a2..23e7a5c7d2c1 100644 --- a/trunk/drivers/mtd/devices/doc2000.c +++ b/trunk/drivers/mtd/devices/doc2000.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -605,7 +606,7 @@ static void DoC2k_init(struct mtd_info *mtd) this->curfloor = -1; this->curchip = -1; - init_MUTEX(&this->lock); + mutex_init(&this->lock); /* Ident all the chips present. */ DoC_ScanChips(this, maxchips); @@ -645,7 +646,7 @@ static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len, if (from >= this->totlen) return -EINVAL; - down(&this->lock); + mutex_lock(&this->lock); *retlen = 0; while (left) { @@ -774,7 +775,7 @@ static int doc_read_ecc(struct mtd_info *mtd, loff_t from, size_t len, buf += len; } - up(&this->lock); + mutex_unlock(&this->lock); return ret; } @@ -803,7 +804,7 @@ static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, if (to >= this->totlen) return -EINVAL; - down(&this->lock); + mutex_lock(&this->lock); *retlen = 0; while (left) { @@ -873,7 +874,7 @@ static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, printk(KERN_ERR "Error programming flash\n"); /* Error in programming */ *retlen = 0; - up(&this->lock); + mutex_unlock(&this->lock); return -EIO; } @@ -935,7 +936,7 @@ static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, printk(KERN_ERR "Error programming flash\n"); /* Error in programming */ *retlen = 0; - up(&this->lock); + mutex_unlock(&this->lock); return -EIO; } @@ -956,7 +957,7 @@ static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, ret = doc_write_oob_nolock(mtd, to, 8, &dummy, x); if (ret) { - up(&this->lock); + mutex_unlock(&this->lock); return ret; } } @@ -966,7 +967,7 @@ static int doc_write_ecc(struct mtd_info *mtd, loff_t to, size_t len, buf += len; } - up(&this->lock); + mutex_unlock(&this->lock); return 0; } @@ -975,13 +976,13 @@ static int doc_writev_ecc(struct mtd_info *mtd, const struct kvec *vecs, u_char *eccbuf, struct nand_oobinfo *oobsel) { static char static_buf[512]; - static DECLARE_MUTEX(writev_buf_sem); + static DEFINE_MUTEX(writev_buf_mutex); size_t totretlen = 0; size_t thisvecofs = 0; int ret= 0; - down(&writev_buf_sem); + mutex_lock(&writev_buf_mutex); while(count) { size_t thislen, thisretlen; @@ -1024,7 +1025,7 @@ static int doc_writev_ecc(struct mtd_info *mtd, const struct kvec *vecs, to += thislen; } - up(&writev_buf_sem); + mutex_unlock(&writev_buf_mutex); *retlen = totretlen; return ret; } @@ -1037,7 +1038,7 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, int len256 = 0, ret; struct Nand *mychip; - down(&this->lock); + mutex_lock(&this->lock); mychip = &this->chips[ofs >> this->chipshift]; @@ -1083,7 +1084,7 @@ static int doc_read_oob(struct mtd_info *mtd, loff_t ofs, size_t len, ret = DoC_WaitReady(this); - up(&this->lock); + mutex_unlock(&this->lock); return ret; } @@ -1197,10 +1198,10 @@ static int doc_write_oob(struct mtd_info *mtd, loff_t ofs, size_t len, struct DiskOnChip *this = mtd->priv; int ret; - down(&this->lock); + mutex_lock(&this->lock); ret = doc_write_oob_nolock(mtd, ofs, len, retlen, buf); - up(&this->lock); + mutex_unlock(&this->lock); return ret; } @@ -1214,10 +1215,10 @@ static int doc_erase(struct mtd_info *mtd, struct erase_info *instr) struct Nand *mychip; int status; - 
down(&this->lock); + mutex_lock(&this->lock); if (ofs & (mtd->erasesize-1) || len & (mtd->erasesize-1)) { - up(&this->lock); + mutex_unlock(&this->lock); return -EINVAL; } @@ -1265,7 +1266,7 @@ static int doc_erase(struct mtd_info *mtd, struct erase_info *instr) callback: mtd_erase_callback(instr); - up(&this->lock); + mutex_unlock(&this->lock); return 0; } diff --git a/trunk/drivers/mtd/devices/lart.c b/trunk/drivers/mtd/devices/lart.c index 1e876fcb0408..29b0ddaa324e 100644 --- a/trunk/drivers/mtd/devices/lart.c +++ b/trunk/drivers/mtd/devices/lart.c @@ -581,8 +581,6 @@ static int flash_write (struct mtd_info *mtd,loff_t to,size_t len,size_t *retlen /***************************************************************************************************/ -#define NB_OF(x) (sizeof (x) / sizeof (x[0])) - static struct mtd_info mtd; static struct mtd_erase_region_info erase_regions[] = { @@ -640,7 +638,7 @@ int __init lart_flash_init (void) mtd.flags = MTD_CAP_NORFLASH; mtd.size = FLASH_BLOCKSIZE_PARAM * FLASH_NUMBLOCKS_16m_PARAM + FLASH_BLOCKSIZE_MAIN * FLASH_NUMBLOCKS_16m_MAIN; mtd.erasesize = FLASH_BLOCKSIZE_MAIN; - mtd.numeraseregions = NB_OF (erase_regions); + mtd.numeraseregions = ARRAY_SIZE(erase_regions); mtd.eraseregions = erase_regions; mtd.erase = flash_erase; mtd.read = flash_read; @@ -670,9 +668,9 @@ int __init lart_flash_init (void) result,mtd.eraseregions[result].numblocks); #ifdef HAVE_PARTITIONS - printk ("\npartitions = %d\n",NB_OF (lart_partitions)); + printk ("\npartitions = %d\n", ARRAY_SIZE(lart_partitions)); - for (result = 0; result < NB_OF (lart_partitions); result++) + for (result = 0; result < ARRAY_SIZE(lart_partitions); result++) printk (KERN_DEBUG "\n\n" "lart_partitions[%d].name = %s\n" @@ -687,7 +685,7 @@ int __init lart_flash_init (void) #ifndef HAVE_PARTITIONS result = add_mtd_device (&mtd); #else - result = add_mtd_partitions (&mtd,lart_partitions,NB_OF (lart_partitions)); + result = add_mtd_partitions (&mtd,lart_partitions, ARRAY_SIZE(lart_partitions)); #endif return (result); diff --git a/trunk/drivers/mtd/devices/m25p80.c b/trunk/drivers/mtd/devices/m25p80.c index d5f24089be71..04e65d5dae00 100644 --- a/trunk/drivers/mtd/devices/m25p80.c +++ b/trunk/drivers/mtd/devices/m25p80.c @@ -186,7 +186,7 @@ static int m25p80_erase(struct mtd_info *mtd, struct erase_info *instr) struct m25p *flash = mtd_to_m25p(mtd); u32 addr,len; - DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%08x, len %zd\n", + DEBUG(MTD_DEBUG_LEVEL2, "%s: %s %s 0x%08x, len %d\n", flash->spi->dev.bus_id, __FUNCTION__, "at", (u32)instr->addr, instr->len); diff --git a/trunk/drivers/mtd/devices/ms02-nv.c b/trunk/drivers/mtd/devices/ms02-nv.c index 0ff2e4378244..485f663493d2 100644 --- a/trunk/drivers/mtd/devices/ms02-nv.c +++ b/trunk/drivers/mtd/devices/ms02-nv.c @@ -308,7 +308,7 @@ static int __init ms02nv_init(void) break; } - for (i = 0; i < (sizeof(ms02nv_addrs) / sizeof(*ms02nv_addrs)); i++) + for (i = 0; i < ARRAY_SIZE(ms02nv_addrs); i++) if (!ms02nv_init_one(ms02nv_addrs[i] << stride)) count++; diff --git a/trunk/drivers/mtd/inftlcore.c b/trunk/drivers/mtd/inftlcore.c index 8a544890173d..a3b92479719d 100644 --- a/trunk/drivers/mtd/inftlcore.c +++ b/trunk/drivers/mtd/inftlcore.c @@ -47,9 +47,6 @@ */ #define MAX_LOOPS 10000 -extern void INFTL_dumptables(struct INFTLrecord *inftl); -extern void INFTL_dumpVUchains(struct INFTLrecord *inftl); - static void inftl_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd) { struct INFTLrecord *inftl; @@ -132,7 +129,7 @@ static void inftl_add_mtd(struct 
mtd_blktrans_ops *tr, struct mtd_info *mtd) return; } #ifdef PSYCHO_DEBUG - printk(KERN_INFO "INFTL: Found new nftl%c\n", nftl->mbd.devnum + 'a'); + printk(KERN_INFO "INFTL: Found new inftl%c\n", inftl->mbd.devnum + 'a'); #endif return; } @@ -885,8 +882,6 @@ static struct mtd_blktrans_ops inftl_tr = { .owner = THIS_MODULE, }; -extern char inftlmountrev[]; - static int __init init_inftl(void) { printk(KERN_INFO "INFTL: inftlcore.c $Revision: 1.19 $, " diff --git a/trunk/drivers/mtd/maps/alchemy-flash.c b/trunk/drivers/mtd/maps/alchemy-flash.c index a57791a6ce40..b933a2a27b18 100644 --- a/trunk/drivers/mtd/maps/alchemy-flash.c +++ b/trunk/drivers/mtd/maps/alchemy-flash.c @@ -126,8 +126,6 @@ static struct mtd_partition alchemy_partitions[] = { } }; -#define NB_OF(x) (sizeof(x)/sizeof(x[0])) - static struct mtd_info *mymtd; int __init alchemy_mtd_init(void) @@ -154,7 +152,7 @@ int __init alchemy_mtd_init(void) * Static partition definition selection */ parts = alchemy_partitions; - nb_parts = NB_OF(alchemy_partitions); + nb_parts = ARRAY_SIZE(alchemy_partitions); alchemy_map.size = window_size; /* diff --git a/trunk/drivers/mtd/maps/cfi_flagadm.c b/trunk/drivers/mtd/maps/cfi_flagadm.c index 6a8c0415bde8..fd0f0d3187de 100644 --- a/trunk/drivers/mtd/maps/cfi_flagadm.c +++ b/trunk/drivers/mtd/maps/cfi_flagadm.c @@ -86,7 +86,7 @@ struct mtd_partition flagadm_parts[] = { } }; -#define PARTITION_COUNT (sizeof(flagadm_parts)/sizeof(struct mtd_partition)) +#define PARTITION_COUNT ARRAY_SIZE(flagadm_parts) static struct mtd_info *mymtd; diff --git a/trunk/drivers/mtd/maps/dbox2-flash.c b/trunk/drivers/mtd/maps/dbox2-flash.c index 49d90542fc75..652813cd6c2d 100644 --- a/trunk/drivers/mtd/maps/dbox2-flash.c +++ b/trunk/drivers/mtd/maps/dbox2-flash.c @@ -57,7 +57,7 @@ static struct mtd_partition partition_info[]= { } }; -#define NUM_PARTITIONS (sizeof(partition_info) / sizeof(partition_info[0])) +#define NUM_PARTITIONS ARRAY_SIZE(partition_info) #define WINDOW_ADDR 0x10000000 #define WINDOW_SIZE 0x800000 diff --git a/trunk/drivers/mtd/maps/dilnetpc.c b/trunk/drivers/mtd/maps/dilnetpc.c index efb221692641..c299d10b33e6 100644 --- a/trunk/drivers/mtd/maps/dilnetpc.c +++ b/trunk/drivers/mtd/maps/dilnetpc.c @@ -300,7 +300,7 @@ static struct mtd_partition partition_info[]= }, }; -#define NUM_PARTITIONS (sizeof(partition_info)/sizeof(partition_info[0])) +#define NUM_PARTITIONS ARRAY_SIZE(partition_info) static struct mtd_info *mymtd; static struct mtd_info *lowlvl_parts[NUM_PARTITIONS]; @@ -345,7 +345,7 @@ static struct mtd_partition higlvl_partition_info[]= }, }; -#define NUM_HIGHLVL_PARTITIONS (sizeof(higlvl_partition_info)/sizeof(partition_info[0])) +#define NUM_HIGHLVL_PARTITIONS ARRAY_SIZE(higlvl_partition_info) static int dnp_adnp_probe(void) diff --git a/trunk/drivers/mtd/maps/dmv182.c b/trunk/drivers/mtd/maps/dmv182.c index b993ac01a9a5..2bb3c0f0f970 100644 --- a/trunk/drivers/mtd/maps/dmv182.c +++ b/trunk/drivers/mtd/maps/dmv182.c @@ -99,7 +99,7 @@ static struct mtd_info *this_mtd; static int __init init_svme182(void) { struct mtd_partition *partitions; - int num_parts = sizeof(svme182_partitions) / sizeof(struct mtd_partition); + int num_parts = ARRAY_SIZE(svme182_partitions); partitions = svme182_partitions; diff --git a/trunk/drivers/mtd/maps/h720x-flash.c b/trunk/drivers/mtd/maps/h720x-flash.c index 319094821101..0667101ccbe1 100644 --- a/trunk/drivers/mtd/maps/h720x-flash.c +++ b/trunk/drivers/mtd/maps/h720x-flash.c @@ -59,7 +59,7 @@ static struct mtd_partition h720x_partitions[] = { } }; -#define 
NUM_PARTITIONS (sizeof(h720x_partitions)/sizeof(h720x_partitions[0])) +#define NUM_PARTITIONS ARRAY_SIZE(h720x_partitions) static int nr_mtd_parts; static struct mtd_partition *mtd_parts; diff --git a/trunk/drivers/mtd/maps/netsc520.c b/trunk/drivers/mtd/maps/netsc520.c index 33060a315722..ed215470158b 100644 --- a/trunk/drivers/mtd/maps/netsc520.c +++ b/trunk/drivers/mtd/maps/netsc520.c @@ -76,7 +76,7 @@ static struct mtd_partition partition_info[]={ .size = 0x80000 }, }; -#define NUM_PARTITIONS (sizeof(partition_info)/sizeof(partition_info[0])) +#define NUM_PARTITIONS ARRAY_SIZE(partition_info) #define WINDOW_SIZE 0x00100000 #define WINDOW_ADDR 0x00200000 @@ -88,7 +88,7 @@ static struct map_info netsc520_map = { .phys = WINDOW_ADDR, }; -#define NUM_FLASH_BANKS (sizeof(netsc520_map)/sizeof(struct map_info)) +#define NUM_FLASH_BANKS ARRAY_SIZE(netsc520_map) static struct mtd_info *mymtd; diff --git a/trunk/drivers/mtd/maps/nettel.c b/trunk/drivers/mtd/maps/nettel.c index 632eb2aa968f..54a3102ab19a 100644 --- a/trunk/drivers/mtd/maps/nettel.c +++ b/trunk/drivers/mtd/maps/nettel.c @@ -128,8 +128,7 @@ static struct mtd_partition nettel_amd_partitions[] = { } }; -#define NUM_AMD_PARTITIONS \ - (sizeof(nettel_amd_partitions)/sizeof(nettel_amd_partitions[0])) +#define NUM_AMD_PARTITIONS ARRAY_SIZE(nettel_amd_partitions) /****************************************************************************/ diff --git a/trunk/drivers/mtd/maps/ocotea.c b/trunk/drivers/mtd/maps/ocotea.c index c223514ca2eb..a21fcd195ab4 100644 --- a/trunk/drivers/mtd/maps/ocotea.c +++ b/trunk/drivers/mtd/maps/ocotea.c @@ -58,8 +58,6 @@ static struct mtd_partition ocotea_large_partitions[] = { } }; -#define NB_OF(x) (sizeof(x)/sizeof(x[0])) - int __init init_ocotea(void) { u8 fpga0_reg; @@ -97,7 +95,7 @@ int __init init_ocotea(void) if (flash) { flash->owner = THIS_MODULE; add_mtd_partitions(flash, ocotea_small_partitions, - NB_OF(ocotea_small_partitions)); + ARRAY_SIZE(ocotea_small_partitions)); } else { printk("map probe failed for flash\n"); return -ENXIO; @@ -118,7 +116,7 @@ int __init init_ocotea(void) if (flash) { flash->owner = THIS_MODULE; add_mtd_partitions(flash, ocotea_large_partitions, - NB_OF(ocotea_large_partitions)); + ARRAY_SIZE(ocotea_large_partitions)); } else { printk("map probe failed for flash\n"); return -ENXIO; diff --git a/trunk/drivers/mtd/maps/pci.c b/trunk/drivers/mtd/maps/pci.c index 21822c2edbe4..d2ab1bae9c34 100644 --- a/trunk/drivers/mtd/maps/pci.c +++ b/trunk/drivers/mtd/maps/pci.c @@ -334,9 +334,6 @@ mtd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) return 0; release: - if (mtd) - map_destroy(mtd); - if (map) { map->exit(dev, map); kfree(map); diff --git a/trunk/drivers/mtd/maps/pcmciamtd.c b/trunk/drivers/mtd/maps/pcmciamtd.c index a01e04bf484a..d27f4129afd3 100644 --- a/trunk/drivers/mtd/maps/pcmciamtd.c +++ b/trunk/drivers/mtd/maps/pcmciamtd.c @@ -610,7 +610,7 @@ static int pcmciamtd_config(struct pcmcia_device *link) } else if(mem_type == 2) { mtd = do_map_probe("map_rom", &dev->pcmcia_map); } else { - for(i = 0; i < sizeof(probes) / sizeof(char *); i++) { + for(i = 0; i < ARRAY_SIZE(probes); i++) { DEBUG(1, "Trying %s", probes[i]); mtd = do_map_probe(probes[i], &dev->pcmcia_map); if(mtd) diff --git a/trunk/drivers/mtd/maps/redwood.c b/trunk/drivers/mtd/maps/redwood.c index 5b76ed886185..50b14033613f 100644 --- a/trunk/drivers/mtd/maps/redwood.c +++ b/trunk/drivers/mtd/maps/redwood.c @@ -121,8 +121,7 @@ struct map_info redwood_flash_map = { }; -#define 
NUM_REDWOOD_FLASH_PARTITIONS \ - (sizeof(redwood_flash_partitions)/sizeof(redwood_flash_partitions[0])) +#define NUM_REDWOOD_FLASH_PARTITIONS ARRAY_SIZE(redwood_flash_partitions) static struct mtd_info *redwood_mtd; diff --git a/trunk/drivers/mtd/maps/sbc8240.c b/trunk/drivers/mtd/maps/sbc8240.c index 225cdd9ba5b2..350286dc1d2e 100644 --- a/trunk/drivers/mtd/maps/sbc8240.c +++ b/trunk/drivers/mtd/maps/sbc8240.c @@ -66,7 +66,7 @@ static struct map_info sbc8240_map[2] = { } }; -#define NUM_FLASH_BANKS (sizeof(sbc8240_map) / sizeof(struct map_info)) +#define NUM_FLASH_BANKS ARRAY_SIZE(sbc8240_map) /* * The following defines the partition layout of SBC8240 boards. @@ -125,8 +125,6 @@ static struct mtd_partition sbc8240_fs_partitions [] = { } }; -#define NB_OF(x) (sizeof (x) / sizeof (x[0])) - /* trivial struct to describe partition information */ struct mtd_part_def { @@ -190,10 +188,10 @@ int __init init_sbc8240_mtd (void) #ifdef CONFIG_MTD_PARTITIONS sbc8240_part_banks[0].mtd_part = sbc8240_uboot_partitions; sbc8240_part_banks[0].type = "static image"; - sbc8240_part_banks[0].nums = NB_OF(sbc8240_uboot_partitions); + sbc8240_part_banks[0].nums = ARRAY_SIZE(sbc8240_uboot_partitions); sbc8240_part_banks[1].mtd_part = sbc8240_fs_partitions; sbc8240_part_banks[1].type = "static file system"; - sbc8240_part_banks[1].nums = NB_OF(sbc8240_fs_partitions); + sbc8240_part_banks[1].nums = ARRAY_SIZE(sbc8240_fs_partitions); for (i = 0; i < NUM_FLASH_BANKS; i++) { diff --git a/trunk/drivers/mtd/maps/sc520cdp.c b/trunk/drivers/mtd/maps/sc520cdp.c index ed92afadd8a9..e8c130e1efd3 100644 --- a/trunk/drivers/mtd/maps/sc520cdp.c +++ b/trunk/drivers/mtd/maps/sc520cdp.c @@ -107,7 +107,7 @@ static struct map_info sc520cdp_map[] = { }, }; -#define NUM_FLASH_BANKS (sizeof(sc520cdp_map)/sizeof(struct map_info)) +#define NUM_FLASH_BANKS ARRAY_SIZE(sc520cdp_map) static struct mtd_info *mymtd[NUM_FLASH_BANKS]; static struct mtd_info *merged_mtd; diff --git a/trunk/drivers/mtd/maps/scx200_docflash.c b/trunk/drivers/mtd/maps/scx200_docflash.c index 2c91dff8bb60..28b8a571a91a 100644 --- a/trunk/drivers/mtd/maps/scx200_docflash.c +++ b/trunk/drivers/mtd/maps/scx200_docflash.c @@ -70,7 +70,7 @@ static struct mtd_partition partition_info[] = { .size = 0x80000 }, }; -#define NUM_PARTITIONS (sizeof(partition_info)/sizeof(partition_info[0])) +#define NUM_PARTITIONS ARRAY_SIZE(partition_info) #endif diff --git a/trunk/drivers/mtd/maps/sharpsl-flash.c b/trunk/drivers/mtd/maps/sharpsl-flash.c index 999f4bb3d845..12fe53c0d2fc 100644 --- a/trunk/drivers/mtd/maps/sharpsl-flash.c +++ b/trunk/drivers/mtd/maps/sharpsl-flash.c @@ -49,8 +49,6 @@ static struct mtd_partition sharpsl_partitions[1] = { } }; -#define NB_OF(x) (sizeof(x)/sizeof(x[0])) - int __init init_sharpsl(void) { struct mtd_partition *parts; @@ -92,7 +90,7 @@ int __init init_sharpsl(void) } parts = sharpsl_partitions; - nb_parts = NB_OF(sharpsl_partitions); + nb_parts = ARRAY_SIZE(sharpsl_partitions); printk(KERN_NOTICE "Using %s partision definition\n", part_type); add_mtd_partitions(mymtd, parts, nb_parts); diff --git a/trunk/drivers/mtd/maps/ts5500_flash.c b/trunk/drivers/mtd/maps/ts5500_flash.c index 4b372bcb17f1..a7422c200567 100644 --- a/trunk/drivers/mtd/maps/ts5500_flash.c +++ b/trunk/drivers/mtd/maps/ts5500_flash.c @@ -64,7 +64,7 @@ static struct mtd_partition ts5500_partitions[] = { } }; -#define NUM_PARTITIONS (sizeof(ts5500_partitions)/sizeof(struct mtd_partition)) +#define NUM_PARTITIONS ARRAY_SIZE(ts5500_partitions) static struct mtd_info *mymtd; diff --git 
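
The map-driver hunks above replace per-driver NB_OF()/NUM_PARTITIONS macros and open-coded sizeof(x)/sizeof(x[0]) expressions with the shared ARRAY_SIZE() helper from <linux/kernel.h>. A minimal sketch of the pattern, using a hypothetical partition table (names and layout invented for illustration):

#include <linux/init.h>
#include <linux/kernel.h>		/* ARRAY_SIZE() */
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

/* Hypothetical partition layout, for illustration only. */
static struct mtd_partition example_parts[] = {
	{ .name = "boot",   .offset = 0,       .size = 0x40000 },
	{ .name = "rootfs", .offset = 0x40000, .size = MTDPART_SIZ_FULL },
};

static int __init example_add_parts(struct mtd_info *mtd)
{
	/*
	 * ARRAY_SIZE(x) expands to sizeof(x)/sizeof((x)[0]); using the shared
	 * macro keeps the count in sync with the table and removes the private
	 * NB_OF()/NUM_PARTITIONS copies that each map driver carried.
	 */
	return add_mtd_partitions(mtd, example_parts, ARRAY_SIZE(example_parts));
}

add_mtd_partitions() itself is unchanged; only the way the element count is computed differs.
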
a/trunk/drivers/mtd/maps/uclinux.c b/trunk/drivers/mtd/maps/uclinux.c index 79d92808b766..f7264dc2ac9b 100644 --- a/trunk/drivers/mtd/maps/uclinux.c +++ b/trunk/drivers/mtd/maps/uclinux.c @@ -37,7 +37,7 @@ struct mtd_partition uclinux_romfs[] = { { .name = "ROMfs" } }; -#define NUM_PARTITIONS (sizeof(uclinux_romfs) / sizeof(uclinux_romfs[0])) +#define NUM_PARTITIONS ARRAY_SIZE(uclinux_romfs) /****************************************************************************/ diff --git a/trunk/drivers/mtd/maps/vmax301.c b/trunk/drivers/mtd/maps/vmax301.c index e0063941c0df..b3e487395435 100644 --- a/trunk/drivers/mtd/maps/vmax301.c +++ b/trunk/drivers/mtd/maps/vmax301.c @@ -182,7 +182,7 @@ int __init init_vmax301(void) } } - if (!vmax_mtd[1] && !vmax_mtd[2]) { + if (!vmax_mtd[0] && !vmax_mtd[1]) { iounmap((void *)iomapadr); return -ENXIO; } diff --git a/trunk/drivers/mtd/mtd_blkdevs.c b/trunk/drivers/mtd/mtd_blkdevs.c index 840dd66ce2dc..458d3c8ae1ee 100644 --- a/trunk/drivers/mtd/mtd_blkdevs.c +++ b/trunk/drivers/mtd/mtd_blkdevs.c @@ -19,12 +19,12 @@ #include #include #include -#include +#include #include static LIST_HEAD(blktrans_majors); -extern struct semaphore mtd_table_mutex; +extern struct mutex mtd_table_mutex; extern struct mtd_info *mtd_table[]; struct mtd_blkcore_priv { @@ -122,9 +122,9 @@ static int mtd_blktrans_thread(void *arg) spin_unlock_irq(rq->queue_lock); - down(&dev->sem); + mutex_lock(&dev->lock); res = do_blktrans_request(tr, dev, req); - up(&dev->sem); + mutex_unlock(&dev->lock); spin_lock_irq(rq->queue_lock); @@ -235,8 +235,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new) int last_devnum = -1; struct gendisk *gd; - if (!down_trylock(&mtd_table_mutex)) { - up(&mtd_table_mutex); + if (!!mutex_trylock(&mtd_table_mutex)) { + mutex_unlock(&mtd_table_mutex); BUG(); } @@ -267,7 +267,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new) return -EBUSY; } - init_MUTEX(&new->sem); + mutex_init(&new->lock); list_add_tail(&new->list, &tr->devs); added: if (!tr->writesect) @@ -313,8 +313,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new) int del_mtd_blktrans_dev(struct mtd_blktrans_dev *old) { - if (!down_trylock(&mtd_table_mutex)) { - up(&mtd_table_mutex); + if (!!mutex_trylock(&mtd_table_mutex)) { + mutex_unlock(&mtd_table_mutex); BUG(); } @@ -378,14 +378,14 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr) memset(tr->blkcore_priv, 0, sizeof(*tr->blkcore_priv)); - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); ret = register_blkdev(tr->major, tr->name); if (ret) { printk(KERN_WARNING "Unable to register %s block device on major %d: %d\n", tr->name, tr->major, ret); kfree(tr->blkcore_priv); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return ret; } spin_lock_init(&tr->blkcore_priv->queue_lock); @@ -396,7 +396,7 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr) if (!tr->blkcore_priv->rq) { unregister_blkdev(tr->major, tr->name); kfree(tr->blkcore_priv); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return -ENOMEM; } @@ -407,7 +407,7 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr) blk_cleanup_queue(tr->blkcore_priv->rq); unregister_blkdev(tr->major, tr->name); kfree(tr->blkcore_priv); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return ret; } @@ -419,7 +419,7 @@ int register_mtd_blktrans(struct mtd_blktrans_ops *tr) tr->add_mtd(tr, mtd_table[i]); } - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return 0; } @@ -428,7 +428,7 @@ int deregister_mtd_blktrans(struct mtd_blktrans_ops 
*tr) { struct list_head *this, *next; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); /* Clean up the kernel thread */ tr->blkcore_priv->exiting = 1; @@ -446,7 +446,7 @@ int deregister_mtd_blktrans(struct mtd_blktrans_ops *tr) blk_cleanup_queue(tr->blkcore_priv->rq); unregister_blkdev(tr->major, tr->name); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); kfree(tr->blkcore_priv); diff --git a/trunk/drivers/mtd/mtdblock.c b/trunk/drivers/mtd/mtdblock.c index e84756644fd1..2cef280e388c 100644 --- a/trunk/drivers/mtd/mtdblock.c +++ b/trunk/drivers/mtd/mtdblock.c @@ -19,11 +19,13 @@ #include #include +#include + static struct mtdblk_dev { struct mtd_info *mtd; int count; - struct semaphore cache_sem; + struct mutex cache_mutex; unsigned char *cache_data; unsigned long cache_offset; unsigned int cache_size; @@ -284,7 +286,7 @@ static int mtdblock_open(struct mtd_blktrans_dev *mbd) mtdblk->count = 1; mtdblk->mtd = mtd; - init_MUTEX (&mtdblk->cache_sem); + mutex_init(&mtdblk->cache_mutex); mtdblk->cache_state = STATE_EMPTY; if ((mtdblk->mtd->flags & MTD_CAP_RAM) != MTD_CAP_RAM && mtdblk->mtd->erasesize) { @@ -306,9 +308,9 @@ static int mtdblock_release(struct mtd_blktrans_dev *mbd) DEBUG(MTD_DEBUG_LEVEL1, "mtdblock_release\n"); - down(&mtdblk->cache_sem); + mutex_lock(&mtdblk->cache_mutex); write_cached_data(mtdblk); - up(&mtdblk->cache_sem); + mutex_unlock(&mtdblk->cache_mutex); if (!--mtdblk->count) { /* It was the last usage. Free the device */ @@ -327,9 +329,9 @@ static int mtdblock_flush(struct mtd_blktrans_dev *dev) { struct mtdblk_dev *mtdblk = mtdblks[dev->devnum]; - down(&mtdblk->cache_sem); + mutex_lock(&mtdblk->cache_mutex); write_cached_data(mtdblk); - up(&mtdblk->cache_sem); + mutex_unlock(&mtdblk->cache_mutex); if (mtdblk->mtd->sync) mtdblk->mtd->sync(mtdblk->mtd); diff --git a/trunk/drivers/mtd/mtdcore.c b/trunk/drivers/mtd/mtdcore.c index dade02ab0687..9905870f56e5 100644 --- a/trunk/drivers/mtd/mtdcore.c +++ b/trunk/drivers/mtd/mtdcore.c @@ -19,15 +19,13 @@ #include #include #include -#ifdef CONFIG_PROC_FS #include -#endif #include /* These are exported solely for the purpose of mtd_blkdevs.c. You should not use them for _anything_ else */ -DECLARE_MUTEX(mtd_table_mutex); +DEFINE_MUTEX(mtd_table_mutex); struct mtd_info *mtd_table[MAX_MTD_DEVICES]; EXPORT_SYMBOL_GPL(mtd_table_mutex); @@ -49,7 +47,7 @@ int add_mtd_device(struct mtd_info *mtd) { int i; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); for (i=0; i < MAX_MTD_DEVICES; i++) if (!mtd_table[i]) { @@ -67,7 +65,7 @@ int add_mtd_device(struct mtd_info *mtd) not->add(mtd); } - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); /* We _know_ we aren't being removed, because our caller is still holding us here. 
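
The mtd_blkdevs.c and mtdblock.c hunks above are a straight semaphore-to-mutex conversion: struct semaphore/init_MUTEX()/down()/up() become struct mutex/mutex_init()/mutex_lock()/mutex_unlock(), and the sanity check that the caller already holds mtd_table_mutex switches from down_trylock() to mutex_trylock(). The two trylock helpers have opposite return conventions (down_trylock() returns 0 when it takes the lock, mutex_trylock() returns nonzero), which is why the BUG() test keeps its meaning after the rewrite. A minimal sketch of the same conversion on a hypothetical device structure:

#include <linux/mutex.h>

struct example_dev {
	struct mutex lock;		/* was: struct semaphore sem */
	/* ... */
};

static void example_dev_init(struct example_dev *dev)
{
	mutex_init(&dev->lock);		/* was: init_MUTEX(&dev->sem) */
}

static int example_handle_request(struct example_dev *dev)
{
	int res = 0;

	mutex_lock(&dev->lock);		/* was: down(&dev->sem) */
	/* ... process one request under the lock ... */
	mutex_unlock(&dev->lock);	/* was: up(&dev->sem) */

	return res;
}
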
So none of this try_ nonsense, and no bitching about it @@ -76,7 +74,7 @@ int add_mtd_device(struct mtd_info *mtd) return 0; } - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return 1; } @@ -94,7 +92,7 @@ int del_mtd_device (struct mtd_info *mtd) { int ret; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); if (mtd_table[mtd->index] != mtd) { ret = -ENODEV; @@ -118,7 +116,7 @@ int del_mtd_device (struct mtd_info *mtd) ret = 0; } - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return ret; } @@ -135,7 +133,7 @@ void register_mtd_user (struct mtd_notifier *new) { int i; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); list_add(&new->list, &mtd_notifiers); @@ -145,7 +143,7 @@ void register_mtd_user (struct mtd_notifier *new) if (mtd_table[i]) new->add(mtd_table[i]); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); } /** @@ -162,7 +160,7 @@ int unregister_mtd_user (struct mtd_notifier *old) { int i; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); module_put(THIS_MODULE); @@ -171,7 +169,7 @@ int unregister_mtd_user (struct mtd_notifier *old) old->remove(mtd_table[i]); list_del(&old->list); - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return 0; } @@ -193,7 +191,7 @@ struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num) struct mtd_info *ret = NULL; int i; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); if (num == -1) { for (i=0; i< MAX_MTD_DEVICES; i++) @@ -211,7 +209,7 @@ struct mtd_info *get_mtd_device(struct mtd_info *mtd, int num) if (ret) ret->usecount++; - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); return ret; } @@ -219,9 +217,9 @@ void put_mtd_device(struct mtd_info *mtd) { int c; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); c = --mtd->usecount; - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); BUG_ON(c < 0); module_put(mtd->owner); @@ -296,10 +294,11 @@ EXPORT_SYMBOL(unregister_mtd_user); EXPORT_SYMBOL(default_mtd_writev); EXPORT_SYMBOL(default_mtd_readv); +#ifdef CONFIG_PROC_FS + /*====================================================================*/ /* Support for /proc/mtd */ -#ifdef CONFIG_PROC_FS static struct proc_dir_entry *proc_mtd; static inline int mtd_proc_info (char *buf, int i) @@ -319,7 +318,7 @@ static int mtd_read_proc (char *page, char **start, off_t off, int count, int len, l, i; off_t begin = 0; - down(&mtd_table_mutex); + mutex_lock(&mtd_table_mutex); len = sprintf(page, "dev: size erasesize name\n"); for (i=0; i< MAX_MTD_DEVICES; i++) { @@ -337,38 +336,34 @@ static int mtd_read_proc (char *page, char **start, off_t off, int count, *eof = 1; done: - up(&mtd_table_mutex); + mutex_unlock(&mtd_table_mutex); if (off >= len+begin) return 0; *start = page + (off-begin); return ((count < begin+len-off) ? 
count : begin+len-off); } -#endif /* CONFIG_PROC_FS */ - /*====================================================================*/ /* Init code */ static int __init init_mtd(void) { -#ifdef CONFIG_PROC_FS if ((proc_mtd = create_proc_entry( "mtd", 0, NULL ))) proc_mtd->read_proc = mtd_read_proc; -#endif return 0; } static void __exit cleanup_mtd(void) { -#ifdef CONFIG_PROC_FS if (proc_mtd) remove_proc_entry( "mtd", NULL); -#endif } module_init(init_mtd); module_exit(cleanup_mtd); +#endif /* CONFIG_PROC_FS */ + MODULE_LICENSE("GPL"); MODULE_AUTHOR("David Woodhouse "); diff --git a/trunk/drivers/mtd/nand/Kconfig b/trunk/drivers/mtd/nand/Kconfig index 1fc4c134d939..cfe288a6e853 100644 --- a/trunk/drivers/mtd/nand/Kconfig +++ b/trunk/drivers/mtd/nand/Kconfig @@ -178,17 +178,16 @@ config MTD_NAND_DISKONCHIP_BBTWRITE Even if you leave this disabled, you can enable BBT writes at module load time (assuming you build diskonchip as a module) with the module parameter "inftl_bbt_write=1". - - config MTD_NAND_SHARPSL - bool "Support for NAND Flash on Sharp SL Series (C7xx + others)" - depends on MTD_NAND && ARCH_PXA - - config MTD_NAND_NANDSIM - bool "Support for NAND Flash Simulator" - depends on MTD_NAND && MTD_PARTITIONS +config MTD_NAND_SHARPSL + tristate "Support for NAND Flash on Sharp SL Series (C7xx + others)" + depends on MTD_NAND && ARCH_PXA + +config MTD_NAND_NANDSIM + tristate "Support for NAND Flash Simulator" + depends on MTD_NAND && MTD_PARTITIONS help The simulator may simulate verious NAND flash chips for the MTD nand layer. - + endmenu diff --git a/trunk/drivers/mtd/nand/au1550nd.c b/trunk/drivers/mtd/nand/au1550nd.c index 201e1362da14..bde3550910a2 100644 --- a/trunk/drivers/mtd/nand/au1550nd.c +++ b/trunk/drivers/mtd/nand/au1550nd.c @@ -55,8 +55,6 @@ static const struct mtd_partition partition_info[] = { .size = MTDPART_SIZ_FULL } }; -#define NB_OF(x) (sizeof(x)/sizeof(x[0])) - /** * au_read_byte - read one byte from the chip @@ -462,7 +460,7 @@ int __init au1xxx_nand_init (void) } /* Register the partitions */ - add_mtd_partitions(au1550_mtd, partition_info, NB_OF(partition_info)); + add_mtd_partitions(au1550_mtd, partition_info, ARRAY_SIZE(partition_info)); return 0; diff --git a/trunk/drivers/mtd/nand/nand_base.c b/trunk/drivers/mtd/nand/nand_base.c index 5d222460b42a..95e96fa1fceb 100644 --- a/trunk/drivers/mtd/nand/nand_base.c +++ b/trunk/drivers/mtd/nand/nand_base.c @@ -80,6 +80,7 @@ #include #include #include +#include #include #ifdef CONFIG_MTD_PARTITIONS @@ -515,6 +516,8 @@ static int nand_block_checkbad (struct mtd_info *mtd, loff_t ofs, int getchip, i return nand_isbad_bbt (mtd, ofs, allowbbt); } +DEFINE_LED_TRIGGER(nand_led_trigger); + /* * Wait for the ready pin, after a command * The timeout is catched later. @@ -524,12 +527,14 @@ static void nand_wait_ready(struct mtd_info *mtd) struct nand_chip *this = mtd->priv; unsigned long timeo = jiffies + 2; + led_trigger_event(nand_led_trigger, LED_FULL); /* wait until command is processed or timeout occures */ do { if (this->dev_ready(mtd)) - return; + break; touch_softlockup_watchdog(); } while (time_before(jiffies, timeo)); + led_trigger_event(nand_led_trigger, LED_OFF); } /** @@ -817,6 +822,8 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *this, int state) else timeo += (HZ * 20) / 1000; + led_trigger_event(nand_led_trigger, LED_FULL); + /* Apply this short delay always to ensure that we do wait tWB in * any case on any machine. 
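
The mtdcore.c hunks above also gather everything related to /proc/mtd, including init_mtd()/cleanup_mtd() and the module_init()/module_exit() lines, under a single #ifdef CONFIG_PROC_FS block, since registering the proc entry is the only work those functions do. A minimal sketch of that era's read_proc interface under the same guard (all names hypothetical):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>

#ifdef CONFIG_PROC_FS

static struct proc_dir_entry *example_proc;

/* Old-style read_proc handler: format into 'page' and honour 'off'/'count'. */
static int example_read_proc(char *page, char **start, off_t off,
			     int count, int *eof, void *data)
{
	int len = sprintf(page, "example: everything is fine\n");

	if (off >= len) {
		*eof = 1;
		return 0;
	}
	*start = page + off;
	if (len - off > count)
		return count;
	*eof = 1;
	return len - off;
}

static int __init example_init(void)
{
	example_proc = create_proc_entry("example", 0, NULL);
	if (example_proc)
		example_proc->read_proc = example_read_proc;
	return 0;
}

static void __exit example_exit(void)
{
	if (example_proc)
		remove_proc_entry("example", NULL);
}

module_init(example_init);
module_exit(example_exit);

#endif /* CONFIG_PROC_FS */
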
*/ ndelay (100); @@ -840,6 +847,8 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *this, int state) } cond_resched(); } + led_trigger_event(nand_led_trigger, LED_OFF); + status = (int) this->read_byte(mtd); return status; } @@ -2724,6 +2733,21 @@ void nand_release (struct mtd_info *mtd) EXPORT_SYMBOL_GPL (nand_scan); EXPORT_SYMBOL_GPL (nand_release); + +static int __init nand_base_init(void) +{ + led_trigger_register_simple("nand-disk", &nand_led_trigger); + return 0; +} + +static void __exit nand_base_exit(void) +{ + led_trigger_unregister_simple(nand_led_trigger); +} + +module_init(nand_base_init); +module_exit(nand_base_exit); + MODULE_LICENSE ("GPL"); MODULE_AUTHOR ("Steven J. Hill , Thomas Gleixner "); MODULE_DESCRIPTION ("Generic NAND flash driver code"); diff --git a/trunk/drivers/mtd/redboot.c b/trunk/drivers/mtd/redboot.c index 8815c8dbef2d..c077d2ec9cdd 100644 --- a/trunk/drivers/mtd/redboot.c +++ b/trunk/drivers/mtd/redboot.c @@ -85,10 +85,6 @@ static int parse_redboot_partitions(struct mtd_info *master, numslots = (master->erasesize / sizeof(struct fis_image_desc)); for (i = 0; i < numslots; i++) { - if (buf[i].name[0] == 0xff) { - i = numslots; - break; - } if (!memcmp(buf[i].name, "FIS directory", 14)) { /* This is apparently the FIS directory entry for the * FIS directory itself. The FIS directory size is @@ -128,7 +124,7 @@ static int parse_redboot_partitions(struct mtd_info *master, struct fis_list *new_fl, **prev; if (buf[i].name[0] == 0xff) - break; + continue; if (!redboot_checksum(&buf[i])) break; diff --git a/trunk/drivers/net/3c59x.c b/trunk/drivers/net/3c59x.c index 70f63891b19c..274b0138d442 100644 --- a/trunk/drivers/net/3c59x.c +++ b/trunk/drivers/net/3c59x.c @@ -788,7 +788,7 @@ struct vortex_private { int options; /* User-settable misc. driver options. */ unsigned int media_override:4, /* Passed-in media type. */ default_media:4, /* Read from the EEPROM/Wn3_Config. */ - full_duplex:1, force_fd:1, autoselect:1, + full_duplex:1, autoselect:1, bus_master:1, /* Vortex can only do a fragment bus-m. */ full_bus_master_tx:1, full_bus_master_rx:2, /* Boomerang */ flow_ctrl:1, /* Use 802.3x flow control (PAUSE only) */ @@ -1633,12 +1633,6 @@ vortex_set_duplex(struct net_device *dev) ((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ? 0x100 : 0), ioaddr + Wn3_MAC_Ctrl); - - issue_and_wait(dev, TxReset); - /* - * Don't reset the PHY - that upsets autonegotiation during DHCP operations. - */ - issue_and_wait(dev, RxReset|0x04); } static void vortex_check_media(struct net_device *dev, unsigned int init) @@ -1663,7 +1657,7 @@ vortex_up(struct net_device *dev) struct vortex_private *vp = netdev_priv(dev); void __iomem *ioaddr = vp->ioaddr; unsigned int config; - int i; + int i, mii_reg1, mii_reg5; if (VORTEX_PCI(vp)) { pci_set_power_state(VORTEX_PCI(vp), PCI_D0); /* Go active */ @@ -1723,14 +1717,23 @@ vortex_up(struct net_device *dev) printk(KERN_DEBUG "vortex_up(): writing 0x%x to InternalConfig\n", config); iowrite32(config, ioaddr + Wn3_Config); - netif_carrier_off(dev); if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) { EL3WINDOW(4); + mii_reg1 = mdio_read(dev, vp->phys[0], MII_BMSR); + mii_reg5 = mdio_read(dev, vp->phys[0], MII_LPA); + vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0); + vortex_check_media(dev, 1); } else vortex_set_duplex(dev); + issue_and_wait(dev, TxReset); + /* + * Don't reset the PHY - that upsets autonegotiation during DHCP operations. 
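
The nand_base.c hunks above hook the LED trigger API into the NAND wait paths: a "nand-disk" simple trigger is registered at module init and pulsed around the command waits. A minimal sketch of the same pattern for a hypothetical driver (the trigger name and function names are illustrative):

#include <linux/init.h>
#include <linux/leds.h>
#include <linux/module.h>

/* Declares a static struct led_trigger pointer for this driver. */
DEFINE_LED_TRIGGER(example_activity_led);

static void example_do_io(void)
{
	led_trigger_event(example_activity_led, LED_FULL);	/* LED on */
	/* ... wait for the hardware ... */
	led_trigger_event(example_activity_led, LED_OFF);	/* LED off */
}

static int __init example_init(void)
{
	/* Makes "example-activity" selectable via /sys/class/leds/<led>/trigger. */
	led_trigger_register_simple("example-activity", &example_activity_led);
	return 0;
}

static void __exit example_exit(void)
{
	led_trigger_unregister_simple(example_activity_led);
}

module_init(example_init);
module_exit(example_exit);
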
+ */ + issue_and_wait(dev, RxReset|0x04); + iowrite16(SetStatusEnb | 0x00, ioaddr + EL3_CMD); @@ -2083,16 +2086,14 @@ vortex_error(struct net_device *dev, int status) } if (tx_status & 0x14) vp->stats.tx_fifo_errors++; if (tx_status & 0x38) vp->stats.tx_aborted_errors++; + if (tx_status & 0x08) vp->xstats.tx_max_collisions++; iowrite8(0, ioaddr + TxStatus); if (tx_status & 0x30) { /* txJabber or txUnderrun */ do_tx_reset = 1; - } else if (tx_status & 0x08) { /* maxCollisions */ - vp->xstats.tx_max_collisions++; - if (vp->drv_flags & MAX_COLLISION_RESET) { - do_tx_reset = 1; - reset_mask = 0x0108; /* Reset interface logic, but not download logic */ - } - } else { /* Merely re-enable the transmitter. */ + } else if ((tx_status & 0x08) && (vp->drv_flags & MAX_COLLISION_RESET)) { /* maxCollisions */ + do_tx_reset = 1; + reset_mask = 0x0108; /* Reset interface logic, but not download logic */ + } else { /* Merely re-enable the transmitter. */ iowrite16(TxEnable, ioaddr + EL3_CMD); } } diff --git a/trunk/drivers/net/Kconfig b/trunk/drivers/net/Kconfig index e20b849a22e8..bdaaad8f2123 100644 --- a/trunk/drivers/net/Kconfig +++ b/trunk/drivers/net/Kconfig @@ -2313,13 +2313,11 @@ config S2IO_NAPI endmenu -if !UML source "drivers/net/tokenring/Kconfig" source "drivers/net/wireless/Kconfig" source "drivers/net/pcmcia/Kconfig" -endif source "drivers/net/wan/Kconfig" diff --git a/trunk/drivers/net/arcnet/com90xx.c b/trunk/drivers/net/arcnet/com90xx.c index 43150b2bd13f..0d45553ff75c 100644 --- a/trunk/drivers/net/arcnet/com90xx.c +++ b/trunk/drivers/net/arcnet/com90xx.c @@ -125,11 +125,11 @@ static void __init com90xx_probe(void) if (!io && !irq && !shmem && !*device && com90xx_skip_probe) return; - shmems = kzalloc(((0x10000-0xa0000) / 0x800) * sizeof(unsigned long), + shmems = kzalloc(((0x100000-0xa0000) / 0x800) * sizeof(unsigned long), GFP_KERNEL); if (!shmems) return; - iomem = kzalloc(((0x10000-0xa0000) / 0x800) * sizeof(void __iomem *), + iomem = kzalloc(((0x100000-0xa0000) / 0x800) * sizeof(void __iomem *), GFP_KERNEL); if (!iomem) { kfree(shmems); diff --git a/trunk/drivers/net/ibmveth.c b/trunk/drivers/net/ibmveth.c index ceb98fd398af..52d01027d9e7 100644 --- a/trunk/drivers/net/ibmveth.c +++ b/trunk/drivers/net/ibmveth.c @@ -235,7 +235,7 @@ static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter, struc lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { pool->free_map[free_index] = index; pool->skbuff[index] = NULL; pool->consumer_index--; @@ -373,7 +373,7 @@ static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter) lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_debug_printk("h_add_logical_lan_buffer failed during recycle rc=%ld", lpar_rc); ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator); } @@ -511,7 +511,7 @@ static int ibmveth_open(struct net_device *netdev) adapter->filter_list_dma, mac_address); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_register_logical_lan failed with %ld\n", lpar_rc); ibmveth_error_printk("buffer TCE:0x%lx filter TCE:0x%lx rxq desc:0x%lx MAC:0x%lx\n", adapter->buffer_list_dma, @@ -527,7 +527,7 @@ static int ibmveth_open(struct net_device *netdev) ibmveth_error_printk("unable to request irq 0x%x, rc %d\n", netdev->irq, rc); do { rc = 
h_free_logical_lan(adapter->vdev->unit_address); - } while (H_isLongBusy(rc) || (rc == H_Busy)); + } while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY)); ibmveth_cleanup(adapter); return rc; @@ -556,9 +556,9 @@ static int ibmveth_close(struct net_device *netdev) do { lpar_rc = h_free_logical_lan(adapter->vdev->unit_address); - } while (H_isLongBusy(lpar_rc) || (lpar_rc == H_Busy)); + } while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY)); - if(lpar_rc != H_Success) + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_free_logical_lan failed with %lx, continuing with close\n", lpar_rc); @@ -693,9 +693,9 @@ static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *netdev) desc[4].desc, desc[5].desc, correlator); - } while ((lpar_rc == H_Busy) && (retry_count--)); + } while ((lpar_rc == H_BUSY) && (retry_count--)); - if(lpar_rc != H_Success && lpar_rc != H_Dropped) { + if(lpar_rc != H_SUCCESS && lpar_rc != H_DROPPED) { int i; ibmveth_error_printk("tx: h_send_logical_lan failed with rc=%ld\n", lpar_rc); for(i = 0; i < 6; i++) { @@ -786,14 +786,14 @@ static int ibmveth_poll(struct net_device *netdev, int *budget) /* we think we are done - reenable interrupts, then check once more to make sure we are done */ lpar_rc = h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_ENABLE); - ibmveth_assert(lpar_rc == H_Success); + ibmveth_assert(lpar_rc == H_SUCCESS); netif_rx_complete(netdev); if(ibmveth_rxq_pending_buffer(adapter) && netif_rx_reschedule(netdev, frames_processed)) { lpar_rc = h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE); - ibmveth_assert(lpar_rc == H_Success); + ibmveth_assert(lpar_rc == H_SUCCESS); more_work = 1; goto restart_poll; } @@ -813,7 +813,7 @@ static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance, struct pt_regs if(netif_rx_schedule_prep(netdev)) { lpar_rc = h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE); - ibmveth_assert(lpar_rc == H_Success); + ibmveth_assert(lpar_rc == H_SUCCESS); __netif_rx_schedule(netdev); } return IRQ_HANDLED; @@ -835,7 +835,7 @@ static void ibmveth_set_multicast_list(struct net_device *netdev) IbmVethMcastEnableRecv | IbmVethMcastDisableFiltering, 0); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_multicast_ctrl rc=%ld when entering promisc mode\n", lpar_rc); } } else { @@ -847,7 +847,7 @@ static void ibmveth_set_multicast_list(struct net_device *netdev) IbmVethMcastDisableFiltering | IbmVethMcastClearFilterTable, 0); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_multicast_ctrl rc=%ld when attempting to clear filter table\n", lpar_rc); } /* add the addresses to the filter table */ @@ -858,7 +858,7 @@ static void ibmveth_set_multicast_list(struct net_device *netdev) lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address, IbmVethMcastAddFilter, mcast_addr); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_multicast_ctrl rc=%ld when adding an entry to the filter table\n", lpar_rc); } } @@ -867,7 +867,7 @@ static void ibmveth_set_multicast_list(struct net_device *netdev) lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address, IbmVethMcastEnableFiltering, 0); - if(lpar_rc != H_Success) { + if(lpar_rc != H_SUCCESS) { ibmveth_error_printk("h_multicast_ctrl rc=%ld when enabling filtering\n", lpar_rc); } } diff --git a/trunk/drivers/net/netconsole.c b/trunk/drivers/net/netconsole.c index edd1b5306b16..75b35ad760de 100644 --- a/trunk/drivers/net/netconsole.c +++ b/trunk/drivers/net/netconsole.c @@ -94,7 +94,7 @@ static 
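
The ibmveth.c hunks above track the hypervisor call constant rename (H_Success becomes H_SUCCESS, H_isLongBusy() becomes H_IS_LONG_BUSY(), and so on); the retry-while-busy idiom itself is unchanged. A small sketch of that idiom with the post-rename names (the helper is hypothetical; plpar_hcall_norets() is the pseries primitive the driver wrappers boil down to):

#include <asm/hvcall.h>		/* H_SUCCESS, H_BUSY, H_IS_LONG_BUSY(), plpar_hcall_norets() */

/* Issue a no-output hcall, retrying while the hypervisor reports busy. */
static long example_hcall_retry(unsigned long opcode, unsigned long unit_address)
{
	long rc;

	do {
		rc = plpar_hcall_norets(opcode, unit_address);
	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));

	return rc;		/* H_SUCCESS when the call went through */
}
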
struct console netconsole = { static int option_setup(char *opt) { configured = !netpoll_parse_options(&np, opt); - return 0; + return 1; } __setup("netconsole=", option_setup); diff --git a/trunk/drivers/net/pcmcia/xirc2ps_cs.c b/trunk/drivers/net/pcmcia/xirc2ps_cs.c index a92a3134c833..71f45056a70c 100644 --- a/trunk/drivers/net/pcmcia/xirc2ps_cs.c +++ b/trunk/drivers/net/pcmcia/xirc2ps_cs.c @@ -1940,7 +1940,7 @@ static int __init setup_xirc2ps_cs(char *str) MAYBE_SET(lockup_hack, 6); #undef MAYBE_SET - return 0; + return 1; } __setup("xirc2ps_cs=", setup_xirc2ps_cs); diff --git a/trunk/drivers/net/tg3.c b/trunk/drivers/net/tg3.c index 964c09644832..770e6b6cec60 100644 --- a/trunk/drivers/net/tg3.c +++ b/trunk/drivers/net/tg3.c @@ -69,8 +69,8 @@ #define DRV_MODULE_NAME "tg3" #define PFX DRV_MODULE_NAME ": " -#define DRV_MODULE_VERSION "3.55" -#define DRV_MODULE_RELDATE "Mar 27, 2006" +#define DRV_MODULE_VERSION "3.56" +#define DRV_MODULE_RELDATE "Apr 1, 2006" #define TG3_DEF_MAC_MODE 0 #define TG3_DEF_RX_MODE 0 @@ -497,40 +497,33 @@ static void tg3_write_mem(struct tg3 *tp, u32 off, u32 val) unsigned long flags; spin_lock_irqsave(&tp->indirect_lock, flags); - if (tp->write32 != tg3_write_indirect_reg32) { - tw32_f(TG3PCI_MEM_WIN_BASE_ADDR, off); - tw32_f(TG3PCI_MEM_WIN_DATA, val); + pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, off); + pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_DATA, val); - /* Always leave this as zero. */ - tw32_f(TG3PCI_MEM_WIN_BASE_ADDR, 0); - } else { - pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, off); - pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_DATA, val); - - /* Always leave this as zero. */ - pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, 0); - } + /* Always leave this as zero. */ + pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, 0); spin_unlock_irqrestore(&tp->indirect_lock, flags); } +static void tg3_write_mem_fast(struct tg3 *tp, u32 off, u32 val) +{ + /* If no workaround is needed, write to mem space directly */ + if (tp->write32 != tg3_write_indirect_reg32) + tw32(NIC_SRAM_WIN_BASE + off, val); + else + tg3_write_mem(tp, off, val); +} + static void tg3_read_mem(struct tg3 *tp, u32 off, u32 *val) { unsigned long flags; spin_lock_irqsave(&tp->indirect_lock, flags); - if (tp->write32 != tg3_write_indirect_reg32) { - tw32_f(TG3PCI_MEM_WIN_BASE_ADDR, off); - *val = tr32(TG3PCI_MEM_WIN_DATA); + pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, off); + pci_read_config_dword(tp->pdev, TG3PCI_MEM_WIN_DATA, val); - /* Always leave this as zero. */ - tw32_f(TG3PCI_MEM_WIN_BASE_ADDR, 0); - } else { - pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, off); - pci_read_config_dword(tp->pdev, TG3PCI_MEM_WIN_DATA, val); - - /* Always leave this as zero. */ - pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, 0); - } + /* Always leave this as zero. */ + pci_write_config_dword(tp->pdev, TG3PCI_MEM_WIN_BASE_ADDR, 0); spin_unlock_irqrestore(&tp->indirect_lock, flags); } @@ -1374,12 +1367,12 @@ static int tg3_set_power_state(struct tg3 *tp, pci_power_t state) } } - tg3_write_sig_post_reset(tp, RESET_KIND_SHUTDOWN); - /* Finally, set the new power state. 
*/ pci_write_config_word(tp->pdev, pm + PCI_PM_CTRL, power_control); udelay(100); /* Delay after power state change */ + tg3_write_sig_post_reset(tp, RESET_KIND_SHUTDOWN); + return 0; } @@ -6547,11 +6540,11 @@ static void tg3_timer(unsigned long __opaque) if (tp->tg3_flags & TG3_FLAG_ENABLE_ASF) { u32 val; - tg3_write_mem(tp, NIC_SRAM_FW_CMD_MBOX, - FWCMD_NICDRV_ALIVE2); - tg3_write_mem(tp, NIC_SRAM_FW_CMD_LEN_MBOX, 4); + tg3_write_mem_fast(tp, NIC_SRAM_FW_CMD_MBOX, + FWCMD_NICDRV_ALIVE2); + tg3_write_mem_fast(tp, NIC_SRAM_FW_CMD_LEN_MBOX, 4); /* 5 seconds timeout */ - tg3_write_mem(tp, NIC_SRAM_FW_CMD_DATA_MBOX, 5); + tg3_write_mem_fast(tp, NIC_SRAM_FW_CMD_DATA_MBOX, 5); val = tr32(GRC_RX_CPU_EVENT); val |= (1 << 14); tw32(GRC_RX_CPU_EVENT, val); diff --git a/trunk/drivers/net/tokenring/Kconfig b/trunk/drivers/net/tokenring/Kconfig index e4cfc80b283b..99c4c1922f19 100644 --- a/trunk/drivers/net/tokenring/Kconfig +++ b/trunk/drivers/net/tokenring/Kconfig @@ -3,7 +3,7 @@ # menu "Token Ring devices" - depends on NETDEVICES + depends on NETDEVICES && !UML # So far, we only have PCI, ISA, and MCA token ring devices config TR diff --git a/trunk/drivers/net/wireless/Kconfig b/trunk/drivers/net/wireless/Kconfig index f85e30190008..bad09ebdb50b 100644 --- a/trunk/drivers/net/wireless/Kconfig +++ b/trunk/drivers/net/wireless/Kconfig @@ -356,7 +356,7 @@ config PCI_HERMES config ATMEL tristate "Atmel at76c50x chipset 802.11b support" - depends on NET_RADIO + depends on NET_RADIO && (PCI || PCMCIA) select FW_LOADER select CRC32 ---help--- diff --git a/trunk/drivers/pcmcia/vrc4171_card.c b/trunk/drivers/pcmcia/vrc4171_card.c index 0574efd7828a..459e6e1946fd 100644 --- a/trunk/drivers/pcmcia/vrc4171_card.c +++ b/trunk/drivers/pcmcia/vrc4171_card.c @@ -634,7 +634,7 @@ static void vrc4171_remove_sockets(void) static int __devinit vrc4171_card_setup(char *options) { if (options == NULL || *options == '\0') - return 0; + return 1; if (strncmp(options, "irq:", 4) == 0) { int irq; @@ -644,7 +644,7 @@ static int __devinit vrc4171_card_setup(char *options) vrc4171_irq = irq; if (*options != ',') - return 0; + return 1; options++; } @@ -663,10 +663,10 @@ static int __devinit vrc4171_card_setup(char *options) } if (*options != ',') - return 0; + return 1; options++; } else - return 0; + return 1; } @@ -688,7 +688,7 @@ static int __devinit vrc4171_card_setup(char *options) } if (*options != ',') - return 0; + return 1; options++; if (strncmp(options, "memnoprobe", 10) == 0) @@ -700,7 +700,7 @@ static int __devinit vrc4171_card_setup(char *options) } } - return 0; + return 1; } __setup("vrc4171_card=", vrc4171_card_setup); diff --git a/trunk/drivers/pcmcia/vrc4173_cardu.c b/trunk/drivers/pcmcia/vrc4173_cardu.c index 57f38dba0a48..6004196f7cc1 100644 --- a/trunk/drivers/pcmcia/vrc4173_cardu.c +++ b/trunk/drivers/pcmcia/vrc4173_cardu.c @@ -516,7 +516,7 @@ static int __devinit vrc4173_cardu_probe(struct pci_dev *dev, static int __devinit vrc4173_cardu_setup(char *options) { if (options == NULL || *options == '\0') - return 0; + return 1; if (strncmp(options, "cardu1:", 7) == 0) { options += 7; @@ -527,9 +527,9 @@ static int __devinit vrc4173_cardu_setup(char *options) } if (*options != ',') - return 0; + return 1; } else - return 0; + return 1; } if (strncmp(options, "cardu2:", 7) == 0) { @@ -538,7 +538,7 @@ static int __devinit vrc4173_cardu_setup(char *options) cardu_sockets[CARDU2].noprobe = 1; } - return 0; + return 1; } __setup("vrc4173_cardu=", vrc4173_cardu_setup); diff --git a/trunk/drivers/scsi/ahci.c 
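
Several __setup() handlers above (netconsole, xirc2ps_cs, vrc4171_card, vrc4173_cardu, and ibmmcascsi earlier) now return 1 instead of 0. For old-style __setup() options a nonzero return means "option consumed"; returning 0 marks the option as unhandled, so it is reported as unknown and passed on towards init. A minimal sketch of the convention (option name and parsing are illustrative):

#include <linux/init.h>
#include <linux/kernel.h>

static int example_level;

/* Parses "example=<n>" on the kernel command line. */
static int __init example_setup(char *str)
{
	example_level = simple_strtoul(str, NULL, 0);

	/*
	 * Return 1: the option was handled here. Returning 0 would leave
	 * "example=..." flagged as an unknown boot option.
	 */
	return 1;
}
__setup("example=", example_setup);
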
b/trunk/drivers/scsi/ahci.c index ffba65656a83..1bd82c4e52a0 100644 --- a/trunk/drivers/scsi/ahci.c +++ b/trunk/drivers/scsi/ahci.c @@ -293,6 +293,10 @@ static const struct pci_device_id ahci_pci_tbl[] = { board_ahci }, /* JMicron JMB360 */ { 0x197b, 0x2363, PCI_ANY_ID, PCI_ANY_ID, 0, 0, board_ahci }, /* JMicron JMB363 */ + { PCI_VENDOR_ID_ATI, 0x4380, PCI_ANY_ID, PCI_ANY_ID, 0, 0, + board_ahci }, /* ATI SB600 non-raid */ + { PCI_VENDOR_ID_ATI, 0x4381, PCI_ANY_ID, PCI_ANY_ID, 0, 0, + board_ahci }, /* ATI SB600 raid */ { } /* terminate list */ }; diff --git a/trunk/drivers/scsi/ata_piix.c b/trunk/drivers/scsi/ata_piix.c index 2d5be84d8bd4..24e71b555172 100644 --- a/trunk/drivers/scsi/ata_piix.c +++ b/trunk/drivers/scsi/ata_piix.c @@ -301,7 +301,7 @@ static struct piix_map_db ich6_map_db = { .mask = 0x3, .map = { /* PM PS SM SS MAP */ - { P0, P1, P2, P3 }, /* 00b */ + { P0, P2, P1, P3 }, /* 00b */ { IDE, IDE, P1, P3 }, /* 01b */ { P0, P2, IDE, IDE }, /* 10b */ { RV, RV, RV, RV }, @@ -312,7 +312,7 @@ static struct piix_map_db ich6m_map_db = { .mask = 0x3, .map = { /* PM PS SM SS MAP */ - { P0, P1, P2, P3 }, /* 00b */ + { P0, P2, RV, RV }, /* 00b */ { RV, RV, RV, RV }, { P0, P2, IDE, IDE }, /* 10b */ { RV, RV, RV, RV }, diff --git a/trunk/drivers/scsi/ibmmca.c b/trunk/drivers/scsi/ibmmca.c index 3a8462e8d063..24eb59e143a9 100644 --- a/trunk/drivers/scsi/ibmmca.c +++ b/trunk/drivers/scsi/ibmmca.c @@ -2488,7 +2488,7 @@ static int option_setup(char *str) } ints[0] = i - 1; internal_ibmmca_scsi_setup(cur, ints); - return 0; + return 1; } __setup("ibmmcascsi=", option_setup); diff --git a/trunk/drivers/scsi/ibmvscsi/rpa_vscsi.c b/trunk/drivers/scsi/ibmvscsi/rpa_vscsi.c index f47dd87c05e7..892e8ed63091 100644 --- a/trunk/drivers/scsi/ibmvscsi/rpa_vscsi.c +++ b/trunk/drivers/scsi/ibmvscsi/rpa_vscsi.c @@ -80,7 +80,7 @@ void ibmvscsi_release_crq_queue(struct crq_queue *queue, tasklet_kill(&hostdata->srp_task); do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); - } while ((rc == H_Busy) || (H_isLongBusy(rc))); + } while ((rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); dma_unmap_single(hostdata->dev, queue->msg_token, queue->size * sizeof(*queue->msgs), DMA_BIDIRECTIONAL); @@ -230,7 +230,7 @@ int ibmvscsi_init_crq_queue(struct crq_queue *queue, rc = plpar_hcall_norets(H_REG_CRQ, vdev->unit_address, queue->msg_token, PAGE_SIZE); - if (rc == H_Resource) + if (rc == H_RESOURCE) /* maybe kexecing and resource is busy. 
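
The ahci.c hunk above extends the driver's PCI ID table with the ATI SB600 devices; new IDs slot in before the empty terminator entry. A bare-bones sketch of the table shape, reusing the SB600 vendor/device pair from the hunk but with a hypothetical board index:

#include <linux/module.h>
#include <linux/pci.h>

enum { board_example };		/* driver-private board index */

static const struct pci_device_id example_pci_tbl[] = {
	/* vendor, device, subvendor, subdevice, class, class_mask, driver_data */
	{ PCI_VENDOR_ID_ATI, 0x4380, PCI_ANY_ID, PCI_ANY_ID, 0, 0, board_example },
	{ }	/* terminate list */
};
MODULE_DEVICE_TABLE(pci, example_pci_tbl);
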
try a reset */ rc = ibmvscsi_reset_crq_queue(queue, hostdata); @@ -269,7 +269,7 @@ int ibmvscsi_init_crq_queue(struct crq_queue *queue, req_irq_failed: do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); - } while ((rc == H_Busy) || (H_isLongBusy(rc))); + } while ((rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); reg_crq_failed: dma_unmap_single(hostdata->dev, queue->msg_token, @@ -295,7 +295,7 @@ int ibmvscsi_reenable_crq_queue(struct crq_queue *queue, /* Re-enable the CRQ */ do { rc = plpar_hcall_norets(H_ENABLE_CRQ, vdev->unit_address); - } while ((rc == H_InProgress) || (rc == H_Busy) || (H_isLongBusy(rc))); + } while ((rc == H_IN_PROGRESS) || (rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); if (rc) printk(KERN_ERR "ibmvscsi: Error %d enabling adapter\n", rc); @@ -317,7 +317,7 @@ int ibmvscsi_reset_crq_queue(struct crq_queue *queue, /* Close the CRQ */ do { rc = plpar_hcall_norets(H_FREE_CRQ, vdev->unit_address); - } while ((rc == H_Busy) || (H_isLongBusy(rc))); + } while ((rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); /* Clean out the queue */ memset(queue->msgs, 0x00, PAGE_SIZE); diff --git a/trunk/drivers/scsi/libata-core.c b/trunk/drivers/scsi/libata-core.c index 21b0ed583b8a..e63c1ff1e102 100644 --- a/trunk/drivers/scsi/libata-core.c +++ b/trunk/drivers/scsi/libata-core.c @@ -278,7 +278,7 @@ static void ata_unpack_xfermask(unsigned int xfer_mask, } static const struct ata_xfer_ent { - unsigned int shift, bits; + int shift, bits; u8 base; } ata_xfer_tbl[] = { { ATA_SHIFT_PIO, ATA_BITS_PIO, XFER_PIO_0 }, @@ -989,9 +989,7 @@ ata_exec_internal(struct ata_port *ap, struct ata_device *dev, qc->private_data = &wait; qc->complete_fn = ata_qc_complete_internal; - qc->err_mask = ata_qc_issue(qc); - if (qc->err_mask) - ata_qc_complete(qc); + ata_qc_issue(qc); spin_unlock_irqrestore(&ap->host_set->lock, flags); @@ -3997,15 +3995,14 @@ static inline int ata_should_dma_map(struct ata_queued_cmd *qc) * * LOCKING: * spin_lock_irqsave(host_set lock) - * - * RETURNS: - * Zero on success, AC_ERR_* mask on failure */ - -unsigned int ata_qc_issue(struct ata_queued_cmd *qc) +void ata_qc_issue(struct ata_queued_cmd *qc) { struct ata_port *ap = qc->ap; + qc->ap->active_tag = qc->tag; + qc->flags |= ATA_QCFLAG_ACTIVE; + if (ata_should_dma_map(qc)) { if (qc->flags & ATA_QCFLAG_SG) { if (ata_sg_setup(qc)) @@ -4020,17 +4017,18 @@ unsigned int ata_qc_issue(struct ata_queued_cmd *qc) ap->ops->qc_prep(qc); - qc->ap->active_tag = qc->tag; - qc->flags |= ATA_QCFLAG_ACTIVE; - - return ap->ops->qc_issue(qc); + qc->err_mask |= ap->ops->qc_issue(qc); + if (unlikely(qc->err_mask)) + goto err; + return; sg_err: qc->flags &= ~ATA_QCFLAG_DMAMAP; - return AC_ERR_SYSTEM; + qc->err_mask |= AC_ERR_SYSTEM; +err: + ata_qc_complete(qc); } - /** * ata_qc_issue_prot - issue taskfile to device in proto-dependent manner * @qc: command to issue to device diff --git a/trunk/drivers/scsi/libata-scsi.c b/trunk/drivers/scsi/libata-scsi.c index 628191bfd990..53f5b0d9161c 100644 --- a/trunk/drivers/scsi/libata-scsi.c +++ b/trunk/drivers/scsi/libata-scsi.c @@ -1431,9 +1431,7 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev, goto early_finish; /* select device, send command to hardware */ - qc->err_mask = ata_qc_issue(qc); - if (qc->err_mask) - ata_qc_complete(qc); + ata_qc_issue(qc); VPRINTK("EXIT\n"); return; @@ -2199,9 +2197,7 @@ static void atapi_request_sense(struct ata_queued_cmd *qc) qc->complete_fn = atapi_sense_complete; - qc->err_mask = ata_qc_issue(qc); - if (qc->err_mask) - ata_qc_complete(qc); + 
ata_qc_issue(qc); DPRINTK("EXIT\n"); } diff --git a/trunk/drivers/scsi/libata.h b/trunk/drivers/scsi/libata.h index 65f52beea884..1c755b14521a 100644 --- a/trunk/drivers/scsi/libata.h +++ b/trunk/drivers/scsi/libata.h @@ -47,7 +47,7 @@ extern struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, extern int ata_rwcmd_protocol(struct ata_queued_cmd *qc); extern void ata_port_flush_task(struct ata_port *ap); extern void ata_qc_free(struct ata_queued_cmd *qc); -extern unsigned int ata_qc_issue(struct ata_queued_cmd *qc); +extern void ata_qc_issue(struct ata_queued_cmd *qc); extern int ata_check_atapi_dma(struct ata_queued_cmd *qc); extern void ata_dev_select(struct ata_port *ap, unsigned int device, unsigned int wait, unsigned int can_sleep); diff --git a/trunk/drivers/serial/Kconfig b/trunk/drivers/serial/Kconfig index fe0d8b8e91c8..7d22dc0478d3 100644 --- a/trunk/drivers/serial/Kconfig +++ b/trunk/drivers/serial/Kconfig @@ -63,6 +63,33 @@ config SERIAL_8250_CONSOLE If unsure, say N. +config SERIAL_8250_GSC + tristate + depends on SERIAL_8250 && GSC + default SERIAL_8250 + +config SERIAL_8250_PCI + tristate "8250/16550 PCI device support" if EMBEDDED + depends on SERIAL_8250 && PCI + default SERIAL_8250 + help + This builds standard PCI serial support. You may be able to + disable this feature if you only need legacy serial support. + Saves about 9K. + +config SERIAL_8250_PNP + tristate "8250/16550 PNP device support" if EMBEDDED + depends on SERIAL_8250 && PNP + default SERIAL_8250 + help + This builds standard PNP serial support. You may be able to + disable this feature if you only need legacy serial support. + +config SERIAL_8250_HP300 + tristate + depends on SERIAL_8250 && HP300 + default SERIAL_8250 + config SERIAL_8250_CS tristate "8250/16550 PCMCIA device support" depends on PCMCIA && SERIAL_8250 diff --git a/trunk/drivers/serial/Makefile b/trunk/drivers/serial/Makefile index d2b4c214876b..0a71bf68a03f 100644 --- a/trunk/drivers/serial/Makefile +++ b/trunk/drivers/serial/Makefile @@ -4,15 +4,13 @@ # $Id: Makefile,v 1.8 2002/07/21 21:32:30 rmk Exp $ # -serial-8250-y := -serial-8250-$(CONFIG_PNP) += 8250_pnp.o -serial-8250-$(CONFIG_GSC) += 8250_gsc.o -serial-8250-$(CONFIG_PCI) += 8250_pci.o -serial-8250-$(CONFIG_HP300) += 8250_hp300.o - obj-$(CONFIG_SERIAL_CORE) += serial_core.o obj-$(CONFIG_SERIAL_21285) += 21285.o -obj-$(CONFIG_SERIAL_8250) += 8250.o $(serial-8250-y) +obj-$(CONFIG_SERIAL_8250) += 8250.o +obj-$(CONFIG_SERIAL_8250_PNP) += 8250_pnp.o +obj-$(CONFIG_SERIAL_8250_GSC) += 8250_gsc.o +obj-$(CONFIG_SERIAL_8250_PCI) += 8250_pci.o +obj-$(CONFIG_SERIAL_8250_HP300) += 8250_hp300.o obj-$(CONFIG_SERIAL_8250_CS) += serial_cs.o obj-$(CONFIG_SERIAL_8250_ACORN) += 8250_acorn.o obj-$(CONFIG_SERIAL_8250_CONSOLE) += 8250_early.o diff --git a/trunk/drivers/serial/jsm/jsm_tty.c b/trunk/drivers/serial/jsm/jsm_tty.c index 4d48b625cd3d..7d823705193c 100644 --- a/trunk/drivers/serial/jsm/jsm_tty.c +++ b/trunk/drivers/serial/jsm/jsm_tty.c @@ -142,12 +142,14 @@ static void jsm_tty_send_xchar(struct uart_port *port, char ch) { unsigned long lock_flags; struct jsm_channel *channel = (struct jsm_channel *)port; + struct termios *termios; spin_lock_irqsave(&port->lock, lock_flags); - if (ch == port->info->tty->termios->c_cc[VSTART]) + termios = port->info->tty->termios; + if (ch == termios->c_cc[VSTART]) channel->ch_bd->bd_ops->send_start_character(channel); - if (ch == port->info->tty->termios->c_cc[VSTOP]) + if (ch == termios->c_cc[VSTOP]) channel->ch_bd->bd_ops->send_stop_character(channel); 
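
The libata hunks above turn ata_qc_issue() into a void function that finishes the command itself on failure, so ata_exec_internal(), ata_scsi_translate() and atapi_request_sense() no longer each repeat the "check err_mask, then complete" step. A purely illustrative sketch of that shape with invented types (not the real libata structures):

/* Hypothetical command structure, for illustration only. */
struct ex_cmd {
	unsigned int err_mask;
	void (*complete_fn)(struct ex_cmd *cmd);
};

/*
 * New shape: the issue path owns error completion. Callers simply issue
 * the command; a failed command is completed here, in one place, instead
 * of at every call site.
 */
static void ex_issue(struct ex_cmd *cmd, unsigned int (*hw_issue)(struct ex_cmd *))
{
	cmd->err_mask |= hw_issue(cmd);
	if (cmd->err_mask)
		cmd->complete_fn(cmd);
}
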
spin_unlock_irqrestore(&port->lock, lock_flags); } @@ -178,6 +180,7 @@ static int jsm_tty_open(struct uart_port *port) struct jsm_board *brd; int rc = 0; struct jsm_channel *channel = (struct jsm_channel *)port; + struct termios *termios; /* Get board pointer from our array of majors we have allocated */ brd = channel->ch_bd; @@ -239,12 +242,13 @@ static int jsm_tty_open(struct uart_port *port) channel->ch_cached_lsr = 0; channel->ch_stops_sent = 0; - channel->ch_c_cflag = port->info->tty->termios->c_cflag; - channel->ch_c_iflag = port->info->tty->termios->c_iflag; - channel->ch_c_oflag = port->info->tty->termios->c_oflag; - channel->ch_c_lflag = port->info->tty->termios->c_lflag; - channel->ch_startc = port->info->tty->termios->c_cc[VSTART]; - channel->ch_stopc = port->info->tty->termios->c_cc[VSTOP]; + termios = port->info->tty->termios; + channel->ch_c_cflag = termios->c_cflag; + channel->ch_c_iflag = termios->c_iflag; + channel->ch_c_oflag = termios->c_oflag; + channel->ch_c_lflag = termios->c_lflag; + channel->ch_startc = termios->c_cc[VSTART]; + channel->ch_stopc = termios->c_cc[VSTOP]; /* Tell UART to init itself */ brd->bd_ops->uart_init(channel); @@ -784,6 +788,7 @@ static void jsm_carrier(struct jsm_channel *ch) void jsm_check_queue_flow_control(struct jsm_channel *ch) { + struct board_ops *bd_ops = ch->ch_bd->bd_ops; int qleft = 0; /* Store how much space we have left in the queue */ @@ -809,7 +814,7 @@ void jsm_check_queue_flow_control(struct jsm_channel *ch) /* HWFLOW */ if (ch->ch_c_cflag & CRTSCTS) { if(!(ch->ch_flags & CH_RECEIVER_OFF)) { - ch->ch_bd->bd_ops->disable_receiver(ch); + bd_ops->disable_receiver(ch); ch->ch_flags |= (CH_RECEIVER_OFF); jsm_printk(READ, INFO, &ch->ch_bd->pci_dev, "Internal queue hit hilevel mark (%d)! Turning off interrupts.\n", @@ -819,7 +824,7 @@ void jsm_check_queue_flow_control(struct jsm_channel *ch) /* SWFLOW */ else if (ch->ch_c_iflag & IXOFF) { if (ch->ch_stops_sent <= MAX_STOPS_SENT) { - ch->ch_bd->bd_ops->send_stop_character(ch); + bd_ops->send_stop_character(ch); ch->ch_stops_sent++; jsm_printk(READ, INFO, &ch->ch_bd->pci_dev, "Sending stop char! Times sent: %x\n", ch->ch_stops_sent); @@ -846,7 +851,7 @@ void jsm_check_queue_flow_control(struct jsm_channel *ch) /* HWFLOW */ if (ch->ch_c_cflag & CRTSCTS) { if (ch->ch_flags & CH_RECEIVER_OFF) { - ch->ch_bd->bd_ops->enable_receiver(ch); + bd_ops->enable_receiver(ch); ch->ch_flags &= ~(CH_RECEIVER_OFF); jsm_printk(READ, INFO, &ch->ch_bd->pci_dev, "Internal queue hit lowlevel mark (%d)! 
Turning on interrupts.\n", @@ -856,7 +861,7 @@ void jsm_check_queue_flow_control(struct jsm_channel *ch) /* SWFLOW */ else if (ch->ch_c_iflag & IXOFF && ch->ch_stops_sent) { ch->ch_stops_sent = 0; - ch->ch_bd->bd_ops->send_start_character(ch); + bd_ops->send_start_character(ch); jsm_printk(READ, INFO, &ch->ch_bd->pci_dev, "Sending start char!\n"); } } diff --git a/trunk/drivers/usb/input/hid-input.c b/trunk/drivers/usb/input/hid-input.c index cb0d80f49252..25bc85f8ce39 100644 --- a/trunk/drivers/usb/input/hid-input.c +++ b/trunk/drivers/usb/input/hid-input.c @@ -510,7 +510,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel case 0x025: map_key_clear(KEY_TV); break; case 0x026: map_key_clear(KEY_MENU); break; case 0x031: map_key_clear(KEY_AUDIO); break; - case 0x032: map_key_clear(KEY_SUBTITLE); break; + case 0x032: map_key_clear(KEY_TEXT); break; case 0x033: map_key_clear(KEY_LAST); break; case 0x047: map_key_clear(KEY_MP3); break; case 0x048: map_key_clear(KEY_DVD); break; diff --git a/trunk/drivers/video/Kconfig b/trunk/drivers/video/Kconfig index 22e9d696fdd2..f87c0171f4ec 100644 --- a/trunk/drivers/video/Kconfig +++ b/trunk/drivers/video/Kconfig @@ -904,18 +904,6 @@ config FB_MATROX_MULTIHEAD There is no need for enabling 'Matrox multihead support' if you have only one Matrox card in the box. -config FB_RADEON_OLD - tristate "ATI Radeon display support (Old driver)" - depends on FB && PCI - select FB_CFB_FILLRECT - select FB_CFB_COPYAREA - select FB_CFB_IMAGEBLIT - select FB_MACMODES if PPC - help - Choose this option if you want to use an ATI Radeon graphics card as - a framebuffer device. There are both PCI and AGP versions. You - don't need to choose this to run the Radeon in plain VGA mode. - config FB_RADEON tristate "ATI Radeon display support" depends on FB && PCI diff --git a/trunk/drivers/video/Makefile b/trunk/drivers/video/Makefile index cb90218515ac..23de3b2c7856 100644 --- a/trunk/drivers/video/Makefile +++ b/trunk/drivers/video/Makefile @@ -39,7 +39,6 @@ obj-$(CONFIG_FB_KYRO) += kyro/ obj-$(CONFIG_FB_SAVAGE) += savage/ obj-$(CONFIG_FB_GEODE) += geode/ obj-$(CONFIG_FB_I810) += vgastate.o -obj-$(CONFIG_FB_RADEON_OLD) += radeonfb.o obj-$(CONFIG_FB_NEOMAGIC) += neofb.o vgastate.o obj-$(CONFIG_FB_VIRGE) += virgefb.o obj-$(CONFIG_FB_3DFX) += tdfxfb.o diff --git a/trunk/drivers/video/backlight/Kconfig b/trunk/drivers/video/backlight/Kconfig index 9d996f2c10d5..b895eaaa73fd 100644 --- a/trunk/drivers/video/backlight/Kconfig +++ b/trunk/drivers/video/backlight/Kconfig @@ -43,11 +43,11 @@ config LCD_DEVICE default y config BACKLIGHT_CORGI - tristate "Sharp Corgi Backlight Driver (SL-C7xx Series)" + tristate "Sharp Corgi Backlight Driver (SL Series)" depends on BACKLIGHT_DEVICE && PXA_SHARPSL default y help - If you have a Sharp Zaurus SL-C7xx, say y to enable the + If you have a Sharp Zaurus SL-C7xx, SL-Cxx00 or SL-6000x say y to enable the backlight driver. 
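
The jsm_tty.c hunks above cache frequently chased pointers (port->info->tty->termios, ch->ch_bd->bd_ops) in a local once per function instead of re-deriving them at every use; it is a readability change with no behaviour difference. A tiny sketch of the same idea with invented types:

/* Invented types, for illustration only. */
struct ex_ops {
	void (*disable_receiver)(void *ch);
	void (*enable_receiver)(void *ch);
};

struct ex_channel {
	struct ex_board {
		struct ex_ops *bd_ops;
	} *ch_bd;
	unsigned int ch_flags;
};

#define EX_RECEIVER_OFF	0x01

static void ex_flow_control(struct ex_channel *ch, int stop)
{
	struct ex_ops *ops = ch->ch_bd->bd_ops;	/* chase the chain once */

	if (stop && !(ch->ch_flags & EX_RECEIVER_OFF)) {
		ops->disable_receiver(ch);
		ch->ch_flags |= EX_RECEIVER_OFF;
	} else if (!stop && (ch->ch_flags & EX_RECEIVER_OFF)) {
		ops->enable_receiver(ch);
		ch->ch_flags &= ~EX_RECEIVER_OFF;
	}
}
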
config BACKLIGHT_HP680 diff --git a/trunk/drivers/video/backlight/backlight.c b/trunk/drivers/video/backlight/backlight.c index 151fda8dded0..334b1db1bd7c 100644 --- a/trunk/drivers/video/backlight/backlight.c +++ b/trunk/drivers/video/backlight/backlight.c @@ -16,14 +16,12 @@ static ssize_t backlight_show_power(struct class_device *cdev, char *buf) { - int rc; + int rc = -ENXIO; struct backlight_device *bd = to_backlight_device(cdev); down(&bd->sem); - if (likely(bd->props && bd->props->get_power)) - rc = sprintf(buf, "%d\n", bd->props->get_power(bd)); - else - rc = -ENXIO; + if (likely(bd->props)) + rc = sprintf(buf, "%d\n", bd->props->power); up(&bd->sem); return rc; @@ -31,7 +29,7 @@ static ssize_t backlight_show_power(struct class_device *cdev, char *buf) static ssize_t backlight_store_power(struct class_device *cdev, const char *buf, size_t count) { - int rc, power; + int rc = -ENXIO, power; char *endp; struct backlight_device *bd = to_backlight_device(cdev); @@ -40,12 +38,13 @@ static ssize_t backlight_store_power(struct class_device *cdev, const char *buf, return -EINVAL; down(&bd->sem); - if (likely(bd->props && bd->props->set_power)) { + if (likely(bd->props)) { pr_debug("backlight: set power to %d\n", power); - bd->props->set_power(bd, power); + bd->props->power = power; + if (likely(bd->props->update_status)) + bd->props->update_status(bd); rc = count; - } else - rc = -ENXIO; + } up(&bd->sem); return rc; @@ -53,14 +52,12 @@ static ssize_t backlight_store_power(struct class_device *cdev, const char *buf, static ssize_t backlight_show_brightness(struct class_device *cdev, char *buf) { - int rc; + int rc = -ENXIO; struct backlight_device *bd = to_backlight_device(cdev); down(&bd->sem); - if (likely(bd->props && bd->props->get_brightness)) - rc = sprintf(buf, "%d\n", bd->props->get_brightness(bd)); - else - rc = -ENXIO; + if (likely(bd->props)) + rc = sprintf(buf, "%d\n", bd->props->brightness); up(&bd->sem); return rc; @@ -68,7 +65,7 @@ static ssize_t backlight_show_brightness(struct class_device *cdev, char *buf) static ssize_t backlight_store_brightness(struct class_device *cdev, const char *buf, size_t count) { - int rc, brightness; + int rc = -ENXIO, brightness; char *endp; struct backlight_device *bd = to_backlight_device(cdev); @@ -77,12 +74,18 @@ static ssize_t backlight_store_brightness(struct class_device *cdev, const char return -EINVAL; down(&bd->sem); - if (likely(bd->props && bd->props->set_brightness)) { - pr_debug("backlight: set brightness to %d\n", brightness); - bd->props->set_brightness(bd, brightness); - rc = count; - } else - rc = -ENXIO; + if (likely(bd->props)) { + if (brightness > bd->props->max_brightness) + rc = -EINVAL; + else { + pr_debug("backlight: set brightness to %d\n", + brightness); + bd->props->brightness = brightness; + if (likely(bd->props->update_status)) + bd->props->update_status(bd); + rc = count; + } + } up(&bd->sem); return rc; @@ -90,14 +93,26 @@ static ssize_t backlight_store_brightness(struct class_device *cdev, const char static ssize_t backlight_show_max_brightness(struct class_device *cdev, char *buf) { - int rc; + int rc = -ENXIO; struct backlight_device *bd = to_backlight_device(cdev); down(&bd->sem); if (likely(bd->props)) rc = sprintf(buf, "%d\n", bd->props->max_brightness); - else - rc = -ENXIO; + up(&bd->sem); + + return rc; +} + +static ssize_t backlight_show_actual_brightness(struct class_device *cdev, + char *buf) +{ + int rc = -ENXIO; + struct backlight_device *bd = to_backlight_device(cdev); + + down(&bd->sem); + if 
(likely(bd->props && bd->props->get_brightness)) + rc = sprintf(buf, "%d\n", bd->props->get_brightness(bd)); up(&bd->sem); return rc; @@ -123,7 +138,10 @@ static struct class backlight_class = { static struct class_device_attribute bl_class_device_attributes[] = { DECLARE_ATTR(power, 0644, backlight_show_power, backlight_store_power), - DECLARE_ATTR(brightness, 0644, backlight_show_brightness, backlight_store_brightness), + DECLARE_ATTR(brightness, 0644, backlight_show_brightness, + backlight_store_brightness), + DECLARE_ATTR(actual_brightness, 0444, backlight_show_actual_brightness, + NULL), DECLARE_ATTR(max_brightness, 0444, backlight_show_max_brightness, NULL), }; @@ -144,8 +162,12 @@ static int fb_notifier_callback(struct notifier_block *self, bd = container_of(self, struct backlight_device, fb_notif); down(&bd->sem); if (bd->props) - if (!bd->props->check_fb || bd->props->check_fb(evdata->info)) - bd->props->set_power(bd, *(int *)evdata->data); + if (!bd->props->check_fb || + bd->props->check_fb(evdata->info)) { + bd->props->fb_blank = *(int *)evdata->data; + if (likely(bd->props && bd->props->update_status)) + bd->props->update_status(bd); + } up(&bd->sem); return 0; } @@ -231,6 +253,12 @@ void backlight_device_unregister(struct backlight_device *bd) &bl_class_device_attributes[i]); down(&bd->sem); + if (likely(bd->props && bd->props->update_status)) { + bd->props->brightness = 0; + bd->props->power = 0; + bd->props->update_status(bd); + } + bd->props = NULL; up(&bd->sem); diff --git a/trunk/drivers/video/backlight/corgi_bl.c b/trunk/drivers/video/backlight/corgi_bl.c index d0aaf450e8c7..2ebbfd95145f 100644 --- a/trunk/drivers/video/backlight/corgi_bl.c +++ b/trunk/drivers/video/backlight/corgi_bl.c @@ -1,7 +1,7 @@ /* - * Backlight Driver for Sharp Corgi + * Backlight Driver for Sharp Zaurus Handhelds (various models) * - * Copyright (c) 2004-2005 Richard Purdie + * Copyright (c) 2004-2006 Richard Purdie * * Based on Sharp's 2.4 Backlight Driver * @@ -15,80 +15,63 @@ #include #include #include -#include +#include #include #include - #include #include -#define CORGI_DEFAULT_INTENSITY 0x1f -#define CORGI_LIMIT_MASK 0x0b - -static int corgibl_powermode = FB_BLANK_UNBLANK; -static int current_intensity = 0; -static int corgibl_limit = 0; -static void (*corgibl_mach_set_intensity)(int intensity); -static spinlock_t bl_lock = SPIN_LOCK_UNLOCKED; +static int corgibl_intensity; +static DEFINE_MUTEX(bl_mutex); static struct backlight_properties corgibl_data; +static struct backlight_device *corgi_backlight_device; +static struct corgibl_machinfo *bl_machinfo; -static void corgibl_send_intensity(int intensity) +static unsigned long corgibl_flags; +#define CORGIBL_SUSPENDED 0x01 +#define CORGIBL_BATTLOW 0x02 + +static int corgibl_send_intensity(struct backlight_device *bd) { - unsigned long flags; void (*corgi_kick_batt)(void); + int intensity = bd->props->brightness; - if (corgibl_powermode != FB_BLANK_UNBLANK) { + if (bd->props->power != FB_BLANK_UNBLANK) intensity = 0; - } else { - if (corgibl_limit) - intensity &= CORGI_LIMIT_MASK; - } - - spin_lock_irqsave(&bl_lock, flags); + if (bd->props->fb_blank != FB_BLANK_UNBLANK) + intensity = 0; + if (corgibl_flags & CORGIBL_SUSPENDED) + intensity = 0; + if (corgibl_flags & CORGIBL_BATTLOW) + intensity &= bl_machinfo->limit_mask; - corgibl_mach_set_intensity(intensity); + mutex_lock(&bl_mutex); + bl_machinfo->set_bl_intensity(intensity); + mutex_unlock(&bl_mutex); - spin_unlock_irqrestore(&bl_lock, flags); + corgibl_intensity = intensity; 
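
The backlight.c hunks above change the class interface: the sysfs files now store the requested values in bd->props->power and bd->props->brightness and call a single update_status() hook, get_brightness() only backs the new read-only actual_brightness attribute, and fb blanking arrives through props->fb_blank. The corgi and hp680 conversions that follow use exactly this shape; as a distilled sketch, here is a minimal driver against the new interface (all names and the brightness range are hypothetical):

#include <linux/backlight.h>
#include <linux/fb.h>
#include <linux/module.h>

#define EXAMPLE_MAX_BRIGHTNESS	100

static int example_current;		/* last value actually sent to the hardware */

/* One callback works out what the hardware should show from the shared
 * properties the class code maintains (brightness, power, fb_blank). */
static int example_update_status(struct backlight_device *bd)
{
	int intensity = bd->props->brightness;

	if (bd->props->power != FB_BLANK_UNBLANK)
		intensity = 0;
	if (bd->props->fb_blank != FB_BLANK_UNBLANK)
		intensity = 0;

	/* example_hw_set(intensity);  -- stand-in for the real hardware poke */
	example_current = intensity;
	return 0;
}

static int example_get_brightness(struct backlight_device *bd)
{
	return example_current;		/* reported via actual_brightness */
}

static struct backlight_properties example_bl_data = {
	.owner		= THIS_MODULE,
	.max_brightness	= EXAMPLE_MAX_BRIGHTNESS,
	.get_brightness	= example_get_brightness,
	.update_status	= example_update_status,
};

Registration still goes through backlight_device_register("example-bl", NULL, &example_bl_data); only the callback set carried in the properties has changed.
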
corgi_kick_batt = symbol_get(sharpsl_battery_kick); if (corgi_kick_batt) { corgi_kick_batt(); symbol_put(sharpsl_battery_kick); } -} -static void corgibl_blank(int blank) -{ - switch(blank) { - - case FB_BLANK_NORMAL: - case FB_BLANK_VSYNC_SUSPEND: - case FB_BLANK_HSYNC_SUSPEND: - case FB_BLANK_POWERDOWN: - if (corgibl_powermode == FB_BLANK_UNBLANK) { - corgibl_send_intensity(0); - corgibl_powermode = blank; - } - break; - case FB_BLANK_UNBLANK: - if (corgibl_powermode != FB_BLANK_UNBLANK) { - corgibl_powermode = blank; - corgibl_send_intensity(current_intensity); - } - break; - } + return 0; } #ifdef CONFIG_PM static int corgibl_suspend(struct platform_device *dev, pm_message_t state) { - corgibl_blank(FB_BLANK_POWERDOWN); + corgibl_flags |= CORGIBL_SUSPENDED; + corgibl_send_intensity(corgi_backlight_device); return 0; } static int corgibl_resume(struct platform_device *dev) { - corgibl_blank(FB_BLANK_UNBLANK); + corgibl_flags &= ~CORGIBL_SUSPENDED; + corgibl_send_intensity(corgi_backlight_device); return 0; } #else @@ -96,68 +79,55 @@ static int corgibl_resume(struct platform_device *dev) #define corgibl_resume NULL #endif - -static int corgibl_set_power(struct backlight_device *bd, int state) -{ - corgibl_blank(state); - return 0; -} - -static int corgibl_get_power(struct backlight_device *bd) +static int corgibl_get_intensity(struct backlight_device *bd) { - return corgibl_powermode; + return corgibl_intensity; } -static int corgibl_set_intensity(struct backlight_device *bd, int intensity) +static int corgibl_set_intensity(struct backlight_device *bd) { - if (intensity > corgibl_data.max_brightness) - intensity = corgibl_data.max_brightness; - corgibl_send_intensity(intensity); - current_intensity=intensity; + corgibl_send_intensity(corgi_backlight_device); return 0; } -static int corgibl_get_intensity(struct backlight_device *bd) -{ - return current_intensity; -} - /* * Called when the battery is low to limit the backlight intensity. * If limit==0 clear any limit, otherwise limit the intensity */ void corgibl_limit_intensity(int limit) { - corgibl_limit = (limit ? 
-	corgibl_send_intensity(current_intensity);
+	if (limit)
+		corgibl_flags |= CORGIBL_BATTLOW;
+	else
+		corgibl_flags &= ~CORGIBL_BATTLOW;
+	corgibl_send_intensity(corgi_backlight_device);
 }
 EXPORT_SYMBOL(corgibl_limit_intensity);
 
 static struct backlight_properties corgibl_data = {
-	.owner		= THIS_MODULE,
-	.get_power	= corgibl_get_power,
-	.set_power	= corgibl_set_power,
+	.owner		= THIS_MODULE,
 	.get_brightness = corgibl_get_intensity,
-	.set_brightness = corgibl_set_intensity,
+	.update_status  = corgibl_set_intensity,
 };
 
-static struct backlight_device *corgi_backlight_device;
-
 static int __init corgibl_probe(struct platform_device *pdev)
 {
 	struct corgibl_machinfo *machinfo = pdev->dev.platform_data;
 
+	bl_machinfo = machinfo;
 	corgibl_data.max_brightness = machinfo->max_intensity;
-	corgibl_mach_set_intensity = machinfo->set_bl_intensity;
+	if (!machinfo->limit_mask)
+		machinfo->limit_mask = -1;
 
 	corgi_backlight_device = backlight_device_register ("corgi-bl", NULL, &corgibl_data);
 	if (IS_ERR (corgi_backlight_device))
 		return PTR_ERR (corgi_backlight_device);
 
-	corgibl_set_intensity(NULL, CORGI_DEFAULT_INTENSITY);
-	corgibl_limit_intensity(0);
+	corgibl_data.power = FB_BLANK_UNBLANK;
+	corgibl_data.brightness = machinfo->default_intensity;
+	corgibl_send_intensity(corgi_backlight_device);
 
 	printk("Corgi Backlight Driver Initialized.\n");
 	return 0;
@@ -167,8 +137,6 @@ static int corgibl_remove(struct platform_device *dev)
 {
 	backlight_device_unregister(corgi_backlight_device);
 
-	corgibl_set_intensity(NULL, 0);
-
 	printk("Corgi Backlight Driver Unloaded\n");
 	return 0;
 }
diff --git a/trunk/drivers/video/backlight/hp680_bl.c b/trunk/drivers/video/backlight/hp680_bl.c
index 95da4c9ed1f1..a71e984c93d4 100644
--- a/trunk/drivers/video/backlight/hp680_bl.c
+++ b/trunk/drivers/video/backlight/hp680_bl.c
@@ -13,7 +13,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
@@ -25,66 +25,58 @@
 #define HP680_MAX_INTENSITY 255
 #define HP680_DEFAULT_INTENSITY 10
 
-static int hp680bl_powermode = FB_BLANK_UNBLANK;
+static int hp680bl_suspended;
 static int current_intensity = 0;
 static spinlock_t bl_lock = SPIN_LOCK_UNLOCKED;
+static struct backlight_device *hp680_backlight_device;
 
-static void hp680bl_send_intensity(int intensity)
+static void hp680bl_send_intensity(struct backlight_device *bd)
 {
 	unsigned long flags;
+	u16 v;
+	int intensity = bd->props->brightness;
 
-	if (hp680bl_powermode != FB_BLANK_UNBLANK)
+	if (bd->props->power != FB_BLANK_UNBLANK)
+		intensity = 0;
+	if (bd->props->fb_blank != FB_BLANK_UNBLANK)
+		intensity = 0;
+	if (hp680bl_suspended)
 		intensity = 0;
 
 	spin_lock_irqsave(&bl_lock, flags);
-	sh_dac_output(255-(u8)intensity, DAC_LCD_BRIGHTNESS);
+	if (intensity && current_intensity == 0) {
+		sh_dac_enable(DAC_LCD_BRIGHTNESS);
+		v = inw(HD64461_GPBDR);
+		v &= ~HD64461_GPBDR_LCDOFF;
+		outw(v, HD64461_GPBDR);
+		sh_dac_output(255-(u8)intensity, DAC_LCD_BRIGHTNESS);
+	} else if (intensity == 0 && current_intensity != 0) {
+		sh_dac_output(255-(u8)intensity, DAC_LCD_BRIGHTNESS);
+		sh_dac_disable(DAC_LCD_BRIGHTNESS);
+		v = inw(HD64461_GPBDR);
+		v |= HD64461_GPBDR_LCDOFF;
+		outw(v, HD64461_GPBDR);
+	} else if (intensity) {
+		sh_dac_output(255-(u8)intensity, DAC_LCD_BRIGHTNESS);
+	}
 	spin_unlock_irqrestore(&bl_lock, flags);
-}
 
-static void hp680bl_blank(int blank)
-{
-	u16 v;
-
-	switch(blank) {
-
-	case FB_BLANK_NORMAL:
-	case FB_BLANK_VSYNC_SUSPEND:
-	case FB_BLANK_HSYNC_SUSPEND:
-	case FB_BLANK_POWERDOWN:
-		if (hp680bl_powermode == FB_BLANK_UNBLANK) {
-			hp680bl_send_intensity(0);
-			hp680bl_powermode = blank;
-			sh_dac_disable(DAC_LCD_BRIGHTNESS);
-			v = inw(HD64461_GPBDR);
-			v |= HD64461_GPBDR_LCDOFF;
-			outw(v, HD64461_GPBDR);
-		}
-		break;
-	case FB_BLANK_UNBLANK:
-		if (hp680bl_powermode != FB_BLANK_UNBLANK) {
-			sh_dac_enable(DAC_LCD_BRIGHTNESS);
-			v = inw(HD64461_GPBDR);
-			v &= ~HD64461_GPBDR_LCDOFF;
-			outw(v, HD64461_GPBDR);
-			hp680bl_powermode = blank;
-			hp680bl_send_intensity(current_intensity);
-		}
-		break;
-	}
+	current_intensity = intensity;
 }
 
+
 #ifdef CONFIG_PM
-static int hp680bl_suspend(struct device *dev, pm_message_t state, u32 level)
+static int hp680bl_suspend(struct platform_device *dev, pm_message_t state)
 {
-	if (level == SUSPEND_POWER_DOWN)
-		hp680bl_blank(FB_BLANK_POWERDOWN);
+	hp680bl_suspended = 1;
+	hp680bl_send_intensity(hp680_backlight_device);
 	return 0;
 }
 
-static int hp680bl_resume(struct device *dev, u32 level)
+static int hp680bl_resume(struct platform_device *dev)
 {
-	if (level == RESUME_POWER_ON)
-		hp680bl_blank(FB_BLANK_UNBLANK);
+	hp680bl_suspended = 0;
+	hp680bl_send_intensity(hp680_backlight_device);
 	return 0;
 }
 #else
@@ -92,24 +84,9 @@ static int hp680bl_resume(struct device *dev, u32 level)
 #define hp680bl_resume NULL
 #endif
 
-
-static int hp680bl_set_power(struct backlight_device *bd, int state)
+static int hp680bl_set_intensity(struct backlight_device *bd)
 {
-	hp680bl_blank(state);
-	return 0;
-}
-
-static int hp680bl_get_power(struct backlight_device *bd)
-{
-	return hp680bl_powermode;
-}
-
-static int hp680bl_set_intensity(struct backlight_device *bd, int intensity)
-{
-	if (intensity > HP680_MAX_INTENSITY)
-		intensity = HP680_MAX_INTENSITY;
-	hp680bl_send_intensity(intensity);
-	current_intensity = intensity;
+	hp680bl_send_intensity(bd);
 	return 0;
 }
 
@@ -120,65 +97,67 @@ static int hp680bl_get_intensity(struct backlight_device *bd)
 
 static struct backlight_properties hp680bl_data = {
 	.owner		= THIS_MODULE,
-	.get_power	= hp680bl_get_power,
-	.set_power	= hp680bl_set_power,
 	.max_brightness	= HP680_MAX_INTENSITY,
 	.get_brightness = hp680bl_get_intensity,
-	.set_brightness = hp680bl_set_intensity,
+	.update_status  = hp680bl_set_intensity,
 };
 
-static struct backlight_device *hp680_backlight_device;
-
-static int __init hp680bl_probe(struct device *dev)
+static int __init hp680bl_probe(struct platform_device *dev)
 {
 	hp680_backlight_device = backlight_device_register ("hp680-bl", NULL, &hp680bl_data);
 	if (IS_ERR (hp680_backlight_device))
 		return PTR_ERR (hp680_backlight_device);
 
-	hp680bl_set_intensity(NULL, HP680_DEFAULT_INTENSITY);
+	hp680_backlight_device->props->brightness = HP680_DEFAULT_INTENSITY;
+	hp680bl_send_intensity(hp680_backlight_device);
 
 	return 0;
 }
 
-static int hp680bl_remove(struct device *dev)
+static int hp680bl_remove(struct platform_device *dev)
 {
 	backlight_device_unregister(hp680_backlight_device);
 
 	return 0;
 }
 
-static struct device_driver hp680bl_driver = {
-	.name		= "hp680-bl",
-	.bus		= &platform_bus_type,
+static struct platform_driver hp680bl_driver = {
 	.probe		= hp680bl_probe,
 	.remove		= hp680bl_remove,
 	.suspend	= hp680bl_suspend,
 	.resume		= hp680bl_resume,
+	.driver		= {
+		.name	= "hp680-bl",
+	},
 };
 
-static struct platform_device hp680bl_device = {
-	.name	= "hp680-bl",
-	.id	= -1,
-};
+static struct platform_device *hp680bl_device;
 
 static int __init hp680bl_init(void)
 {
 	int ret;
 
-	ret=driver_register(&hp680bl_driver);
+	ret = platform_driver_register(&hp680bl_driver);
 	if (!ret) {
-		ret = platform_device_register(&hp680bl_device);
-		if (ret)
-			driver_unregister(&hp680bl_driver);
+		hp680bl_device = platform_device_alloc("hp680-bl", -1);
+		if (!hp680bl_device)
+			return -ENOMEM;
+
+		ret = platform_device_add(hp680bl_device);
+
+		if (ret) {
+			platform_device_put(hp680bl_device);
+			platform_driver_unregister(&hp680bl_driver);
+		}
 	}
 	return ret;
 }
 
 static void __exit hp680bl_exit(void)
 {
-	platform_device_unregister(&hp680bl_device);
-	driver_unregister(&hp680bl_driver);
+	platform_device_unregister(hp680bl_device);
+	platform_driver_unregister(&hp680bl_driver);
 }
 
 module_init(hp680bl_init);
diff --git a/trunk/drivers/video/cfbimgblt.c b/trunk/drivers/video/cfbimgblt.c
index 910e2338a27e..8ba6152db2fd 100644
--- a/trunk/drivers/video/cfbimgblt.c
+++ b/trunk/drivers/video/cfbimgblt.c
@@ -169,7 +169,7 @@ static inline void slow_imageblit(const struct fb_image *image, struct fb_info *
 
 		while (j--) {
 			l--;
-			color = (*s & 1 << (FB_BIT_NR(l))) ? fgcolor : bgcolor;
+			color = (*s & (1 << l)) ? fgcolor : bgcolor;
 			val |= FB_SHIFT_HIGH(color, shift);
 
 			/* Did the bitshift spill bits to the next long? */
diff --git a/trunk/drivers/video/console/fbcon.c b/trunk/drivers/video/console/fbcon.c
index 041d06987861..ca020719d20b 100644
--- a/trunk/drivers/video/console/fbcon.c
+++ b/trunk/drivers/video/console/fbcon.c
@@ -466,7 +466,7 @@ static int __init fb_console_setup(char *this_opt)
 	int i, j;
 
 	if (!this_opt || !*this_opt)
-		return 0;
+		return 1;
 
 	while ((options = strsep(&this_opt, ",")) != NULL) {
 		if (!strncmp(options, "font:", 5))
@@ -481,10 +481,10 @@ static int __init fb_console_setup(char *this_opt)
 				options++;
 			}
 			if (*options != ',')
-				return 0;
+				return 1;
 			options++;
 		} else
-			return 0;
+			return 1;
 	}
 
 	if (!strncmp(options, "map:", 4)) {
@@ -496,7 +496,7 @@ static int __init fb_console_setup(char *this_opt)
 			con2fb_map_boot[i] = (options[j++]-'0') % FB_MAX;
 		}
 
-		return 0;
+		return 1;
 	}
 
 	if (!strncmp(options, "vc:", 3)) {
@@ -518,7 +518,7 @@ static int __init fb_console_setup(char *this_opt)
 			rotate = 0;
 		}
 	}
-	return 0;
+	return 1;
 }
 
 __setup("fbcon=", fb_console_setup);
@@ -1142,6 +1142,7 @@ static void fbcon_init(struct vc_data *vc, int init)
 		set_blitting_type(vc, info);
 	}
 
+	ops->p = &fb_display[fg_console];
 }
 
 static void fbcon_deinit(struct vc_data *vc)
diff --git a/trunk/drivers/video/console/sticore.c b/trunk/drivers/video/console/sticore.c
index d6041e781aca..74ac2acaf72c 100644
--- a/trunk/drivers/video/console/sticore.c
+++ b/trunk/drivers/video/console/sticore.c
@@ -275,7 +275,7 @@ static int __init sti_setup(char *str)
 	if (str)
 		strlcpy (default_sti_path, str, sizeof (default_sti_path));
 
-	return 0;
+	return 1;
 }
 
 /* Assuming the machine has multiple STI consoles (=graphic cards) which
@@ -321,7 +321,7 @@ static int __init sti_font_setup(char *str)
 		i++;
 	}
 
-	return 0;
+	return 1;
 }
 
 /* The optional linux kernel parameter "sti_font" defines which font
diff --git a/trunk/drivers/video/fbmem.c b/trunk/drivers/video/fbmem.c
index b1a8dca76430..944855b3e4af 100644
--- a/trunk/drivers/video/fbmem.c
+++ b/trunk/drivers/video/fbmem.c
@@ -1588,7 +1588,7 @@ static int __init video_setup(char *options)
 		}
 	}
 
-	return 0;
+	return 1;
 }
 __setup("video=", video_setup);
 #endif
diff --git a/trunk/drivers/video/pxafb.c b/trunk/drivers/video/pxafb.c
index 53ad61f1038c..809fc5eefc15 100644
--- a/trunk/drivers/video/pxafb.c
+++ b/trunk/drivers/video/pxafb.c
@@ -232,9 +232,9 @@ static int pxafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
 	if (var->yres < MIN_YRES)
 		var->yres = MIN_YRES;
 	if (var->xres > fbi->max_xres)
-		var->xres = fbi->max_xres;
+		return -EINVAL;
 	if (var->yres > fbi->max_yres)
-		var->yres = fbi->max_yres;
+		return -EINVAL;
 	var->xres_virtual =
 		max(var->xres_virtual, var->xres);
 	var->yres_virtual =
@@ -781,7 +781,7 @@ static void pxafb_disable_controller(struct pxafb_info *fbi)
 
 	LCCR0 &= ~LCCR0_LDM;	/* Enable LCD Disable Done Interrupt */
 	LCCR0 |= LCCR0_DIS;	/* Disable LCD Controller */
 
-	schedule_timeout(20 * HZ / 1000);
+	schedule_timeout(200 * HZ / 1000);
 	remove_wait_queue(&fbi->ctrlr_wait, &wait);
 
 	/* disable LCD controller clock */
@@ -1274,7 +1274,7 @@ int __init pxafb_probe(struct platform_device *dev)
 	struct pxafb_mach_info *inf;
 	int ret;
 
-	dev_dbg(dev, "pxafb_probe\n");
+	dev_dbg(&dev->dev, "pxafb_probe\n");
 
 	inf = dev->dev.platform_data;
 	ret = -ENOMEM;
diff --git a/trunk/drivers/video/radeonfb.c b/trunk/drivers/video/radeonfb.c
deleted file mode 100644
index afb6c2ead599..000000000000
--- a/trunk/drivers/video/radeonfb.c
+++ /dev/null
@@ -1,3167 +0,0 @@
-/*
- *	drivers/video/radeonfb.c
- *	framebuffer driver for ATI Radeon chipset video boards
- *
- *	Copyright 2000	Ani Joshi
- *
- *
- *	ChangeLog:
- *	2000-08-03	initial version 0.0.1
- *	2000-09-10	more bug fixes, public release 0.0.5
- *	2001-02-19	mode bug fixes, 0.0.7
- *	2001-07-05	fixed scrolling issues, engine initialization,
- *			and minor mode tweaking, 0.0.9
- *	2001-09-07	Radeon VE support, Nick Kurshev
- *			blanking, pan_display, and cmap fixes, 0.1.0
- *	2001-10-10	Radeon 7500 and 8500 support, and experimental
- *			flat panel support, 0.1.1
- *	2001-11-17	Radeon M6 (ppc) support, Daniel Berlin, 0.1.2
- *	2001-11-18	DFP fixes, Kevin Hendricks, 0.1.3
- *	2001-11-29	more cmap, backlight fixes, Benjamin Herrenschmidt
- *	2002-01-18	DFP panel detection via BIOS, Michael Clark, 0.1.4
- *	2002-06-02	console switching, mode set fixes, accel fixes
- *	2002-06-03	MTRR support, Peter Horton, 0.1.5
- *	2002-09-21	rv250, r300, m9 initial support,
- *			added mirror option, 0.1.6
- *
- *	Special thanks to ATI DevRel team for their hardware donations.
- *
- */
-
-
-#define RADEON_VERSION	"0.1.6"
-
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include
-#include
-#if defined(__powerpc__)
-#include
-#include
-#include "macmodes.h"
-
-#ifdef CONFIG_NVRAM
-#include
-#endif
-
-#ifdef CONFIG_PMAC_BACKLIGHT
-#include
-#endif
-
-#ifdef CONFIG_BOOTX_TEXT
-#include
-#endif
-
-#ifdef CONFIG_ADB_PMU
-#include
-#include
-#endif
-
-#endif /* __powerpc__ */
-
-#ifdef CONFIG_MTRR
-#include
-#endif
-
-#include