.
+Many of the "generic" devices like HPET or IO APIC have the ce4100
+name in their compatible property because they first appeared in this
+SoC.
+
+The CPU node
+------------
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "intel,ce4100";
+ reg = <0>;
+ lapic = <&lapic0>;
+ };
+
+The reg property describes the CPU number. The lapic property points to
+the local APIC timer.
+
+The SoC node
+------------
+
+This node describes the in-core peripherals. Required property:
+ compatible = "intel,ce4100-cp";
+
+The PCI node
+------------
+This node describes the PCI bus on the SoC. Its property should be
+ compatible = "intel,ce4100-pci", "pci";
+
+If the OS is using the IO-APIC for interrupt routing, then the reported
+interrupt numbers for devices are no longer valid. In order to obtain the
+correct interrupt number, the child node which represents the device has
+to contain an interrupt property. Besides the interrupt property, it has
+to contain at least a reg property holding the PCI bus address and a
+compatible property according to "PCI Bus Binding Revision 2.1".
diff --git a/trunk/Documentation/devicetree/bindings/x86/interrupt.txt b/trunk/Documentation/devicetree/bindings/x86/interrupt.txt
new file mode 100644
index 000000000000..7d19f494f19a
--- /dev/null
+++ b/trunk/Documentation/devicetree/bindings/x86/interrupt.txt
@@ -0,0 +1,26 @@
+Interrupt chips
+---------------
+
+* Intel I/O Advanced Programmable Interrupt Controller (IO APIC)
+
+ Required properties:
+ --------------------
+ compatible = "intel,ce4100-ioapic";
+ #interrupt-cells = <2>;
+
+ Device's interrupt property:
+
+ interrupts = <P S>;
+
+ The first number (P) represents the interrupt pin which is wired to the
+ IO APIC. The second number (S) represents the sense of interrupt which
+ should be configured and can be one of:
+ 0 - Edge Rising
+ 1 - Level Low
+ 2 - Level High
+ 3 - Edge Falling
+
+* Local APIC
+ Required property:
+
+ compatible = "intel,ce4100-lapic";
diff --git a/trunk/Documentation/devicetree/bindings/x86/timer.txt b/trunk/Documentation/devicetree/bindings/x86/timer.txt
new file mode 100644
index 000000000000..c688af58e3bd
--- /dev/null
+++ b/trunk/Documentation/devicetree/bindings/x86/timer.txt
@@ -0,0 +1,6 @@
+Timers
+------
+
+* High Precision Event Timer (HPET)
+ Required property:
+ compatible = "intel,ce4100-hpet";
diff --git a/trunk/Documentation/devicetree/booting-without-of.txt b/trunk/Documentation/devicetree/booting-without-of.txt
index 28b1c9d3d351..55fd2623445b 100644
--- a/trunk/Documentation/devicetree/booting-without-of.txt
+++ b/trunk/Documentation/devicetree/booting-without-of.txt
@@ -13,6 +13,7 @@ Table of Contents
I - Introduction
1) Entry point for arch/powerpc
+ 2) Entry point for arch/x86
II - The DT block format
1) Header
@@ -225,6 +226,25 @@ it with special cases.
cannot support both configurations with Book E and configurations
with classic Powerpc architectures.
+2) Entry point for arch/x86
+-------------------------------
+
+ There is one single 32bit entry point to the kernel at code32_start,
+ the decompressor (the real mode entry point goes to the same 32bit
+ entry point once it has switched into protected mode). That entry point
+ supports one calling convention, which is documented in
+ Documentation/x86/boot.txt.
+ The physical pointer to the device-tree block (defined in chapter II)
+ is passed via setup_data, which requires at least boot protocol 2.09.
+ The type field is defined as
+
+ #define SETUP_DTB 2
+
+ This device-tree is used as an extension to the "boot page". As such it
+ is not parsed for data which is already covered by the boot page. This
+ includes memory size, reserved ranges, command line arguments or initrd
+ address. It simply holds information which cannot be retrieved otherwise,
+ such as interrupt routing or a list of devices behind an I2C bus.
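For illustration only, a minimal C sketch of how a boot loader might append the
device-tree blob as a setup_data entry of type SETUP_DTB; the struct layout
matches the x86 boot protocol, while append_dtb(), dtb_blob and dtb_size are
invented names standing in for whatever the loader actually uses:

    #include <stdint.h>
    #include <string.h>

    #define SETUP_DTB 2                     /* setup_data type for a DTB */

    /* Layout of one setup_data entry as defined by the x86 boot protocol. */
    struct setup_data {
            uint64_t next;                  /* phys address of next entry, or 0 */
            uint32_t type;                  /* SETUP_DTB for a flattened device tree */
            uint32_t len;                   /* length of data[] in bytes */
            uint8_t  data[];                /* the DTB itself follows here */
    };

    /* dtb_blob/dtb_size are assumed to come from the boot loader itself. */
    static void append_dtb(struct setup_data *sd, uint64_t prev_next,
                           const void *dtb_blob, uint32_t dtb_size)
    {
            sd->next = prev_next;           /* chain into the existing setup_data list */
            sd->type = SETUP_DTB;
            sd->len  = dtb_size;
            memcpy(sd->data, dtb_blob, dtb_size);
            /* boot_params.hdr.setup_data must then point at this entry
             * (by physical address) before jumping to code32_start. */
    }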
II - The DT block format
========================
diff --git a/trunk/Documentation/hwmon/jc42 b/trunk/Documentation/hwmon/jc42
index 0e76ef12e4c6..a22ecf48f255 100644
--- a/trunk/Documentation/hwmon/jc42
+++ b/trunk/Documentation/hwmon/jc42
@@ -51,7 +51,8 @@ Supported chips:
* JEDEC JC 42.4 compliant temperature sensor chips
Prefix: 'jc42'
Addresses scanned: I2C 0x18 - 0x1f
- Datasheet: -
+ Datasheet:
+ http://www.jedec.org/sites/default/files/docs/4_01_04R19.pdf
Author:
Guenter Roeck
@@ -60,7 +61,11 @@ Author:
Description
-----------
-This driver implements support for JEDEC JC 42.4 compliant temperature sensors.
+This driver implements support for JEDEC JC 42.4 compliant temperature sensors,
+which are used on many DDR3 memory modules for mobile devices and servers. Some
+systems use the sensor to prevent memory overheating by automatically throttling
+the memory controller.
+
The driver auto-detects the chips listed above, but can be manually instantiated
to support other JC 42.4 compliant chips.
@@ -81,15 +86,19 @@ limits. The chip supports only a single register to configure the hysteresis,
which applies to all limits. This register can be written by writing into
temp1_crit_hyst. Other hysteresis attributes are read-only.
+If the BIOS has configured the sensor for automatic temperature management, it
+is likely that it has locked the registers, i.e., that the temperature limits
+cannot be changed.
+
Sysfs entries
-------------
temp1_input Temperature (RO)
-temp1_min Minimum temperature (RW)
-temp1_max Maximum temperature (RW)
-temp1_crit Critical high temperature (RW)
+temp1_min Minimum temperature (RO or RW)
+temp1_max Maximum temperature (RO or RW)
+temp1_crit Critical high temperature (RO or RW)
-temp1_crit_hyst Critical hysteresis temperature (RW)
+temp1_crit_hyst Critical hysteresis temperature (RO or RW)
temp1_max_hyst Maximum hysteresis temperature (RO)
temp1_min_alarm Temperature low alarm
diff --git a/trunk/Documentation/hwmon/k10temp b/trunk/Documentation/hwmon/k10temp
index 6526eee525a6..d2b56a4fd1f5 100644
--- a/trunk/Documentation/hwmon/k10temp
+++ b/trunk/Documentation/hwmon/k10temp
@@ -9,6 +9,8 @@ Supported chips:
Socket S1G3: Athlon II, Sempron, Turion II
* AMD Family 11h processors:
Socket S1G2: Athlon (X2), Sempron (X2), Turion X2 (Ultra)
+* AMD Family 12h processors: "Llano"
+* AMD Family 14h processors: "Brazos" (C/E/G-Series)
Prefix: 'k10temp'
Addresses scanned: PCI space
@@ -17,10 +19,14 @@ Supported chips:
http://support.amd.com/us/Processor_TechDocs/31116.pdf
BIOS and Kernel Developer's Guide (BKDG) for AMD Family 11h Processors:
http://support.amd.com/us/Processor_TechDocs/41256.pdf
+ BIOS and Kernel Developer's Guide (BKDG) for AMD Family 14h Models 00h-0Fh Processors:
+ http://support.amd.com/us/Processor_TechDocs/43170.pdf
Revision Guide for AMD Family 10h Processors:
http://support.amd.com/us/Processor_TechDocs/41322.pdf
Revision Guide for AMD Family 11h Processors:
http://support.amd.com/us/Processor_TechDocs/41788.pdf
+ Revision Guide for AMD Family 14h Models 00h-0Fh Processors:
+ http://support.amd.com/us/Processor_TechDocs/47534.pdf
AMD Family 11h Processor Power and Thermal Data Sheet for Notebooks:
http://support.amd.com/us/Processor_TechDocs/43373.pdf
AMD Family 10h Server and Workstation Processor Power and Thermal Data Sheet:
@@ -34,7 +40,7 @@ Description
-----------
This driver permits reading of the internal temperature sensor of AMD
-Family 10h and 11h processors.
+Family 10h/11h/12h/14h processors.
All these processors have a sensor, but on those for Socket F or AM2+,
the sensor may return inconsistent values (erratum 319). The driver
diff --git a/trunk/Documentation/kernel-parameters.txt b/trunk/Documentation/kernel-parameters.txt
index 89835a4766a6..738c6fda3fb0 100644
--- a/trunk/Documentation/kernel-parameters.txt
+++ b/trunk/Documentation/kernel-parameters.txt
@@ -144,6 +144,11 @@ a fixed number of characters. This limit depends on the architecture
and is between 256 and 4096 characters. It is defined in the file
./include/asm/setup.h as COMMAND_LINE_SIZE.
+Finally, the [KMG] suffix is commonly described after a number of kernel
+parameter values. These 'K', 'M', and 'G' letters represent the _binary_
+multipliers 'Kilo', 'Mega', and 'Giga', equalling 2^10, 2^20, and 2^30
+bytes respectively. Such letter suffixes can also be entirely omitted.
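As a brief worked example (illustrative only, using the crashkernel parameter
described below): "crashkernel=64M@16M" requests 64 x 2^20 = 67108864 bytes
reserved at offset 16 x 2^20 = 16777216 bytes, and "crashkernel=65536K"
describes the same size.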
+
acpi= [HW,ACPI,X86]
Advanced Configuration and Power Interface
@@ -545,16 +550,20 @@ and is between 256 and 4096 characters. It is defined in the file
Format:
<first_slot>,<last_slot>,<port>,<enum_bit>[,<debug>]
- crashkernel=nn[KMG]@ss[KMG]
- [KNL] Reserve a chunk of physical memory to
- hold a kernel to switch to with kexec on panic.
+ crashkernel=size[KMG][@offset[KMG]]
+ [KNL] Using kexec, Linux can switch to a 'crash kernel'
+ upon panic. This parameter reserves the physical
+ memory region [offset, offset + size] for that kernel
+ image. If '@offset' is omitted, then a suitable offset
+ is selected automatically. Check
+ Documentation/kdump/kdump.txt for further details.
crashkernel=range1:size1[,range2:size2,...][@offset]
[KNL] Same as above, but depends on the memory
in the running system. The syntax of range is
start-[end] where start and end are both
a memory unit (amount[KMG]). See also
- Documentation/kdump/kdump.txt for a example.
+ Documentation/kdump/kdump.txt for an example.
cs89x0_dma= [HW,NET]
Format:
@@ -1262,10 +1271,9 @@ and is between 256 and 4096 characters. It is defined in the file
6 (KERN_INFO) informational
7 (KERN_DEBUG) debug-level messages
- log_buf_len=n Sets the size of the printk ring buffer, in bytes.
- Format: { n | nk | nM }
- n must be a power of two. The default size
- is set in the kernel config file.
+ log_buf_len=n[KMG] Sets the size of the printk ring buffer,
+ in bytes. n must be a power of two. The default
+ size is set in the kernel config file.
logo.nologo [FB] Disables display of the built-in Linux logo.
This may be used to provide more screen space for
@@ -2436,6 +2444,10 @@ and is between 256 and 4096 characters. It is defined in the file
<Hz>: poll all this frequency
0: no polling (default)
+ threadirqs [KNL]
+ Force threading of all interrupt handlers except those
+ explicitly marked IRQF_NO_THREAD.
+
topology= [S390]
Format: {off | on}
Specify if the kernel should make use of the cpu
diff --git a/trunk/Documentation/keys-request-key.txt b/trunk/Documentation/keys-request-key.txt
index 09b55e461740..69686ad12c66 100644
--- a/trunk/Documentation/keys-request-key.txt
+++ b/trunk/Documentation/keys-request-key.txt
@@ -127,14 +127,15 @@ This is because process A's keyrings can't simply be attached to
of them, and (b) it requires the same UID/GID/Groups all the way through.
-======================
-NEGATIVE INSTANTIATION
-======================
+====================================
+NEGATIVE INSTANTIATION AND REJECTION
+====================================
Rather than instantiating a key, it is possible for the possessor of an
authorisation key to negatively instantiate a key that's under construction.
This is a short duration placeholder that causes any attempt at re-requesting
-the key whilst it exists to fail with error ENOKEY.
+the key whilst it exists to fail with error ENOKEY if negated or the specified
+error if rejected.
This is provided to prevent excessive repeated spawning of /sbin/request-key
processes for a key that will never be obtainable.
diff --git a/trunk/Documentation/keys.txt b/trunk/Documentation/keys.txt
index e4dbbdb1bd96..6523a9e6f293 100644
--- a/trunk/Documentation/keys.txt
+++ b/trunk/Documentation/keys.txt
@@ -637,6 +637,9 @@ The keyctl syscall functions are:
long keyctl(KEYCTL_INSTANTIATE, key_serial_t key,
const void *payload, size_t plen,
key_serial_t keyring);
+ long keyctl(KEYCTL_INSTANTIATE_IOV, key_serial_t key,
+ const struct iovec *payload_iov, unsigned ioc,
+ key_serial_t keyring);
If the kernel calls back to userspace to complete the instantiation of a
key, userspace should use this call to supply data for the key before the
@@ -652,11 +655,16 @@ The keyctl syscall functions are:
The payload and plen arguments describe the payload data as for add_key().
+ The payload_iov and ioc arguments describe the payload data in an iovec
+ array instead of a single buffer.
+
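As an illustration only (not part of the documented interface description), a
userspace sketch of the iovec form using the raw keyctl syscall;
instantiate_from_parts() and its arguments are invented, and
KEYCTL_INSTANTIATE_IOV is assumed to be available from up-to-date
<linux/keyctl.h> headers:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <sys/uio.h>
    #include <unistd.h>
    #include <linux/keyctl.h>

    typedef int32_t key_serial_t;           /* as in <keyutils.h> */

    /* Instantiate the key under construction from two payload fragments,
     * without first copying them into one contiguous buffer. */
    static long instantiate_from_parts(key_serial_t key, key_serial_t keyring,
                                       const void *hdr, size_t hdr_len,
                                       const void *body, size_t body_len)
    {
            struct iovec iov[2] = {
                    { .iov_base = (void *)hdr,  .iov_len = hdr_len  },
                    { .iov_base = (void *)body, .iov_len = body_len },
            };

            return syscall(__NR_keyctl, KEYCTL_INSTANTIATE_IOV,
                           key, iov, 2, keyring);
    }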
(*) Negatively instantiate a partially constructed key.
long keyctl(KEYCTL_NEGATE, key_serial_t key,
unsigned timeout, key_serial_t keyring);
+ long keyctl(KEYCTL_REJECT, key_serial_t key,
+ unsigned timeout, unsigned error, key_serial_t keyring);
If the kernel calls back to userspace to complete the instantiation of a
key, userspace should use this call to mark the key as negative before the
@@ -669,6 +677,10 @@ The keyctl syscall functions are:
that keyring, however all the constraints applying in KEYCTL_LINK apply in
this case too.
+ If the key is rejected, future searches for it will return the specified
+ error code until the rejected key expires. Negating the key is the same
+ as rejecting the key with ENOKEY as the error code.
+
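Similarly, a hedged sketch of rejecting a key that can never be satisfied;
reject_key(), the 60 second timeout and the choice of EKEYREJECTED are
illustrative assumptions, and KEYCTL_REJECT again requires updated
<linux/keyctl.h> headers:

    #include <errno.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/keyctl.h>

    /* Reject the key for 60 seconds; searches during that window then fail
     * with EKEYREJECTED rather than the generic ENOKEY. */
    static long reject_key(int32_t key, int32_t keyring)
    {
            return syscall(__NR_keyctl, KEYCTL_REJECT, key,
                           60 /* timeout in seconds */, EKEYREJECTED, keyring);
    }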
(*) Set the default request-key destination keyring.
@@ -1062,6 +1074,13 @@ The structure has a number of fields, some of which are mandatory:
viable.
+ (*) int (*vet_description)(const char *description);
+
+ This optional method is called to vet a key description. If the key type
+ doesn't approve of the key description, it may return an error, otherwise
+ it should return 0.
+
+
(*) int (*instantiate)(struct key *key, const void *data, size_t datalen);
This method is called to attach a payload to a key during construction.
@@ -1231,10 +1250,11 @@ hand the request off to (perhaps a path held in placed in another key by, for
example, the KDE desktop manager).
The program (or whatever it calls) should finish construction of the key by
-calling KEYCTL_INSTANTIATE, which also permits it to cache the key in one of
-the keyrings (probably the session ring) before returning. Alternatively, the
-key can be marked as negative with KEYCTL_NEGATE; this also permits the key to
-be cached in one of the keyrings.
+calling KEYCTL_INSTANTIATE or KEYCTL_INSTANTIATE_IOV, which also permits it to
+cache the key in one of the keyrings (probably the session ring) before
+returning. Alternatively, the key can be marked as negative with KEYCTL_NEGATE
+or KEYCTL_REJECT; this also permits the key to be cached in one of the
+keyrings.
If it returns with the key remaining in the unconstructed state, the key will
be marked as being negative, it will be added to the session keyring, and an
diff --git a/trunk/Documentation/memory-barriers.txt b/trunk/Documentation/memory-barriers.txt
index 631ad2f1b229..f0d3a8026a56 100644
--- a/trunk/Documentation/memory-barriers.txt
+++ b/trunk/Documentation/memory-barriers.txt
@@ -21,6 +21,7 @@ Contents:
- SMP barrier pairing.
- Examples of memory barrier sequences.
- Read memory barriers vs load speculation.
+ - Transitivity
(*) Explicit kernel barriers.
@@ -959,6 +960,63 @@ the speculation will be cancelled and the value reloaded:
retrieved : : +-------+
+TRANSITIVITY
+------------
+
+Transitivity is a deeply intuitive notion about ordering that is not
+always provided by real computer systems. The following example
+demonstrates transitivity (also called "cumulativity"):
+
+    CPU 1                   CPU 2                   CPU 3
+    ======================= ======================= =======================
+            { X = 0, Y = 0 }
+    STORE X=1               LOAD X                  STORE Y=1
+                            <general barrier>       <general barrier>
+                            LOAD Y                  LOAD X
+
+Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
+This indicates that CPU 2's load from X in some sense follows CPU 1's
+store to X and that CPU 2's load from Y in some sense preceded CPU 3's
+store to Y. The question is then "Can CPU 3's load from X return 0?"
+
+Because CPU 2's load from X in some sense came after CPU 1's store, it
+is natural to expect that CPU 3's load from X must therefore return 1.
+This expectation is an example of transitivity: if a load executing on
+CPU A follows a load from the same variable executing on CPU B, then
+CPU A's load must either return the same value that CPU B's load did,
+or must return some later value.
+
+In the Linux kernel, use of general memory barriers guarantees
+transitivity. Therefore, in the above example, if CPU 2's load from X
+returns 1 and its load from Y returns 0, then CPU 3's load from X must
+also return 1.
+
+However, transitivity is -not- guaranteed for read or write barriers.
+For example, suppose that CPU 2's general barrier in the above example
+is changed to a read barrier as shown below:
+
+    CPU 1                   CPU 2                   CPU 3
+    ======================= ======================= =======================
+            { X = 0, Y = 0 }
+    STORE X=1               LOAD X                  STORE Y=1
+                            <read barrier>          <general barrier>
+                            LOAD Y                  LOAD X
+
+This substitution destroys transitivity: in this example, it is perfectly
+legal for CPU 2's load from X to return 1, its load from Y to return 0,
+and CPU 3's load from X to return 0.
+
+The key point is that although CPU 2's read barrier orders its pair
+of loads, it does not guarantee to order CPU 1's store. Therefore, if
+this example runs on a system where CPUs 1 and 2 share a store buffer
+or a level of cache, CPU 2 might have early access to CPU 1's writes.
+General barriers are therefore required to ensure that all CPUs agree
+on the combined order of CPU 1's and CPU 2's accesses.
+
+To reiterate, if your code requires transitivity, use general barriers
+throughout.
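Purely as a sketch (the functions cpu1()/cpu2()/cpu3() and the local variables
are invented for illustration, not taken from the document), the three-CPU
example above might be written in kernel C as follows; the point is that
cpu2() needs a general barrier, smp_mb(), for the guarantee to hold:

    int X, Y;

    void cpu1(void)
    {
            X = 1;                  /* STORE X=1 */
    }

    void cpu2(void)
    {
            int x, y;

            x = X;                  /* LOAD X (assume it observes 1) */
            smp_mb();               /* general barrier; smp_rmb() here would
                                     * not be enough to provide transitivity */
            y = Y;                  /* LOAD Y (assume it observes 0) */
    }

    void cpu3(void)
    {
            int x;

            Y = 1;                  /* STORE Y=1 */
            smp_mb();               /* general barrier */
            x = X;                  /* LOAD X: guaranteed to observe 1 in the
                                     * scenario above, but only because cpu2()
                                     * used a general barrier */
    }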
+
+
========================
EXPLICIT KERNEL BARRIERS
========================
diff --git a/trunk/Documentation/networking/00-INDEX b/trunk/Documentation/networking/00-INDEX
index fe5c099b8fc8..4edd78dfb362 100644
--- a/trunk/Documentation/networking/00-INDEX
+++ b/trunk/Documentation/networking/00-INDEX
@@ -40,8 +40,6 @@ decnet.txt
- info on using the DECnet networking layer in Linux.
depca.txt
- the Digital DEPCA/EtherWORKS DE1?? and DE2?? LANCE Ethernet driver
-dgrs.txt
- - the Digi International RightSwitch SE-X Ethernet driver
dmfe.txt
- info on the Davicom DM9102(A)/DM9132/DM9801 fast ethernet driver.
e100.txt
@@ -50,8 +48,6 @@ e1000.txt
- info on Intel's E1000 line of gigabit ethernet boards
eql.txt
- serial IP load balancing
-ethertap.txt
- - the Ethertap user space packet reception and transmission driver
ewrk3.txt
- the Digital EtherWORKS 3 DE203/4/5 Ethernet driver
filter.txt
@@ -104,8 +100,6 @@ tuntap.txt
- TUN/TAP device driver, allowing user space Rx/Tx of packets.
vortex.txt
- info on using 3Com Vortex (3c590, 3c592, 3c595, 3c597) Ethernet cards.
-wavelan.txt
- - AT&T GIS (nee NCR) WaveLAN card: An Ethernet-like radio transceiver
x25.txt
- general info on X.25 development.
x25-iface.txt
diff --git a/trunk/Documentation/networking/Makefile b/trunk/Documentation/networking/Makefile
index 5aba7a33aeeb..24c308dd3fd1 100644
--- a/trunk/Documentation/networking/Makefile
+++ b/trunk/Documentation/networking/Makefile
@@ -4,6 +4,8 @@ obj- := dummy.o
# List of programs to build
hostprogs-y := ifenslave
+HOSTCFLAGS_ifenslave.o += -I$(objtree)/usr/include
+
# Tell kbuild to always build the programs
always := $(hostprogs-y)
diff --git a/trunk/Documentation/networking/dns_resolver.txt b/trunk/Documentation/networking/dns_resolver.txt
index aefd1e681804..04ca06325b08 100644
--- a/trunk/Documentation/networking/dns_resolver.txt
+++ b/trunk/Documentation/networking/dns_resolver.txt
@@ -61,7 +61,6 @@ before the more general line given above as the first match is the one taken.
create dns_resolver foo:* * /usr/sbin/dns.foo %k
-
=====
USAGE
=====
@@ -104,6 +103,14 @@ implemented in the module can be called after doing:
returned also.
+===============================
+READING DNS KEYS FROM USERSPACE
+===============================
+
+Keys of dns_resolver type can be read from userspace using keyctl_read() or
+"keyctl read/print/pipe".
+
+
=========
MECHANISM
=========
diff --git a/trunk/Documentation/power/devices.txt b/trunk/Documentation/power/devices.txt
index 57080cd74575..f023ba6bba62 100644
--- a/trunk/Documentation/power/devices.txt
+++ b/trunk/Documentation/power/devices.txt
@@ -1,6 +1,6 @@
Device Power Management
-Copyright (c) 2010 Rafael J. Wysocki , Novell Inc.
+Copyright (c) 2010-2011 Rafael J. Wysocki , Novell Inc.
Copyright (c) 2010 Alan Stern
@@ -159,18 +159,18 @@ matter, and the kernel is responsible for keeping track of it. By contrast,
whether or not a wakeup-capable device should issue wakeup events is a policy
decision, and it is managed by user space through a sysfs attribute: the
power/wakeup file. User space can write the strings "enabled" or "disabled" to
-set or clear the should_wakeup flag, respectively. Reads from the file will
-return the corresponding string if can_wakeup is true, but if can_wakeup is
-false then reads will return an empty string, to indicate that the device
-doesn't support wakeup events. (But even though the file appears empty, writes
-will still affect the should_wakeup flag.)
+set or clear the "should_wakeup" flag, respectively. This file is only present
+for wakeup-capable devices (i.e. devices whose "can_wakeup" flags are set)
+and is created (or removed) by device_set_wakeup_capable(). Reads from the
+file will return the corresponding string.
The device_may_wakeup() routine returns true only if both flags are set.
-Drivers should check this routine when putting devices in a low-power state
-during a system sleep transition, to see whether or not to enable the devices'
-wakeup mechanisms. However for runtime power management, wakeup events should
-be enabled whenever the device and driver both support them, regardless of the
-should_wakeup flag.
+This information is used by subsystems, like the PCI bus type code, to see
+whether or not to enable the devices' wakeup mechanisms. If device wakeup
+mechanisms are enabled or disabled directly by drivers, they also should use
+device_may_wakeup() to decide what to do during a system sleep transition.
+However for runtime power management, wakeup events should be enabled whenever
+the device and driver both support them, regardless of the should_wakeup flag.
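A hypothetical driver-side sketch of this rule (struct foo, foo_suspend() and
the IRQ handling are invented for illustration): during system sleep the
user-space policy is consulted via device_may_wakeup(), while runtime suspend
arms wakeup whenever the hardware supports it:

    struct foo { int irq; };                /* hypothetical driver data */

    /* System sleep: honor the user-space wakeup policy. */
    static int foo_suspend(struct device *dev)
    {
            struct foo *priv = dev_get_drvdata(dev);

            if (device_may_wakeup(dev))
                    enable_irq_wake(priv->irq);
            return 0;
    }

    /* Runtime suspend: arm wakeup whenever the device supports it,
     * regardless of the should_wakeup flag. */
    static int foo_runtime_suspend(struct device *dev)
    {
            struct foo *priv = dev_get_drvdata(dev);

            if (device_can_wakeup(dev))
                    enable_irq_wake(priv->irq);
            return 0;
    }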
/sys/devices/.../power/control files
@@ -249,23 +249,18 @@ various phases always run after tasks have been frozen and before they are
unfrozen. Furthermore, the *_noirq phases run at a time when IRQ handlers have
been disabled (except for those marked with the IRQ_WAKEUP flag).
-Most phases use bus, type, and class callbacks (that is, methods defined in
-dev->bus->pm, dev->type->pm, and dev->class->pm). The prepare and complete
-phases are exceptions; they use only bus callbacks. When multiple callbacks
-are used in a phase, they are invoked in the order: <class, type, bus> during
-power-down transitions and in the opposite order during power-up transitions.
-For example, during the suspend phase the PM core invokes
-
- dev->class->pm.suspend(dev);
- dev->type->pm.suspend(dev);
- dev->bus->pm.suspend(dev);
-
-before moving on to the next device, whereas during the resume phase the core
-invokes
-
- dev->bus->pm.resume(dev);
- dev->type->pm.resume(dev);
- dev->class->pm.resume(dev);
+All phases use bus, type, or class callbacks (that is, methods defined in
+dev->bus->pm, dev->type->pm, or dev->class->pm). These callbacks are mutually
+exclusive, so if the device type provides a struct dev_pm_ops object pointed to
+by its pm field (i.e. both dev->type and dev->type->pm are defined), the
+callbacks included in that object (i.e. dev->type->pm) will be used. Otherwise,
+if the class provides a struct dev_pm_ops object pointed to by its pm field
+(i.e. both dev->class and dev->class->pm are defined), the PM core will use the
+callbacks from that object (i.e. dev->class->pm). Finally, if the pm fields of
+both the device type and class objects are NULL (or those objects do not exist),
+the callbacks provided by the bus (that is, the callbacks from dev->bus->pm)
+will be used (this allows device types to override callbacks provided by bus
+types or classes if necessary).
These callbacks may in turn invoke device- or driver-specific methods stored in
dev->driver->pm, but they don't have to.
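In simplified pseudo-C (a sketch of the selection rule just described, not the
PM core's actual implementation; pm_ops_for() is an invented name):

    /* Simplified: which struct dev_pm_ops does the PM core use? */
    static const struct dev_pm_ops *pm_ops_for(struct device *dev)
    {
            if (dev->type && dev->type->pm)
                    return dev->type->pm;           /* device type wins */
            if (dev->class && dev->class->pm)
                    return dev->class->pm;          /* then the class */
            if (dev->bus && dev->bus->pm)
                    return dev->bus->pm;            /* finally the bus */
            return NULL;                            /* no subsystem callbacks */
    }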
@@ -507,6 +502,49 @@ routines. Nevertheless, different callback pointers are used in case there is a
situation where it actually matters.
+Device Power Domains
+--------------------
+Sometimes devices share reference clocks or other power resources. In those
+cases it generally is not possible to put devices into low-power states
+individually. Instead, a set of devices sharing a power resource can be put
+into a low-power state together at the same time by turning off the shared
+power resource. Of course, they also need to be put into the full-power state
+together, by turning the shared power resource on. A set of devices with this
+property is often referred to as a power domain.
+
+Support for power domains is provided through the pwr_domain field of struct
+device. This field is a pointer to an object of type struct dev_power_domain,
+defined in include/linux/pm.h, providing a set of power management callbacks
+analogous to the subsystem-level and device driver callbacks that are executed
+for the given device during all power transitions, in addition to the respective
+subsystem-level callbacks. Specifically, the power domain "suspend" callbacks
+(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
+executed after the analogous subsystem-level callbacks, while the power domain
+"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
+etc.) are executed before the analogous subsystem-level callbacks. Error codes
+returned by the "suspend" and "resume" power domain callbacks are ignored.
+
+Power domain ->runtime_idle() callback is executed before the subsystem-level
+->runtime_idle() callback and the result returned by it is not ignored. Namely,
+if it returns an error code, the subsystem-level ->runtime_idle() callback will
+not be called and the helper function rpm_idle() executing it will return that
+error code. This mechanism is intended to help platforms where saving device state
+is a time consuming operation and should only be carried out if all devices
+in the power domain are idle, before turning off the shared power resource(s).
+Namely, the power domain ->runtime_idle() callback may return an error code until
+the pm_runtime_idle() helper (or its asynchronous version) has been called for
+all devices in the power domain (it is recommended that the returned error code
+be -EBUSY in those cases), preventing the subsystem-level ->runtime_idle()
+callback from being run prematurely.
+
+The support for device power domains is only relevant to platforms needing to
+use the same subsystem-level (e.g. platform bus type) and device driver power
+management callbacks in many different power domain configurations and wanting
+to avoid incorporating the support for power domains into the subsystem-level
+callbacks. The other platforms need not implement it or take it into account
+in any way.
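A hypothetical platform-side sketch of wiring this up (all foo_* names are
invented; the struct dev_power_domain type and the pwr_domain field are the
ones described above):

    /* Shared callbacks for all devices fed by the same clock/regulator. */
    static int foo_domain_runtime_suspend(struct device *dev)
    {
            /* Runs after the subsystem-level ->runtime_suspend().
             * Gate the power resource shared by the whole domain here. */
            return 0;
    }

    static int foo_domain_runtime_resume(struct device *dev)
    {
            /* Runs before the subsystem-level ->runtime_resume(). */
            return 0;
    }

    static struct dev_power_domain foo_pwr_domain = {
            .ops = {
                    .runtime_suspend = foo_domain_runtime_suspend,
                    .runtime_resume  = foo_domain_runtime_resume,
                    /* system sleep callbacks (.suspend, .resume, ...) as needed */
            },
    };

    /* at device registration time: pdev->dev.pwr_domain = &foo_pwr_domain; */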
+
+
System Devices
--------------
System devices (sysdevs) follow a slightly different API, which can be found in
diff --git a/trunk/Documentation/power/runtime_pm.txt b/trunk/Documentation/power/runtime_pm.txt
index ffe55ffa540a..654097b130b4 100644
--- a/trunk/Documentation/power/runtime_pm.txt
+++ b/trunk/Documentation/power/runtime_pm.txt
@@ -1,6 +1,6 @@
Run-time Power Management Framework for I/O Devices
-(C) 2009 Rafael J. Wysocki , Novell Inc.
+(C) 2009-2011 Rafael J. Wysocki , Novell Inc.
(C) 2010 Alan Stern
1. Introduction
@@ -44,11 +44,12 @@ struct dev_pm_ops {
};
The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks are
-executed by the PM core for either the bus type, or device type (if the bus
-type's callback is not defined), or device class (if the bus type's and device
-type's callbacks are not defined) of given device. The bus type, device type
-and device class callbacks are referred to as subsystem-level callbacks in what
-follows.
+executed by the PM core for either the device type, or the class (if the device
+type's struct dev_pm_ops object does not exist), or the bus type (if the
+device type's and class' struct dev_pm_ops objects do not exist) of the given
+device (this allows device types to override callbacks provided by bus types or
+classes if necessary). The bus type, device type and class callbacks are
+referred to as subsystem-level callbacks in what follows.
By default, the callbacks are always invoked in process context with interrupts
enabled. However, subsystems can use the pm_runtime_irq_safe() helper function
diff --git a/trunk/Documentation/power/states.txt b/trunk/Documentation/power/states.txt
index 34800cc521bf..4416b28630df 100644
--- a/trunk/Documentation/power/states.txt
+++ b/trunk/Documentation/power/states.txt
@@ -62,12 +62,12 @@ setup via another operating system for it to use. Despite the
inconvenience, this method requires minimal work by the kernel, since
the firmware will also handle restoring memory contents on resume.
-For suspend-to-disk, a mechanism called swsusp called 'swsusp' (Swap
-Suspend) is used to write memory contents to free swap space.
-swsusp has some restrictive requirements, but should work in most
-cases. Some, albeit outdated, documentation can be found in
-Documentation/power/swsusp.txt. Alternatively, userspace can do most
-of the actual suspend to disk work, see userland-swsusp.txt.
+For suspend-to-disk, a mechanism called 'swsusp' (Swap Suspend) is used
+to write memory contents to free swap space. swsusp has some restrictive
+requirements, but should work in most cases. Some, albeit outdated,
+documentation can be found in Documentation/power/swsusp.txt.
+Alternatively, userspace can do most of the actual suspend to disk work,
+see userland-swsusp.txt.
Once memory state is written to disk, the system may either enter a
low-power state (like ACPI S4), or it may simply power down. Powering
diff --git a/trunk/Documentation/rtc.txt b/trunk/Documentation/rtc.txt
index 9104c1062084..250160469d83 100644
--- a/trunk/Documentation/rtc.txt
+++ b/trunk/Documentation/rtc.txt
@@ -178,38 +178,29 @@ RTC class framework, but can't be supported by the older driver.
setting the longer alarm time and enabling its IRQ using a single
request (using the same model as EFI firmware).
- * RTC_UIE_ON, RTC_UIE_OFF ... if the RTC offers IRQs, it probably
- also offers update IRQs whenever the "seconds" counter changes.
- If needed, the RTC framework can emulate this mechanism.
+ * RTC_UIE_ON, RTC_UIE_OFF ... if the RTC offers IRQs, the RTC framework
+ will emulate this mechanism.
- * RTC_PIE_ON, RTC_PIE_OFF, RTC_IRQP_SET, RTC_IRQP_READ ... another
- feature often accessible with an IRQ line is a periodic IRQ, issued
- at settable frequencies (usually 2^N Hz).
+ * RTC_PIE_ON, RTC_PIE_OFF, RTC_IRQP_SET, RTC_IRQP_READ ... these ioctls
+ are emulated via a kernel hrtimer.
In many cases, the RTC alarm can be a system wake event, used to force
Linux out of a low power sleep state (or hibernation) back to a fully
operational state. For example, a system could enter a deep power saving
state until it's time to execute some scheduled tasks.
-Note that many of these ioctls need not actually be implemented by your
-driver. The common rtc-dev interface handles many of these nicely if your
-driver returns ENOIOCTLCMD. Some common examples:
+Note that many of these ioctls are handled by the common rtc-dev interface.
+Some common examples:
* RTC_RD_TIME, RTC_SET_TIME: the read_time/set_time functions will be
called with appropriate values.
- * RTC_ALM_SET, RTC_ALM_READ, RTC_WKALM_SET, RTC_WKALM_RD: the
- set_alarm/read_alarm functions will be called.
+ * RTC_ALM_SET, RTC_ALM_READ, RTC_WKALM_SET, RTC_WKALM_RD: gets or sets
+ the alarm rtc_timer. May call the set_alarm driver function.
- * RTC_IRQP_SET, RTC_IRQP_READ: the irq_set_freq function will be called
- to set the frequency while the framework will handle the read for you
- since the frequency is stored in the irq_freq member of the rtc_device
- structure. Your driver needs to initialize the irq_freq member during
- init. Make sure you check the requested frequency is in range of your
- hardware in the irq_set_freq function. If it isn't, return -EINVAL. If
- you cannot actually change the frequency, do not define irq_set_freq.
+ * RTC_IRQP_SET, RTC_IRQP_READ: These are emulated by the generic code.
- * RTC_PIE_ON, RTC_PIE_OFF: the irq_set_state function will be called.
+ * RTC_PIE_ON, RTC_PIE_OFF: These are also emulated by the generic code.
If all else fails, check out the rtc-test.c driver!
diff --git a/trunk/Documentation/spinlocks.txt b/trunk/Documentation/spinlocks.txt
index 178c831b907d..2e3c64b1a6a5 100644
--- a/trunk/Documentation/spinlocks.txt
+++ b/trunk/Documentation/spinlocks.txt
@@ -86,7 +86,7 @@ to change the variables it has to get an exclusive write lock.
The routines look the same as above:
- rwlock_t xxx_lock = RW_LOCK_UNLOCKED;
+ rwlock_t xxx_lock = __RW_LOCK_UNLOCKED(xxx_lock);
unsigned long flags;
@@ -196,25 +196,3 @@ appropriate:
For static initialization, use DEFINE_SPINLOCK() / DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED() / __RW_LOCK_UNLOCKED() as appropriate.
-
-SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated. These interfere
-with lockdep state tracking.
-
-Most of the time, you can simply turn:
- static spinlock_t xxx_lock = SPIN_LOCK_UNLOCKED;
-into:
- static DEFINE_SPINLOCK(xxx_lock);
-
-Static structure member variables go from:
-
- struct foo bar {
- .lock = SPIN_LOCK_UNLOCKED;
- };
-
-to:
-
- struct foo bar {
- .lock = __SPIN_LOCK_UNLOCKED(bar.lock);
- };
-
-Declaration of static rw_locks undergo a similar transformation.
diff --git a/trunk/Documentation/trace/ftrace-design.txt b/trunk/Documentation/trace/ftrace-design.txt
index dc52bd442c92..79fcafc7fd64 100644
--- a/trunk/Documentation/trace/ftrace-design.txt
+++ b/trunk/Documentation/trace/ftrace-design.txt
@@ -247,6 +247,13 @@ You need very few things to get the syscalls tracing in an arch.
- Support the TIF_SYSCALL_TRACEPOINT thread flags.
- Put the trace_sys_enter() and trace_sys_exit() tracepoints calls from ptrace
in the ptrace syscalls tracing path.
+- If the system call table on this arch is more complicated than a simple array
+ of addresses of the system calls, implement an arch_syscall_addr to return
+ the address of a given system call.
+- If the symbol names of the system calls do not match the function names on
+ this arch, define ARCH_HAS_SYSCALL_MATCH_SYM_NAME in asm/ftrace.h and
+ implement arch_syscall_match_sym_name with the appropriate logic to return
+ true if the function name corresponds with the symbol name.
- Tag this arch as HAVE_SYSCALL_TRACEPOINTS.
diff --git a/trunk/Documentation/trace/ftrace.txt b/trunk/Documentation/trace/ftrace.txt
index 557c1edeccaf..1ebc24cf9a55 100644
--- a/trunk/Documentation/trace/ftrace.txt
+++ b/trunk/Documentation/trace/ftrace.txt
@@ -80,11 +80,11 @@ of ftrace. Here is a list of some of the key files:
tracers listed here can be configured by
echoing their name into current_tracer.
- tracing_enabled:
+ tracing_on:
- This sets or displays whether the current_tracer
- is activated and tracing or not. Echo 0 into this
- file to disable the tracer or 1 to enable it.
+ This sets or displays whether writing to the trace
+ ring buffer is enabled. Echo 0 into this file to disable
+ the tracer or 1 to enable it.
trace:
@@ -202,10 +202,6 @@ Here is the list of current tracers that may be configured.
to draw a graph of function calls similar to C code
source.
- "sched_switch"
-
- Traces the context switches and wakeups between tasks.
-
"irqsoff"
Traces the areas that disable interrupts and saves
@@ -273,39 +269,6 @@ format, the function name that was traced "path_put" and the
parent function that called this function "path_walk". The
timestamp is the time at which the function was entered.
-The sched_switch tracer also includes tracing of task wakeups
-and context switches.
-
- ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
- ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
- ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
- events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
- kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
- ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
-
-Wake ups are represented by a "+" and the context switches are
-shown as "==>". The format is:
-
- Context switches:
-
- Previous task Next Task
-
- :: ==> ::
-
- Wake ups:
-
- Current task Task waking up
-
- :: + ::
-
-The prio is the internal kernel priority, which is the inverse
-of the priority that is usually displayed by user-space tools.
-Zero represents the highest priority (99). Prio 100 starts the
-"nice" priorities with 100 being equal to nice -20 and 139 being
-nice 19. The prio "140" is reserved for the idle task which is
-the lowest priority thread (pid 0).
-
-
Latency trace format
--------------------
@@ -491,78 +454,10 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
latencies, as described in "Latency
trace format".
-sched_switch
-------------
-
-This tracer simply records schedule switches. Here is an example
-of how to use it.
-
- # echo sched_switch > current_tracer
- # echo 1 > tracing_enabled
- # sleep 1
- # echo 0 > tracing_enabled
- # cat trace
-
-# tracer: sched_switch
-#
-# TASK-PID CPU# TIMESTAMP FUNCTION
-# | | | | |
- bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
- bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
- sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
- bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
- bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
- sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
- bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
- bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
- -0 [00] 240.132589: 0:140:R + 4:115:S
- -0 [00] 240.132591: 0:140:R ==> 4:115:R
- ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
- -0 [00] 240.132598: 0:140:R + 4:115:S
- -0 [00] 240.132599: 0:140:R ==> 4:115:R
- ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
- sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
- [...]
-
-
-As we have discussed previously about this format, the header
-shows the name of the trace and points to the options. The
-"FUNCTION" is a misnomer since here it represents the wake ups
-and context switches.
-
-The sched_switch file only lists the wake ups (represented with
-'+') and context switches ('==>') with the previous task or
-current task first followed by the next task or task waking up.
-The format for both of these is PID:KERNEL-PRIO:TASK-STATE.
-Remember that the KERNEL-PRIO is the inverse of the actual
-priority with zero (0) being the highest priority and the nice
-values starting at 100 (nice -20). Below is a quick chart to map
-the kernel priority to user land priorities.
-
- Kernel Space User Space
- ===============================================================
- 0(high) to 98(low) user RT priority 99(high) to 1(low)
- with SCHED_RR or SCHED_FIFO
- ---------------------------------------------------------------
- 99 sched_priority is not used in scheduling
- decisions(it must be specified as 0)
- ---------------------------------------------------------------
- 100(high) to 139(low) user nice -20(high) to 19(low)
- ---------------------------------------------------------------
- 140 idle task priority
- ---------------------------------------------------------------
-
-The task states are:
-
- R - running : wants to run, may not actually be running
- S - sleep : process is waiting to be woken up (handles signals)
- D - disk sleep (uninterruptible sleep) : process must be woken up
- (ignores signals)
- T - stopped : process suspended
- t - traced : process is being traced (with something like gdb)
- Z - zombie : process waiting to be cleaned up
- X - unknown
-
+ overwrite - This controls what happens when the trace buffer is
+ full. If "1" (default), the oldest events are
+ discarded and overwritten. If "0", then the newest
+ events are discarded.
ftrace_enabled
--------------
@@ -607,10 +502,10 @@ an example:
# echo irqsoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# ls -ltr
[...]
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: irqsoff
#
@@ -715,10 +610,10 @@ is much like the irqsoff tracer.
# echo preemptoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# ls -ltr
[...]
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: preemptoff
#
@@ -863,10 +758,10 @@ tracers.
# echo preemptirqsoff > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# ls -ltr
[...]
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: preemptirqsoff
#
@@ -1026,9 +921,9 @@ Instead of performing an 'ls', we will run 'sleep 1' under
# echo wakeup > current_tracer
# echo latency-format > trace_options
# echo 0 > tracing_max_latency
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# chrt -f 5 sleep 1
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: wakeup
#
@@ -1140,9 +1035,9 @@ ftrace_enabled is set; otherwise this tracer is a nop.
# sysctl kernel.ftrace_enabled=1
# echo function > current_tracer
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# usleep 1
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: function
#
@@ -1180,7 +1075,7 @@ int trace_fd;
[...]
int main(int argc, char *argv[]) {
[...]
- trace_fd = open(tracing_file("tracing_enabled"), O_WRONLY);
+ trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
[...]
if (condition_hit()) {
write(trace_fd, "0", 1);
@@ -1631,9 +1526,9 @@ If I am only interested in sys_nanosleep and hrtimer_interrupt:
# echo sys_nanosleep hrtimer_interrupt \
> set_ftrace_filter
# echo function > current_tracer
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# usleep 1
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: ftrace
#
@@ -1879,9 +1774,9 @@ different. The trace is live.
# echo function > current_tracer
# cat trace_pipe > /tmp/trace.out &
[1] 4153
- # echo 1 > tracing_enabled
+ # echo 1 > tracing_on
# usleep 1
- # echo 0 > tracing_enabled
+ # echo 0 > tracing_on
# cat trace
# tracer: function
#
diff --git a/trunk/Documentation/trace/kprobetrace.txt b/trunk/Documentation/trace/kprobetrace.txt
index 5f77d94598dd..6d27ab8d6e9f 100644
--- a/trunk/Documentation/trace/kprobetrace.txt
+++ b/trunk/Documentation/trace/kprobetrace.txt
@@ -42,11 +42,25 @@ Synopsis of kprobe_events
+|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**)
NAME=FETCHARG : Set NAME as the argument name of FETCHARG.
FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types
- (u8/u16/u32/u64/s8/s16/s32/s64) and string are supported.
+ (u8/u16/u32/u64/s8/s16/s32/s64), "string" and bitfield
+ are supported.
(*) only for return probe.
(**) this is useful for fetching a field of data structures.
+Types
+-----
+Several types are supported for fetch-args. The kprobe tracer will access
+memory according to the given type. The prefixes 's' and 'u' mean that the type
+is signed or unsigned, respectively. Traced arguments are shown in decimal
+(signed) or hex (unsigned).
+String type is a special type, which fetches a "null-terminated" string from
+kernel space. This means it will fail and store NULL if the string container
+has been paged out.
+Bitfield is another special type, which takes 3 parameters, bit-width, bit-
+offset, and container-size (usually 32). The syntax is:
+
+	b<bit-width>@<bit-offset>/<container-size>
+
Per-Probe Event Filtering
-------------------------
diff --git a/trunk/Documentation/workqueue.txt b/trunk/Documentation/workqueue.txt
index 996a27d9b8db..01c513fac40e 100644
--- a/trunk/Documentation/workqueue.txt
+++ b/trunk/Documentation/workqueue.txt
@@ -190,9 +190,9 @@ resources, scheduled and executed.
* Long running CPU intensive workloads which can be better
managed by the system scheduler.
- WQ_FREEZEABLE
+ WQ_FREEZABLE
- A freezeable wq participates in the freeze phase of the system
+ A freezable wq participates in the freeze phase of the system
suspend operations. Work items on the wq are drained and no
new work item starts execution until thawed.
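For example, a driver whose work must not run while the system is frozen might
allocate its workqueue as below; this is a minimal sketch, and the queue name
"foo_io" and the foo_* identifiers are placeholders:

    static struct workqueue_struct *foo_wq;

    static int foo_init(void)
    {
            /* Work queued on foo_wq is drained during the freeze phase of
             * system suspend and does not run again until after thaw. */
            foo_wq = alloc_workqueue("foo_io", WQ_FREEZABLE, 0);
            if (!foo_wq)
                    return -ENOMEM;
            return 0;
    }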
diff --git a/trunk/MAINTAINERS b/trunk/MAINTAINERS
index 5dd6c751e6a6..2a2cddd2f88c 100644
--- a/trunk/MAINTAINERS
+++ b/trunk/MAINTAINERS
@@ -557,6 +557,13 @@ S: Maintained
F: drivers/net/appletalk/
F: net/appletalk/
+ARASAN COMPACT FLASH PATA CONTROLLER
+M: Viresh Kumar
+L: linux-ide@vger.kernel.org
+S: Maintained
+F: include/linux/pata_arasan_cf_data.h
+F: drivers/ata/pata_arasan_cf.c
+
ARC FRAMEBUFFER DRIVER
M: Jaya Kumar
S: Maintained
@@ -885,7 +892,7 @@ S: Supported
ARM/QUALCOMM MSM MACHINE SUPPORT
M: David Brown
-M: Daniel Walker
+M: Daniel Walker
M: Bryan Huntsman
L: linux-arm-msm@vger.kernel.org
F: arch/arm/mach-msm/
@@ -1010,6 +1017,15 @@ L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-s5p*/
+ARM/SAMSUNG MOBILE MACHINE SUPPORT
+M: Kyungmin Park
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: arch/arm/mach-s5pv210/mach-aquila.c
+F: arch/arm/mach-s5pv210/mach-goni.c
+F: arch/arm/mach-exynos4/mach-universal_c210.c
+F: arch/arm/mach-exynos4/mach-nuri.c
+
ARM/SAMSUNG S5P SERIES FIMC SUPPORT
M: Kyungmin Park
M: Sylwester Nawrocki
@@ -1467,6 +1483,7 @@ F: include/net/bluetooth/
BONDING DRIVER
M: Jay Vosburgh
+M: Andy Gospodarek
L: netdev@vger.kernel.org
W: http://sourceforge.net/projects/bonding/
S: Supported
@@ -1692,6 +1709,13 @@ M: Andy Whitcroft
S: Supported
F: scripts/checkpatch.pl
+CHINESE DOCUMENTATION
+M: Harry Wei
+L: xiyoulinuxkernelgroup@googlegroups.com
+L: linux-kernel@zh-kernel.org (moderated for non-subscribers)
+S: Maintained
+F: Documentation/zh_CN/
+
CISCO VIC ETHERNET NIC DRIVER
M: Vasanthy Kolluri
M: Roopa Prabhu
@@ -2026,7 +2050,7 @@ F: Documentation/scsi/dc395x.txt
F: drivers/scsi/dc395x.*
DCCP PROTOCOL
-M: Arnaldo Carvalho de Melo
+M: Gerrit Renker
L: dccp@vger.kernel.org
W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp
S: Maintained
@@ -2873,7 +2897,6 @@ M: Guenter Roeck
L: lm-sensors@lm-sensors.org
W: http://www.lm-sensors.org/
T: quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-hwmon/
-T: quilt kernel.org/pub/linux/kernel/people/groeck/linux-staging/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git
S: Maintained
F: Documentation/hwmon/
@@ -3513,7 +3536,7 @@ F: drivers/hwmon/jc42.c
F: Documentation/hwmon/jc42
JFS FILESYSTEM
-M: Dave Kleikamp
+M: Dave Kleikamp
L: jfs-discussion@lists.sourceforge.net
W: http://jfs.sourceforge.net/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/shaggy/jfs-2.6.git
@@ -4276,10 +4299,7 @@ S: Maintained
F: net/sched/sch_netem.c
NETERION 10GbE DRIVERS (s2io/vxge)
-M: Ramkrishna Vepa
-M: Sivakumar Subramani
-M: Sreenivasa Honnur
-M: Jon Mason
+M: Jon Mason
L: netdev@vger.kernel.org
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/Linux?Anonymous
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/X3100Linux?Anonymous
@@ -5165,6 +5185,7 @@ F: drivers/char/random.c
RAPIDIO SUBSYSTEM
M: Matt Porter
+M: Alexandre Bounine
S: Maintained
F: drivers/rapidio/
@@ -5267,7 +5288,7 @@ S: Maintained
F: drivers/net/wireless/rtl818x/rtl8180/
RTL8187 WIRELESS DRIVER
-M: Herton Ronaldo Krzesinski
+M: Herton Ronaldo Krzesinski
M: Hin-Tak Leung
M: Larry Finger
L: linux-wireless@vger.kernel.org
@@ -6105,7 +6126,7 @@ S: Maintained
F: security/tomoyo/
TOPSTAR LAPTOP EXTRAS DRIVER
-M: Herton Ronaldo Krzesinski
+M: Herton Ronaldo Krzesinski
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/topstar-laptop.c
diff --git a/trunk/Makefile b/trunk/Makefile
index 5e40aa2acbff..d6592b63c8cb 100644
--- a/trunk/Makefile
+++ b/trunk/Makefile
@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 38
-EXTRAVERSION = -rc5
+EXTRAVERSION =
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*
diff --git a/trunk/arch/alpha/Kconfig b/trunk/arch/alpha/Kconfig
index 47f63d480141..cc31bec2e316 100644
--- a/trunk/arch/alpha/Kconfig
+++ b/trunk/arch/alpha/Kconfig
@@ -11,6 +11,7 @@ config ALPHA
select HAVE_GENERIC_HARDIRQS
select GENERIC_IRQ_PROBE
select AUTO_IRQ_AFFINITY if SMP
+ select GENERIC_HARDIRQS_NO_DEPRECATED
help
The Alpha is a 64-bit general-purpose processor designed and
marketed by the Digital Equipment Corporation of blessed memory,
diff --git a/trunk/arch/alpha/include/asm/fcntl.h b/trunk/arch/alpha/include/asm/fcntl.h
index 70145cbb21cb..1b71ca70c9f6 100644
--- a/trunk/arch/alpha/include/asm/fcntl.h
+++ b/trunk/arch/alpha/include/asm/fcntl.h
@@ -31,6 +31,8 @@
#define __O_SYNC 020000000
#define O_SYNC (__O_SYNC|O_DSYNC)
+#define O_PATH 040000000
+
#define F_GETLK 7
#define F_SETLK 8
#define F_SETLKW 9
diff --git a/trunk/arch/alpha/include/asm/futex.h b/trunk/arch/alpha/include/asm/futex.h
index 945de222ab91..e8a761aee088 100644
--- a/trunk/arch/alpha/include/asm/futex.h
+++ b/trunk/arch/alpha/include/asm/futex.h
@@ -29,7 +29,7 @@
: "r" (uaddr), "r"(oparg) \
: "memory")
-static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -39,7 +39,7 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -81,21 +81,23 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int prev, cmp;
+ int ret = 0, cmp;
+ u32 prev;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
__asm__ __volatile__ (
__ASM_SMP_MB
- "1: ldl_l %0,0(%2)\n"
- " cmpeq %0,%3,%1\n"
- " beq %1,3f\n"
- " mov %4,%1\n"
- "2: stl_c %1,0(%2)\n"
- " beq %1,4f\n"
+ "1: ldl_l %1,0(%3)\n"
+ " cmpeq %1,%4,%2\n"
+ " beq %2,3f\n"
+ " mov %5,%2\n"
+ "2: stl_c %2,0(%3)\n"
+ " beq %2,4f\n"
"3: .subsection 2\n"
"4: br 1b\n"
" .previous\n"
@@ -105,11 +107,12 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
" .long 2b-.\n"
" lda $31,3b-2b(%0)\n"
" .previous\n"
- : "=&r"(prev), "=&r"(cmp)
+ : "+r"(ret), "=&r"(prev), "=&r"(cmp)
: "r"(uaddr), "r"((long)oldval), "r"(newval)
: "memory");
- return prev;
+ *uval = prev;
+ return ret;
}
#endif /* __KERNEL__ */
diff --git a/trunk/arch/alpha/include/asm/rwsem.h b/trunk/arch/alpha/include/asm/rwsem.h
index 1570c0b54336..a83bbea62c67 100644
--- a/trunk/arch/alpha/include/asm/rwsem.h
+++ b/trunk/arch/alpha/include/asm/rwsem.h
@@ -13,44 +13,13 @@
#ifdef __KERNEL__
#include <linux/compiler.h>
-#include <linux/list.h>
-#include <linux/spinlock.h>
-struct rwsem_waiter;
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
-
-/*
- * the semaphore definition
- */
-struct rw_semaphore {
- long count;
#define RWSEM_UNLOCKED_VALUE 0x0000000000000000L
#define RWSEM_ACTIVE_BIAS 0x0000000000000001L
#define RWSEM_ACTIVE_MASK 0x00000000ffffffffL
#define RWSEM_WAITING_BIAS (-0x0000000100000000L)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
- spinlock_t wait_lock;
- struct list_head wait_list;
-};
-
-#define __RWSEM_INITIALIZER(name) \
- { RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, \
- LIST_HEAD_INIT((name).wait_list) }
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-static inline void init_rwsem(struct rw_semaphore *sem)
-{
- sem->count = RWSEM_UNLOCKED_VALUE;
- spin_lock_init(&sem->wait_lock);
- INIT_LIST_HEAD(&sem->wait_list);
-}
static inline void __down_read(struct rw_semaphore *sem)
{
@@ -250,10 +219,5 @@ static inline long rwsem_atomic_update(long val, struct rw_semaphore *sem)
#endif
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* __KERNEL__ */
#endif /* _ALPHA_RWSEM_H */
diff --git a/trunk/arch/alpha/kernel/irq.c b/trunk/arch/alpha/kernel/irq.c
index 9ab234f48dd8..a19d60082299 100644
--- a/trunk/arch/alpha/kernel/irq.c
+++ b/trunk/arch/alpha/kernel/irq.c
@@ -44,11 +44,16 @@ static char irq_user_affinity[NR_IRQS];
int irq_select_affinity(unsigned int irq)
{
- struct irq_desc *desc = irq_to_desc[irq];
+ struct irq_data *data = irq_get_irq_data(irq);
+ struct irq_chip *chip;
static int last_cpu;
int cpu = last_cpu + 1;
- if (!desc || !get_irq_desc_chip(desc)->set_affinity || irq_user_affinity[irq])
+ if (!data)
+ return 1;
+ chip = irq_data_get_irq_chip(data);
+
+ if (!chip->irq_set_affinity || irq_user_affinity[irq])
return 1;
while (!cpu_possible(cpu) ||
@@ -56,8 +61,8 @@ int irq_select_affinity(unsigned int irq)
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu;
- cpumask_copy(desc->affinity, cpumask_of(cpu));
- get_irq_desc_chip(desc)->set_affinity(irq, cpumask_of(cpu));
+ cpumask_copy(data->affinity, cpumask_of(cpu));
+ chip->irq_set_affinity(data, cpumask_of(cpu), false);
return 0;
}
#endif /* CONFIG_SMP */
diff --git a/trunk/arch/alpha/kernel/irq_alpha.c b/trunk/arch/alpha/kernel/irq_alpha.c
index 2d0679b60939..411ca11d0a18 100644
--- a/trunk/arch/alpha/kernel/irq_alpha.c
+++ b/trunk/arch/alpha/kernel/irq_alpha.c
@@ -228,14 +228,9 @@ struct irqaction timer_irqaction = {
void __init
init_rtc_irq(void)
{
- struct irq_desc *desc = irq_to_desc(RTC_IRQ);
-
- if (desc) {
- desc->status |= IRQ_DISABLED;
- set_irq_chip_and_handler_name(RTC_IRQ, &no_irq_chip,
- handle_simple_irq, "RTC");
- setup_irq(RTC_IRQ, &timer_irqaction);
- }
+ set_irq_chip_and_handler_name(RTC_IRQ, &no_irq_chip,
+ handle_simple_irq, "RTC");
+ setup_irq(RTC_IRQ, &timer_irqaction);
}
/* Dummy irqactions. */
diff --git a/trunk/arch/alpha/kernel/irq_i8259.c b/trunk/arch/alpha/kernel/irq_i8259.c
index 956ea0ed1694..c7cc9813e45f 100644
--- a/trunk/arch/alpha/kernel/irq_i8259.c
+++ b/trunk/arch/alpha/kernel/irq_i8259.c
@@ -33,10 +33,10 @@ i8259_update_irq_hw(unsigned int irq, unsigned long mask)
}
inline void
-i8259a_enable_irq(unsigned int irq)
+i8259a_enable_irq(struct irq_data *d)
{
spin_lock(&i8259_irq_lock);
- i8259_update_irq_hw(irq, cached_irq_mask &= ~(1 << irq));
+ i8259_update_irq_hw(d->irq, cached_irq_mask &= ~(1 << d->irq));
spin_unlock(&i8259_irq_lock);
}
@@ -47,16 +47,18 @@ __i8259a_disable_irq(unsigned int irq)
}
void
-i8259a_disable_irq(unsigned int irq)
+i8259a_disable_irq(struct irq_data *d)
{
spin_lock(&i8259_irq_lock);
- __i8259a_disable_irq(irq);
+ __i8259a_disable_irq(d->irq);
spin_unlock(&i8259_irq_lock);
}
void
-i8259a_mask_and_ack_irq(unsigned int irq)
+i8259a_mask_and_ack_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
+
spin_lock(&i8259_irq_lock);
__i8259a_disable_irq(irq);
@@ -71,9 +73,9 @@ i8259a_mask_and_ack_irq(unsigned int irq)
struct irq_chip i8259a_irq_type = {
.name = "XT-PIC",
- .unmask = i8259a_enable_irq,
- .mask = i8259a_disable_irq,
- .mask_ack = i8259a_mask_and_ack_irq,
+ .irq_unmask = i8259a_enable_irq,
+ .irq_mask = i8259a_disable_irq,
+ .irq_mask_ack = i8259a_mask_and_ack_irq,
};
void __init
diff --git a/trunk/arch/alpha/kernel/irq_impl.h b/trunk/arch/alpha/kernel/irq_impl.h
index b63ccd7386f1..d507a234b05d 100644
--- a/trunk/arch/alpha/kernel/irq_impl.h
+++ b/trunk/arch/alpha/kernel/irq_impl.h
@@ -31,11 +31,9 @@ extern void init_rtc_irq(void);
extern void common_init_isa_dma(void);
-extern void i8259a_enable_irq(unsigned int);
-extern void i8259a_disable_irq(unsigned int);
-extern void i8259a_mask_and_ack_irq(unsigned int);
-extern unsigned int i8259a_startup_irq(unsigned int);
-extern void i8259a_end_irq(unsigned int);
+extern void i8259a_enable_irq(struct irq_data *d);
+extern void i8259a_disable_irq(struct irq_data *d);
+extern void i8259a_mask_and_ack_irq(struct irq_data *d);
extern struct irq_chip i8259a_irq_type;
extern void init_i8259a_irqs(void);
diff --git a/trunk/arch/alpha/kernel/irq_pyxis.c b/trunk/arch/alpha/kernel/irq_pyxis.c
index 2863458c853e..b30227fa7f5f 100644
--- a/trunk/arch/alpha/kernel/irq_pyxis.c
+++ b/trunk/arch/alpha/kernel/irq_pyxis.c
@@ -29,21 +29,21 @@ pyxis_update_irq_hw(unsigned long mask)
}
static inline void
-pyxis_enable_irq(unsigned int irq)
+pyxis_enable_irq(struct irq_data *d)
{
- pyxis_update_irq_hw(cached_irq_mask |= 1UL << (irq - 16));
+ pyxis_update_irq_hw(cached_irq_mask |= 1UL << (d->irq - 16));
}
static void
-pyxis_disable_irq(unsigned int irq)
+pyxis_disable_irq(struct irq_data *d)
{
- pyxis_update_irq_hw(cached_irq_mask &= ~(1UL << (irq - 16)));
+ pyxis_update_irq_hw(cached_irq_mask &= ~(1UL << (d->irq - 16)));
}
static void
-pyxis_mask_and_ack_irq(unsigned int irq)
+pyxis_mask_and_ack_irq(struct irq_data *d)
{
- unsigned long bit = 1UL << (irq - 16);
+ unsigned long bit = 1UL << (d->irq - 16);
unsigned long mask = cached_irq_mask &= ~bit;
/* Disable the interrupt. */
@@ -58,9 +58,9 @@ pyxis_mask_and_ack_irq(unsigned int irq)
static struct irq_chip pyxis_irq_type = {
.name = "PYXIS",
- .mask_ack = pyxis_mask_and_ack_irq,
- .mask = pyxis_disable_irq,
- .unmask = pyxis_enable_irq,
+ .irq_mask_ack = pyxis_mask_and_ack_irq,
+ .irq_mask = pyxis_disable_irq,
+ .irq_unmask = pyxis_enable_irq,
};
void
@@ -103,7 +103,7 @@ init_pyxis_irqs(unsigned long ignore_mask)
if ((ignore_mask >> i) & 1)
continue;
set_irq_chip_and_handler(i, &pyxis_irq_type, handle_level_irq);
- irq_to_desc(i)->status |= IRQ_LEVEL;
+ irq_set_status_flags(i, IRQ_LEVEL);
}
setup_irq(16+7, &isa_cascade_irqaction);
diff --git a/trunk/arch/alpha/kernel/irq_srm.c b/trunk/arch/alpha/kernel/irq_srm.c
index 0e57e828b413..82a47bba41c4 100644
--- a/trunk/arch/alpha/kernel/irq_srm.c
+++ b/trunk/arch/alpha/kernel/irq_srm.c
@@ -18,27 +18,27 @@
DEFINE_SPINLOCK(srm_irq_lock);
static inline void
-srm_enable_irq(unsigned int irq)
+srm_enable_irq(struct irq_data *d)
{
spin_lock(&srm_irq_lock);
- cserve_ena(irq - 16);
+ cserve_ena(d->irq - 16);
spin_unlock(&srm_irq_lock);
}
static void
-srm_disable_irq(unsigned int irq)
+srm_disable_irq(struct irq_data *d)
{
spin_lock(&srm_irq_lock);
- cserve_dis(irq - 16);
+ cserve_dis(d->irq - 16);
spin_unlock(&srm_irq_lock);
}
/* Handle interrupts from the SRM, assuming no additional weirdness. */
static struct irq_chip srm_irq_type = {
.name = "SRM",
- .unmask = srm_enable_irq,
- .mask = srm_disable_irq,
- .mask_ack = srm_disable_irq,
+ .irq_unmask = srm_enable_irq,
+ .irq_mask = srm_disable_irq,
+ .irq_mask_ack = srm_disable_irq,
};
void __init
@@ -52,7 +52,7 @@ init_srm_irqs(long max, unsigned long ignore_mask)
if (i < 64 && ((ignore_mask >> i) & 1))
continue;
set_irq_chip_and_handler(i, &srm_irq_type, handle_level_irq);
- irq_to_desc(i)->status |= IRQ_LEVEL;
+ irq_set_status_flags(i, IRQ_LEVEL);
}
}
diff --git a/trunk/arch/alpha/kernel/osf_sys.c b/trunk/arch/alpha/kernel/osf_sys.c
index fe698b5045e9..376f22130791 100644
--- a/trunk/arch/alpha/kernel/osf_sys.c
+++ b/trunk/arch/alpha/kernel/osf_sys.c
@@ -230,44 +230,24 @@ linux_to_osf_statfs(struct kstatfs *linux_stat, struct osf_statfs __user *osf_st
return copy_to_user(osf_stat, &tmp_stat, bufsiz) ? -EFAULT : 0;
}
-static int
-do_osf_statfs(struct path *path, struct osf_statfs __user *buffer,
- unsigned long bufsiz)
+SYSCALL_DEFINE3(osf_statfs, const char __user *, pathname,
+ struct osf_statfs __user *, buffer, unsigned long, bufsiz)
{
struct kstatfs linux_stat;
- int error = vfs_statfs(path, &linux_stat);
+ int error = user_statfs(pathname, &linux_stat);
if (!error)
error = linux_to_osf_statfs(&linux_stat, buffer, bufsiz);
return error;
}
-SYSCALL_DEFINE3(osf_statfs, const char __user *, pathname,
- struct osf_statfs __user *, buffer, unsigned long, bufsiz)
-{
- struct path path;
- int retval;
-
- retval = user_path(pathname, &path);
- if (!retval) {
- retval = do_osf_statfs(&path, buffer, bufsiz);
- path_put(&path);
- }
- return retval;
-}
-
SYSCALL_DEFINE3(osf_fstatfs, unsigned long, fd,
struct osf_statfs __user *, buffer, unsigned long, bufsiz)
{
- struct file *file;
- int retval;
-
- retval = -EBADF;
- file = fget(fd);
- if (file) {
- retval = do_osf_statfs(&file->f_path, buffer, bufsiz);
- fput(file);
- }
- return retval;
+ struct kstatfs linux_stat;
+ int error = fd_statfs(fd, &linux_stat);
+ if (!error)
+ error = linux_to_osf_statfs(&linux_stat, buffer, bufsiz);
+ return error;
}
/*
diff --git a/trunk/arch/alpha/kernel/sys_alcor.c b/trunk/arch/alpha/kernel/sys_alcor.c
index 7bef61768236..88d95e872f55 100644
--- a/trunk/arch/alpha/kernel/sys_alcor.c
+++ b/trunk/arch/alpha/kernel/sys_alcor.c
@@ -44,31 +44,31 @@ alcor_update_irq_hw(unsigned long mask)
}
static inline void
-alcor_enable_irq(unsigned int irq)
+alcor_enable_irq(struct irq_data *d)
{
- alcor_update_irq_hw(cached_irq_mask |= 1UL << (irq - 16));
+ alcor_update_irq_hw(cached_irq_mask |= 1UL << (d->irq - 16));
}
static void
-alcor_disable_irq(unsigned int irq)
+alcor_disable_irq(struct irq_data *d)
{
- alcor_update_irq_hw(cached_irq_mask &= ~(1UL << (irq - 16)));
+ alcor_update_irq_hw(cached_irq_mask &= ~(1UL << (d->irq - 16)));
}
static void
-alcor_mask_and_ack_irq(unsigned int irq)
+alcor_mask_and_ack_irq(struct irq_data *d)
{
- alcor_disable_irq(irq);
+ alcor_disable_irq(d);
/* On ALCOR/XLT, need to dismiss interrupt via GRU. */
- *(vuip)GRU_INT_CLEAR = 1 << (irq - 16); mb();
+ *(vuip)GRU_INT_CLEAR = 1 << (d->irq - 16); mb();
*(vuip)GRU_INT_CLEAR = 0; mb();
}
static void
-alcor_isa_mask_and_ack_irq(unsigned int irq)
+alcor_isa_mask_and_ack_irq(struct irq_data *d)
{
- i8259a_mask_and_ack_irq(irq);
+ i8259a_mask_and_ack_irq(d);
/* On ALCOR/XLT, need to dismiss interrupt via GRU. */
*(vuip)GRU_INT_CLEAR = 0x80000000; mb();
@@ -77,9 +77,9 @@ alcor_isa_mask_and_ack_irq(unsigned int irq)
static struct irq_chip alcor_irq_type = {
.name = "ALCOR",
- .unmask = alcor_enable_irq,
- .mask = alcor_disable_irq,
- .mask_ack = alcor_mask_and_ack_irq,
+ .irq_unmask = alcor_enable_irq,
+ .irq_mask = alcor_disable_irq,
+ .irq_mask_ack = alcor_mask_and_ack_irq,
};
static void
@@ -126,9 +126,9 @@ alcor_init_irq(void)
if (i >= 16+20 && i <= 16+30)
continue;
set_irq_chip_and_handler(i, &alcor_irq_type, handle_level_irq);
- irq_to_desc(i)->status |= IRQ_LEVEL;
+ irq_set_status_flags(i, IRQ_LEVEL);
}
- i8259a_irq_type.ack = alcor_isa_mask_and_ack_irq;
+ i8259a_irq_type.irq_ack = alcor_isa_mask_and_ack_irq;
init_i8259a_irqs();
common_init_isa_dma();
diff --git a/trunk/arch/alpha/kernel/sys_cabriolet.c b/trunk/arch/alpha/kernel/sys_cabriolet.c
index b0c916493aea..57eb6307bc27 100644
--- a/trunk/arch/alpha/kernel/sys_cabriolet.c
+++ b/trunk/arch/alpha/kernel/sys_cabriolet.c
@@ -46,22 +46,22 @@ cabriolet_update_irq_hw(unsigned int irq, unsigned long mask)
}
static inline void
-cabriolet_enable_irq(unsigned int irq)
+cabriolet_enable_irq(struct irq_data *d)
{
- cabriolet_update_irq_hw(irq, cached_irq_mask &= ~(1UL << irq));
+ cabriolet_update_irq_hw(d->irq, cached_irq_mask &= ~(1UL << d->irq));
}
static void
-cabriolet_disable_irq(unsigned int irq)
+cabriolet_disable_irq(struct irq_data *d)
{
- cabriolet_update_irq_hw(irq, cached_irq_mask |= 1UL << irq);
+ cabriolet_update_irq_hw(d->irq, cached_irq_mask |= 1UL << d->irq);
}
static struct irq_chip cabriolet_irq_type = {
.name = "CABRIOLET",
- .unmask = cabriolet_enable_irq,
- .mask = cabriolet_disable_irq,
- .mask_ack = cabriolet_disable_irq,
+ .irq_unmask = cabriolet_enable_irq,
+ .irq_mask = cabriolet_disable_irq,
+ .irq_mask_ack = cabriolet_disable_irq,
};
static void
@@ -107,7 +107,7 @@ common_init_irq(void (*srm_dev_int)(unsigned long v))
for (i = 16; i < 35; ++i) {
set_irq_chip_and_handler(i, &cabriolet_irq_type,
handle_level_irq);
- irq_to_desc(i)->status |= IRQ_LEVEL;
+ irq_set_status_flags(i, IRQ_LEVEL);
}
}
diff --git a/trunk/arch/alpha/kernel/sys_dp264.c b/trunk/arch/alpha/kernel/sys_dp264.c
index edad5f759ccd..481df4ecb651 100644
--- a/trunk/arch/alpha/kernel/sys_dp264.c
+++ b/trunk/arch/alpha/kernel/sys_dp264.c
@@ -98,37 +98,37 @@ tsunami_update_irq_hw(unsigned long mask)
}
static void
-dp264_enable_irq(unsigned int irq)
+dp264_enable_irq(struct irq_data *d)
{
spin_lock(&dp264_irq_lock);
- cached_irq_mask |= 1UL << irq;
+ cached_irq_mask |= 1UL << d->irq;
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}
static void
-dp264_disable_irq(unsigned int irq)
+dp264_disable_irq(struct irq_data *d)
{
spin_lock(&dp264_irq_lock);
- cached_irq_mask &= ~(1UL << irq);
+ cached_irq_mask &= ~(1UL << d->irq);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}
static void
-clipper_enable_irq(unsigned int irq)
+clipper_enable_irq(struct irq_data *d)
{
spin_lock(&dp264_irq_lock);
- cached_irq_mask |= 1UL << (irq - 16);
+ cached_irq_mask |= 1UL << (d->irq - 16);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}
static void
-clipper_disable_irq(unsigned int irq)
+clipper_disable_irq(struct irq_data *d)
{
spin_lock(&dp264_irq_lock);
- cached_irq_mask &= ~(1UL << (irq - 16));
+ cached_irq_mask &= ~(1UL << (d->irq - 16));
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}
@@ -149,10 +149,11 @@ cpu_set_irq_affinity(unsigned int irq, cpumask_t affinity)
}
static int
-dp264_set_affinity(unsigned int irq, const struct cpumask *affinity)
-{
+dp264_set_affinity(struct irq_data *d, const struct cpumask *affinity,
+ bool force)
+{
spin_lock(&dp264_irq_lock);
- cpu_set_irq_affinity(irq, *affinity);
+ cpu_set_irq_affinity(d->irq, *affinity);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
@@ -160,10 +161,11 @@ dp264_set_affinity(unsigned int irq, const struct cpumask *affinity)
}
static int
-clipper_set_affinity(unsigned int irq, const struct cpumask *affinity)
-{
+clipper_set_affinity(struct irq_data *d, const struct cpumask *affinity,
+ bool force)
+{
spin_lock(&dp264_irq_lock);
- cpu_set_irq_affinity(irq - 16, *affinity);
+ cpu_set_irq_affinity(d->irq - 16, *affinity);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
@@ -171,19 +173,19 @@ clipper_set_affinity(unsigned int irq, const struct cpumask *affinity)
}
static struct irq_chip dp264_irq_type = {
- .name = "DP264",
- .unmask = dp264_enable_irq,
- .mask = dp264_disable_irq,
- .mask_ack = dp264_disable_irq,
- .set_affinity = dp264_set_affinity,
+ .name = "DP264",
+ .irq_unmask = dp264_enable_irq,
+ .irq_mask = dp264_disable_irq,
+ .irq_mask_ack = dp264_disable_irq,
+ .irq_set_affinity = dp264_set_affinity,
};
static struct irq_chip clipper_irq_type = {
- .name = "CLIPPER",
- .unmask = clipper_enable_irq,
- .mask = clipper_disable_irq,
- .mask_ack = clipper_disable_irq,
- .set_affinity = clipper_set_affinity,
+ .name = "CLIPPER",
+ .irq_unmask = clipper_enable_irq,
+ .irq_mask = clipper_disable_irq,
+ .irq_mask_ack = clipper_disable_irq,
+ .irq_set_affinity = clipper_set_affinity,
};
static void
@@ -268,8 +270,8 @@ init_tsunami_irqs(struct irq_chip * ops, int imin, int imax)
{
long i;
for (i = imin; i <= imax; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, ops, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
}
diff --git a/trunk/arch/alpha/kernel/sys_eb64p.c b/trunk/arch/alpha/kernel/sys_eb64p.c
index ae5f29d127b0..402e908ffb3e 100644
--- a/trunk/arch/alpha/kernel/sys_eb64p.c
+++ b/trunk/arch/alpha/kernel/sys_eb64p.c
@@ -44,22 +44,22 @@ eb64p_update_irq_hw(unsigned int irq, unsigned long mask)
}
static inline void
-eb64p_enable_irq(unsigned int irq)
+eb64p_enable_irq(struct irq_data *d)
{
- eb64p_update_irq_hw(irq, cached_irq_mask &= ~(1 << irq));
+ eb64p_update_irq_hw(d->irq, cached_irq_mask &= ~(1 << d->irq));
}
static void
-eb64p_disable_irq(unsigned int irq)
+eb64p_disable_irq(struct irq_data *d)
{
- eb64p_update_irq_hw(irq, cached_irq_mask |= 1 << irq);
+ eb64p_update_irq_hw(d->irq, cached_irq_mask |= 1 << d->irq);
}
static struct irq_chip eb64p_irq_type = {
.name = "EB64P",
- .unmask = eb64p_enable_irq,
- .mask = eb64p_disable_irq,
- .mask_ack = eb64p_disable_irq,
+ .irq_unmask = eb64p_enable_irq,
+ .irq_mask = eb64p_disable_irq,
+ .irq_mask_ack = eb64p_disable_irq,
};
static void
@@ -118,9 +118,9 @@ eb64p_init_irq(void)
init_i8259a_irqs();
for (i = 16; i < 32; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &eb64p_irq_type, handle_level_irq);
- }
+ irq_set_status_flags(i, IRQ_LEVEL);
+ }
common_init_isa_dma();
setup_irq(16+5, &isa_cascade_irqaction);
diff --git a/trunk/arch/alpha/kernel/sys_eiger.c b/trunk/arch/alpha/kernel/sys_eiger.c
index 1121bc5c6c6c..0b44a54c1522 100644
--- a/trunk/arch/alpha/kernel/sys_eiger.c
+++ b/trunk/arch/alpha/kernel/sys_eiger.c
@@ -51,16 +51,18 @@ eiger_update_irq_hw(unsigned long irq, unsigned long mask)
}
static inline void
-eiger_enable_irq(unsigned int irq)
+eiger_enable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
unsigned long mask;
mask = (cached_irq_mask[irq >= 64] &= ~(1UL << (irq & 63)));
eiger_update_irq_hw(irq, mask);
}
static void
-eiger_disable_irq(unsigned int irq)
+eiger_disable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
unsigned long mask;
mask = (cached_irq_mask[irq >= 64] |= 1UL << (irq & 63));
eiger_update_irq_hw(irq, mask);
@@ -68,9 +70,9 @@ eiger_disable_irq(unsigned int irq)
static struct irq_chip eiger_irq_type = {
.name = "EIGER",
- .unmask = eiger_enable_irq,
- .mask = eiger_disable_irq,
- .mask_ack = eiger_disable_irq,
+ .irq_unmask = eiger_enable_irq,
+ .irq_mask = eiger_disable_irq,
+ .irq_mask_ack = eiger_disable_irq,
};
static void
@@ -136,8 +138,8 @@ eiger_init_irq(void)
init_i8259a_irqs();
for (i = 16; i < 128; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &eiger_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
}
diff --git a/trunk/arch/alpha/kernel/sys_jensen.c b/trunk/arch/alpha/kernel/sys_jensen.c
index 34f55e03d331..00341b75c8b2 100644
--- a/trunk/arch/alpha/kernel/sys_jensen.c
+++ b/trunk/arch/alpha/kernel/sys_jensen.c
@@ -63,34 +63,34 @@
*/
static void
-jensen_local_enable(unsigned int irq)
+jensen_local_enable(struct irq_data *d)
{
/* the parport is really hw IRQ 1, silly Jensen. */
- if (irq == 7)
- i8259a_enable_irq(1);
+ if (d->irq == 7)
+ i8259a_enable_irq(d);
}
static void
-jensen_local_disable(unsigned int irq)
+jensen_local_disable(struct irq_data *d)
{
/* the parport is really hw IRQ 1, silly Jensen. */
- if (irq == 7)
- i8259a_disable_irq(1);
+ if (d->irq == 7)
+ i8259a_disable_irq(d);
}
static void
-jensen_local_mask_ack(unsigned int irq)
+jensen_local_mask_ack(struct irq_data *d)
{
/* the parport is really hw IRQ 1, silly Jensen. */
- if (irq == 7)
- i8259a_mask_and_ack_irq(1);
+ if (d->irq == 7)
+ i8259a_mask_and_ack_irq(d);
}
static struct irq_chip jensen_local_irq_type = {
.name = "LOCAL",
- .unmask = jensen_local_enable,
- .mask = jensen_local_disable,
- .mask_ack = jensen_local_mask_ack,
+ .irq_unmask = jensen_local_enable,
+ .irq_mask = jensen_local_disable,
+ .irq_mask_ack = jensen_local_mask_ack,
};
static void
diff --git a/trunk/arch/alpha/kernel/sys_marvel.c b/trunk/arch/alpha/kernel/sys_marvel.c
index 2bfc9f1b1ddc..e61910734e41 100644
--- a/trunk/arch/alpha/kernel/sys_marvel.c
+++ b/trunk/arch/alpha/kernel/sys_marvel.c
@@ -104,9 +104,10 @@ io7_get_irq_ctl(unsigned int irq, struct io7 **pio7)
}
static void
-io7_enable_irq(unsigned int irq)
+io7_enable_irq(struct irq_data *d)
{
volatile unsigned long *ctl;
+ unsigned int irq = d->irq;
struct io7 *io7;
ctl = io7_get_irq_ctl(irq, &io7);
@@ -115,7 +116,7 @@ io7_enable_irq(unsigned int irq)
__func__, irq);
return;
}
-
+
spin_lock(&io7->irq_lock);
*ctl |= 1UL << 24;
mb();
@@ -124,9 +125,10 @@ io7_enable_irq(unsigned int irq)
}
static void
-io7_disable_irq(unsigned int irq)
+io7_disable_irq(struct irq_data *d)
{
volatile unsigned long *ctl;
+ unsigned int irq = d->irq;
struct io7 *io7;
ctl = io7_get_irq_ctl(irq, &io7);
@@ -135,7 +137,7 @@ io7_disable_irq(unsigned int irq)
__func__, irq);
return;
}
-
+
spin_lock(&io7->irq_lock);
*ctl &= ~(1UL << 24);
mb();
@@ -144,35 +146,29 @@ io7_disable_irq(unsigned int irq)
}
static void
-marvel_irq_noop(unsigned int irq)
-{
- return;
-}
-
-static unsigned int
-marvel_irq_noop_return(unsigned int irq)
-{
- return 0;
+marvel_irq_noop(struct irq_data *d)
+{
+ return;
}
static struct irq_chip marvel_legacy_irq_type = {
.name = "LEGACY",
- .mask = marvel_irq_noop,
- .unmask = marvel_irq_noop,
+ .irq_mask = marvel_irq_noop,
+ .irq_unmask = marvel_irq_noop,
};
static struct irq_chip io7_lsi_irq_type = {
.name = "LSI",
- .unmask = io7_enable_irq,
- .mask = io7_disable_irq,
- .mask_ack = io7_disable_irq,
+ .irq_unmask = io7_enable_irq,
+ .irq_mask = io7_disable_irq,
+ .irq_mask_ack = io7_disable_irq,
};
static struct irq_chip io7_msi_irq_type = {
.name = "MSI",
- .unmask = io7_enable_irq,
- .mask = io7_disable_irq,
- .ack = marvel_irq_noop,
+ .irq_unmask = io7_enable_irq,
+ .irq_mask = io7_disable_irq,
+ .irq_ack = marvel_irq_noop,
};
static void
@@ -280,8 +276,8 @@ init_io7_irqs(struct io7 *io7,
/* Set up the lsi irqs. */
for (i = 0; i < 128; ++i) {
- irq_to_desc(base + i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(base + i, lsi_ops, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
/* Disable the implemented irqs in hardware. */
@@ -294,8 +290,8 @@ init_io7_irqs(struct io7 *io7,
/* Set up the msi irqs. */
for (i = 128; i < (128 + 512); ++i) {
- irq_to_desc(base + i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(base + i, msi_ops, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
for (i = 0; i < 16; ++i)
diff --git a/trunk/arch/alpha/kernel/sys_mikasa.c b/trunk/arch/alpha/kernel/sys_mikasa.c
index bcc1639e8efb..cf7f43dd3147 100644
--- a/trunk/arch/alpha/kernel/sys_mikasa.c
+++ b/trunk/arch/alpha/kernel/sys_mikasa.c
@@ -43,22 +43,22 @@ mikasa_update_irq_hw(int mask)
}
static inline void
-mikasa_enable_irq(unsigned int irq)
+mikasa_enable_irq(struct irq_data *d)
{
- mikasa_update_irq_hw(cached_irq_mask |= 1 << (irq - 16));
+ mikasa_update_irq_hw(cached_irq_mask |= 1 << (d->irq - 16));
}
static void
-mikasa_disable_irq(unsigned int irq)
+mikasa_disable_irq(struct irq_data *d)
{
- mikasa_update_irq_hw(cached_irq_mask &= ~(1 << (irq - 16)));
+ mikasa_update_irq_hw(cached_irq_mask &= ~(1 << (d->irq - 16)));
}
static struct irq_chip mikasa_irq_type = {
.name = "MIKASA",
- .unmask = mikasa_enable_irq,
- .mask = mikasa_disable_irq,
- .mask_ack = mikasa_disable_irq,
+ .irq_unmask = mikasa_enable_irq,
+ .irq_mask = mikasa_disable_irq,
+ .irq_mask_ack = mikasa_disable_irq,
};
static void
@@ -98,8 +98,8 @@ mikasa_init_irq(void)
mikasa_update_irq_hw(0);
for (i = 16; i < 32; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &mikasa_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
init_i8259a_irqs();
diff --git a/trunk/arch/alpha/kernel/sys_noritake.c b/trunk/arch/alpha/kernel/sys_noritake.c
index e88f4ae1260e..92bc188e94a9 100644
--- a/trunk/arch/alpha/kernel/sys_noritake.c
+++ b/trunk/arch/alpha/kernel/sys_noritake.c
@@ -48,22 +48,22 @@ noritake_update_irq_hw(int irq, int mask)
}
static void
-noritake_enable_irq(unsigned int irq)
+noritake_enable_irq(struct irq_data *d)
{
- noritake_update_irq_hw(irq, cached_irq_mask |= 1 << (irq - 16));
+ noritake_update_irq_hw(d->irq, cached_irq_mask |= 1 << (d->irq - 16));
}
static void
-noritake_disable_irq(unsigned int irq)
+noritake_disable_irq(struct irq_data *d)
{
- noritake_update_irq_hw(irq, cached_irq_mask &= ~(1 << (irq - 16)));
+ noritake_update_irq_hw(d->irq, cached_irq_mask &= ~(1 << (d->irq - 16)));
}
static struct irq_chip noritake_irq_type = {
.name = "NORITAKE",
- .unmask = noritake_enable_irq,
- .mask = noritake_disable_irq,
- .mask_ack = noritake_disable_irq,
+ .irq_unmask = noritake_enable_irq,
+ .irq_mask = noritake_disable_irq,
+ .irq_mask_ack = noritake_disable_irq,
};
static void
@@ -127,8 +127,8 @@ noritake_init_irq(void)
outw(0, 0x54c);
for (i = 16; i < 48; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &noritake_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
init_i8259a_irqs();
diff --git a/trunk/arch/alpha/kernel/sys_rawhide.c b/trunk/arch/alpha/kernel/sys_rawhide.c
index 6a51364dd1cc..936d4140ed5f 100644
--- a/trunk/arch/alpha/kernel/sys_rawhide.c
+++ b/trunk/arch/alpha/kernel/sys_rawhide.c
@@ -56,9 +56,10 @@ rawhide_update_irq_hw(int hose, int mask)
(((h) < MCPCIA_MAX_HOSES) && (cached_irq_masks[(h)] != 0))
static inline void
-rawhide_enable_irq(unsigned int irq)
+rawhide_enable_irq(struct irq_data *d)
{
unsigned int mask, hose;
+ unsigned int irq = d->irq;
irq -= 16;
hose = irq / 24;
@@ -76,9 +77,10 @@ rawhide_enable_irq(unsigned int irq)
}
static void
-rawhide_disable_irq(unsigned int irq)
+rawhide_disable_irq(struct irq_data *d)
{
unsigned int mask, hose;
+ unsigned int irq = d->irq;
irq -= 16;
hose = irq / 24;
@@ -96,9 +98,10 @@ rawhide_disable_irq(unsigned int irq)
}
static void
-rawhide_mask_and_ack_irq(unsigned int irq)
+rawhide_mask_and_ack_irq(struct irq_data *d)
{
unsigned int mask, mask1, hose;
+ unsigned int irq = d->irq;
irq -= 16;
hose = irq / 24;
@@ -123,9 +126,9 @@ rawhide_mask_and_ack_irq(unsigned int irq)
static struct irq_chip rawhide_irq_type = {
.name = "RAWHIDE",
- .unmask = rawhide_enable_irq,
- .mask = rawhide_disable_irq,
- .mask_ack = rawhide_mask_and_ack_irq,
+ .irq_unmask = rawhide_enable_irq,
+ .irq_mask = rawhide_disable_irq,
+ .irq_mask_ack = rawhide_mask_and_ack_irq,
};
static void
@@ -177,8 +180,8 @@ rawhide_init_irq(void)
}
for (i = 16; i < 128; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &rawhide_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
init_i8259a_irqs();
diff --git a/trunk/arch/alpha/kernel/sys_rx164.c b/trunk/arch/alpha/kernel/sys_rx164.c
index 89e7e37ec84c..cea22a62913b 100644
--- a/trunk/arch/alpha/kernel/sys_rx164.c
+++ b/trunk/arch/alpha/kernel/sys_rx164.c
@@ -47,22 +47,22 @@ rx164_update_irq_hw(unsigned long mask)
}
static inline void
-rx164_enable_irq(unsigned int irq)
+rx164_enable_irq(struct irq_data *d)
{
- rx164_update_irq_hw(cached_irq_mask |= 1UL << (irq - 16));
+ rx164_update_irq_hw(cached_irq_mask |= 1UL << (d->irq - 16));
}
static void
-rx164_disable_irq(unsigned int irq)
+rx164_disable_irq(struct irq_data *d)
{
- rx164_update_irq_hw(cached_irq_mask &= ~(1UL << (irq - 16)));
+ rx164_update_irq_hw(cached_irq_mask &= ~(1UL << (d->irq - 16)));
}
static struct irq_chip rx164_irq_type = {
.name = "RX164",
- .unmask = rx164_enable_irq,
- .mask = rx164_disable_irq,
- .mask_ack = rx164_disable_irq,
+ .irq_unmask = rx164_enable_irq,
+ .irq_mask = rx164_disable_irq,
+ .irq_mask_ack = rx164_disable_irq,
};
static void
@@ -99,8 +99,8 @@ rx164_init_irq(void)
rx164_update_irq_hw(0);
for (i = 16; i < 40; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &rx164_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
init_i8259a_irqs();
diff --git a/trunk/arch/alpha/kernel/sys_sable.c b/trunk/arch/alpha/kernel/sys_sable.c
index 5c4423d1b06c..a349538aabc9 100644
--- a/trunk/arch/alpha/kernel/sys_sable.c
+++ b/trunk/arch/alpha/kernel/sys_sable.c
@@ -443,11 +443,11 @@ lynx_swizzle(struct pci_dev *dev, u8 *pinp)
/* GENERIC irq routines */
static inline void
-sable_lynx_enable_irq(unsigned int irq)
+sable_lynx_enable_irq(struct irq_data *d)
{
unsigned long bit, mask;
- bit = sable_lynx_irq_swizzle->irq_to_mask[irq];
+ bit = sable_lynx_irq_swizzle->irq_to_mask[d->irq];
spin_lock(&sable_lynx_irq_lock);
mask = sable_lynx_irq_swizzle->shadow_mask &= ~(1UL << bit);
sable_lynx_irq_swizzle->update_irq_hw(bit, mask);
@@ -459,11 +459,11 @@ sable_lynx_enable_irq(unsigned int irq)
}
static void
-sable_lynx_disable_irq(unsigned int irq)
+sable_lynx_disable_irq(struct irq_data *d)
{
unsigned long bit, mask;
- bit = sable_lynx_irq_swizzle->irq_to_mask[irq];
+ bit = sable_lynx_irq_swizzle->irq_to_mask[d->irq];
spin_lock(&sable_lynx_irq_lock);
mask = sable_lynx_irq_swizzle->shadow_mask |= 1UL << bit;
sable_lynx_irq_swizzle->update_irq_hw(bit, mask);
@@ -475,11 +475,11 @@ sable_lynx_disable_irq(unsigned int irq)
}
static void
-sable_lynx_mask_and_ack_irq(unsigned int irq)
+sable_lynx_mask_and_ack_irq(struct irq_data *d)
{
unsigned long bit, mask;
- bit = sable_lynx_irq_swizzle->irq_to_mask[irq];
+ bit = sable_lynx_irq_swizzle->irq_to_mask[d->irq];
spin_lock(&sable_lynx_irq_lock);
mask = sable_lynx_irq_swizzle->shadow_mask |= 1UL << bit;
sable_lynx_irq_swizzle->update_irq_hw(bit, mask);
@@ -489,9 +489,9 @@ sable_lynx_mask_and_ack_irq(unsigned int irq)
static struct irq_chip sable_lynx_irq_type = {
.name = "SABLE/LYNX",
- .unmask = sable_lynx_enable_irq,
- .mask = sable_lynx_disable_irq,
- .mask_ack = sable_lynx_mask_and_ack_irq,
+ .irq_unmask = sable_lynx_enable_irq,
+ .irq_mask = sable_lynx_disable_irq,
+ .irq_mask_ack = sable_lynx_mask_and_ack_irq,
};
static void
@@ -518,9 +518,9 @@ sable_lynx_init_irq(int nr_of_irqs)
long i;
for (i = 0; i < nr_of_irqs; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &sable_lynx_irq_type,
handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
common_init_isa_dma();
diff --git a/trunk/arch/alpha/kernel/sys_takara.c b/trunk/arch/alpha/kernel/sys_takara.c
index f8a1e8a862fb..42a5331f13c4 100644
--- a/trunk/arch/alpha/kernel/sys_takara.c
+++ b/trunk/arch/alpha/kernel/sys_takara.c
@@ -45,16 +45,18 @@ takara_update_irq_hw(unsigned long irq, unsigned long mask)
}
static inline void
-takara_enable_irq(unsigned int irq)
+takara_enable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
unsigned long mask;
mask = (cached_irq_mask[irq >= 64] &= ~(1UL << (irq & 63)));
takara_update_irq_hw(irq, mask);
}
static void
-takara_disable_irq(unsigned int irq)
+takara_disable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
unsigned long mask;
mask = (cached_irq_mask[irq >= 64] |= 1UL << (irq & 63));
takara_update_irq_hw(irq, mask);
@@ -62,9 +64,9 @@ takara_disable_irq(unsigned int irq)
static struct irq_chip takara_irq_type = {
.name = "TAKARA",
- .unmask = takara_enable_irq,
- .mask = takara_disable_irq,
- .mask_ack = takara_disable_irq,
+ .irq_unmask = takara_enable_irq,
+ .irq_mask = takara_disable_irq,
+ .irq_mask_ack = takara_disable_irq,
};
static void
@@ -136,8 +138,8 @@ takara_init_irq(void)
takara_update_irq_hw(i, -1);
for (i = 16; i < 128; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, &takara_irq_type, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
common_init_isa_dma();
diff --git a/trunk/arch/alpha/kernel/sys_titan.c b/trunk/arch/alpha/kernel/sys_titan.c
index e02494bf5ef3..8c13a0c77830 100644
--- a/trunk/arch/alpha/kernel/sys_titan.c
+++ b/trunk/arch/alpha/kernel/sys_titan.c
@@ -112,8 +112,9 @@ titan_update_irq_hw(unsigned long mask)
}
static inline void
-titan_enable_irq(unsigned int irq)
+titan_enable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
spin_lock(&titan_irq_lock);
titan_cached_irq_mask |= 1UL << (irq - 16);
titan_update_irq_hw(titan_cached_irq_mask);
@@ -121,8 +122,9 @@ titan_enable_irq(unsigned int irq)
}
static inline void
-titan_disable_irq(unsigned int irq)
+titan_disable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
spin_lock(&titan_irq_lock);
titan_cached_irq_mask &= ~(1UL << (irq - 16));
titan_update_irq_hw(titan_cached_irq_mask);
@@ -144,8 +146,10 @@ titan_cpu_set_irq_affinity(unsigned int irq, cpumask_t affinity)
}
static int
-titan_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
+titan_set_irq_affinity(struct irq_data *d, const struct cpumask *affinity,
+ bool force)
{
+ unsigned int irq = d->irq;
spin_lock(&titan_irq_lock);
titan_cpu_set_irq_affinity(irq - 16, *affinity);
titan_update_irq_hw(titan_cached_irq_mask);
@@ -175,17 +179,17 @@ init_titan_irqs(struct irq_chip * ops, int imin, int imax)
{
long i;
for (i = imin; i <= imax; ++i) {
- irq_to_desc(i)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i, ops, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
}
}
static struct irq_chip titan_irq_type = {
- .name = "TITAN",
- .unmask = titan_enable_irq,
- .mask = titan_disable_irq,
- .mask_ack = titan_disable_irq,
- .set_affinity = titan_set_irq_affinity,
+ .name = "TITAN",
+ .irq_unmask = titan_enable_irq,
+ .irq_mask = titan_disable_irq,
+ .irq_mask_ack = titan_disable_irq,
+ .irq_set_affinity = titan_set_irq_affinity,
};
static irqreturn_t
diff --git a/trunk/arch/alpha/kernel/sys_wildfire.c b/trunk/arch/alpha/kernel/sys_wildfire.c
index eec52594d410..ca60a387ef0a 100644
--- a/trunk/arch/alpha/kernel/sys_wildfire.c
+++ b/trunk/arch/alpha/kernel/sys_wildfire.c
@@ -104,10 +104,12 @@ wildfire_init_irq_hw(void)
}
static void
-wildfire_enable_irq(unsigned int irq)
+wildfire_enable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
+
if (irq < 16)
- i8259a_enable_irq(irq);
+ i8259a_enable_irq(d);
spin_lock(&wildfire_irq_lock);
set_bit(irq, &cached_irq_mask);
@@ -116,10 +118,12 @@ wildfire_enable_irq(unsigned int irq)
}
static void
-wildfire_disable_irq(unsigned int irq)
+wildfire_disable_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
+
if (irq < 16)
- i8259a_disable_irq(irq);
+ i8259a_disable_irq(d);
spin_lock(&wildfire_irq_lock);
clear_bit(irq, &cached_irq_mask);
@@ -128,10 +132,12 @@ wildfire_disable_irq(unsigned int irq)
}
static void
-wildfire_mask_and_ack_irq(unsigned int irq)
+wildfire_mask_and_ack_irq(struct irq_data *d)
{
+ unsigned int irq = d->irq;
+
if (irq < 16)
- i8259a_mask_and_ack_irq(irq);
+ i8259a_mask_and_ack_irq(d);
spin_lock(&wildfire_irq_lock);
clear_bit(irq, &cached_irq_mask);
@@ -141,9 +147,9 @@ wildfire_mask_and_ack_irq(unsigned int irq)
static struct irq_chip wildfire_irq_type = {
.name = "WILDFIRE",
- .unmask = wildfire_enable_irq,
- .mask = wildfire_disable_irq,
- .mask_ack = wildfire_mask_and_ack_irq,
+ .irq_unmask = wildfire_enable_irq,
+ .irq_mask = wildfire_disable_irq,
+ .irq_mask_ack = wildfire_mask_and_ack_irq,
};
static void __init
@@ -177,21 +183,21 @@ wildfire_init_irq_per_pca(int qbbno, int pcano)
for (i = 0; i < 16; ++i) {
if (i == 2)
continue;
- irq_to_desc(i+irq_bias)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i+irq_bias, &wildfire_irq_type,
handle_level_irq);
+ irq_set_status_flags(i + irq_bias, IRQ_LEVEL);
}
- irq_to_desc(36+irq_bias)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(36+irq_bias, &wildfire_irq_type,
handle_level_irq);
+ irq_set_status_flags(36 + irq_bias, IRQ_LEVEL);
for (i = 40; i < 64; ++i) {
- irq_to_desc(i+irq_bias)->status |= IRQ_LEVEL;
set_irq_chip_and_handler(i+irq_bias, &wildfire_irq_type,
handle_level_irq);
+ irq_set_status_flags(i + irq_bias, IRQ_LEVEL);
}
- setup_irq(32+irq_bias, &isa_enable);
+ setup_irq(32+irq_bias, &isa_enable);
}
static void __init
diff --git a/trunk/arch/alpha/kernel/time.c b/trunk/arch/alpha/kernel/time.c
index c1f3e7cb82a4..a58e84f1a63b 100644
--- a/trunk/arch/alpha/kernel/time.c
+++ b/trunk/arch/alpha/kernel/time.c
@@ -159,7 +159,7 @@ void read_persistent_clock(struct timespec *ts)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
irqreturn_t timer_interrupt(int irq, void *dev)
{
@@ -172,8 +172,6 @@ irqreturn_t timer_interrupt(int irq, void *dev)
profile_tick(CPU_PROFILING);
#endif
- write_seqlock(&xtime_lock);
-
/*
* Calculate how many ticks have passed since the last update,
* including any previous partial leftover. Save any resulting
@@ -187,9 +185,7 @@ irqreturn_t timer_interrupt(int irq, void *dev)
nticks = delta >> FIX_SHIFT;
if (nticks)
- do_timer(nticks);
-
- write_sequnlock(&xtime_lock);
+ xtime_update(nticks);
if (test_irq_work_pending()) {
clear_irq_work_pending();
diff --git a/trunk/arch/alpha/kernel/vmlinux.lds.S b/trunk/arch/alpha/kernel/vmlinux.lds.S
index 003ef4c02585..433be2a24f31 100644
--- a/trunk/arch/alpha/kernel/vmlinux.lds.S
+++ b/trunk/arch/alpha/kernel/vmlinux.lds.S
@@ -1,5 +1,6 @@
#include <asm-generic/vmlinux.lds.h>
#include <asm/thread_info.h>
+#include <asm/cache.h>
#include <asm/page.h>
OUTPUT_FORMAT("elf64-alpha")
@@ -38,7 +39,7 @@ SECTIONS
__init_begin = ALIGN(PAGE_SIZE);
INIT_TEXT_SECTION(PAGE_SIZE)
INIT_DATA_SECTION(16)
- PERCPU(PAGE_SIZE)
+ PERCPU(L1_CACHE_BYTES, PAGE_SIZE)
/* Align to THREAD_SIZE rather than PAGE_SIZE here so any padding page
needed for the THREAD_SIZE aligned init_task gets freed after init */
. = ALIGN(THREAD_SIZE);
@@ -46,7 +47,7 @@ SECTIONS
/* Freed after init ends here */
_data = .;
- RW_DATA_SECTION(64, PAGE_SIZE, THREAD_SIZE)
+ RW_DATA_SECTION(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE)
.got : {
*(.got)
diff --git a/trunk/arch/arm/Kconfig b/trunk/arch/arm/Kconfig
index 26d45e5b636b..166efa2a19cd 100644
--- a/trunk/arch/arm/Kconfig
+++ b/trunk/arch/arm/Kconfig
@@ -1177,6 +1177,31 @@ config ARM_ERRATA_743622
visible impact on the overall performance or power consumption of the
processor.
+config ARM_ERRATA_751472
+ bool "ARM errata: Interrupted ICIALLUIS may prevent completion of broadcasted operation"
+ depends on CPU_V7 && SMP
+ help
+ This option enables the workaround for the 751472 Cortex-A9 (prior
+ to r3p0) erratum. An interrupted ICIALLUIS operation may prevent the
+ completion of a following broadcasted operation if the second
+ operation is received by a CPU before the ICIALLUIS has completed,
+ potentially leading to corrupted entries in the cache or TLB.
+
+config ARM_ERRATA_753970
+ bool "ARM errata: cache sync operation may be faulty"
+ depends on CACHE_PL310
+ help
+ This option enables the workaround for the 753970 PL310 (r3p0) erratum.
+
+ Under some condition the effect of cache sync operation on
+ the store buffer still remains when the operation completes.
+ This means that the store buffer is always asked to drain and
+ this prevents it from merging any further writes. The workaround
+ is to replace the normal offset of cache sync operation (0x730)
+ by another offset targeting an unmapped PL310 register 0x740.
+ This has the same effect as the cache sync operation: store buffer
+ drain and waiting for all buffers empty.
+
endmenu
source "arch/arm/common/Kconfig"
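
The 753970 help text above describes the workaround in prose: the cache sync write is redirected from the real sync register at offset 0x730 to an unmapped PL310 register at 0x740, which still drains the store buffer but avoids the sticky-sync condition. The L2X0_DUMMY_REG definition added to cache-l2x0.h later in this series is that 0x740 offset. A sketch of how an outer-cache driver might apply it, assuming l2x0_base has been ioremap()ed elsewhere (illustrative only; the actual arch/arm/mm/cache-l2x0.c code may differ in detail):

  #include <linux/io.h>
  #include <asm/hardware/cache-l2x0.h>

  static void __iomem *l2x0_base;         /* assumed mapped during L2 init */

  static inline void foo_cache_sync(void)
  {
  #ifdef CONFIG_ARM_ERRATA_753970
          /*
           * Erratum 753970: poke the unmapped dummy register at 0x740 instead
           * of L2X0_CACHE_SYNC (0x730); the side effect, draining the store
           * buffer, is the same, without leaving the sync operation sticky.
           */
          writel_relaxed(0, l2x0_base + L2X0_DUMMY_REG);
  #else
          writel_relaxed(0, l2x0_base + L2X0_CACHE_SYNC);
  #endif
  }
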
diff --git a/trunk/arch/arm/Makefile b/trunk/arch/arm/Makefile
index c22c1adfedd6..6f7b29294c80 100644
--- a/trunk/arch/arm/Makefile
+++ b/trunk/arch/arm/Makefile
@@ -15,7 +15,7 @@ ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
LDFLAGS_vmlinux += --be8
endif
-OBJCOPYFLAGS :=-O binary -R .note -R .note.gnu.build-id -R .comment -S
+OBJCOPYFLAGS :=-O binary -R .comment -S
GZFLAGS :=-9
#KBUILD_CFLAGS +=-pipe
# Explicitly specifiy 32-bit ARM ISA since toolchain default can be -mthumb:
diff --git a/trunk/arch/arm/boot/compressed/.gitignore b/trunk/arch/arm/boot/compressed/.gitignore
index ab204db594d3..c6028967d336 100644
--- a/trunk/arch/arm/boot/compressed/.gitignore
+++ b/trunk/arch/arm/boot/compressed/.gitignore
@@ -1,3 +1,7 @@
font.c
-piggy.gz
+lib1funcs.S
+piggy.gzip
+piggy.lzo
+piggy.lzma
+vmlinux
vmlinux.lds
diff --git a/trunk/arch/arm/common/Kconfig b/trunk/arch/arm/common/Kconfig
index 778655f0257a..ea5ee4d067f3 100644
--- a/trunk/arch/arm/common/Kconfig
+++ b/trunk/arch/arm/common/Kconfig
@@ -6,6 +6,8 @@ config ARM_VIC
config ARM_VIC_NR
int
+ default 4 if ARCH_S5PV210
+ default 3 if ARCH_S5P6442 || ARCH_S5PC100
default 2
depends on ARM_VIC
help
diff --git a/trunk/arch/arm/include/asm/futex.h b/trunk/arch/arm/include/asm/futex.h
index b33fe7065b38..199a6b6de7f4 100644
--- a/trunk/arch/arm/include/asm/futex.h
+++ b/trunk/arch/arm/include/asm/futex.h
@@ -35,7 +35,7 @@
: "cc", "memory")
static inline int
-futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -46,7 +46,7 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable(); /* implies preempt_disable() */
@@ -88,36 +88,35 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int val;
+ int ret = 0;
+ u32 val;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- pagefault_disable(); /* implies preempt_disable() */
-
__asm__ __volatile__("@futex_atomic_cmpxchg_inatomic\n"
- "1: " T(ldr) " %0, [%3]\n"
- " teq %0, %1\n"
+ "1: " T(ldr) " %1, [%4]\n"
+ " teq %1, %2\n"
" it eq @ explicit IT needed for the 2b label\n"
- "2: " T(streq) " %2, [%3]\n"
+ "2: " T(streq) " %3, [%4]\n"
"3:\n"
" .pushsection __ex_table,\"a\"\n"
" .align 3\n"
" .long 1b, 4f, 2b, 4f\n"
" .popsection\n"
" .pushsection .fixup,\"ax\"\n"
- "4: mov %0, %4\n"
+ "4: mov %0, %5\n"
" b 3b\n"
" .popsection"
- : "=&r" (val)
+ : "+r" (ret), "=&r" (val)
: "r" (oldval), "r" (newval), "r" (uaddr), "Ir" (-EFAULT)
: "cc", "memory");
- pagefault_enable(); /* subsumes preempt_enable() */
-
- return val;
+ *uval = val;
+ return ret;
}
#endif /* !SMP */
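
The futex change above is part of a tree-wide interface change: futex_atomic_cmpxchg_inatomic() now returns 0 or -EFAULT and hands back the value it read through a separate *uval pointer, instead of returning the old value directly (which could not be told apart from an error code), and the pagefault_disable()/pagefault_enable() pair moves out of the arch helper into its generic caller. A hypothetical caller, showing only the new calling convention (the real users live in kernel/futex.c):

  #include <linux/errno.h>
  #include <linux/types.h>
  #include <linux/uaccess.h>
  #include <asm/futex.h>

  /* Hypothetical caller, not part of the patch. */
  static int try_claim_futex(u32 __user *uaddr, u32 expected, u32 new)
  {
          u32 curval;
          int ret;

          pagefault_disable();    /* the arch helper no longer does this itself */
          ret = futex_atomic_cmpxchg_inatomic(&curval, uaddr, expected, new);
          pagefault_enable();

          if (ret)                /* 0 on success, -EFAULT on a faulting access */
                  return ret;

          /* the exchange did not happen if another task changed the word first */
          return curval == expected ? 0 : -EAGAIN;
  }
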
diff --git a/trunk/arch/arm/include/asm/hardware/cache-l2x0.h b/trunk/arch/arm/include/asm/hardware/cache-l2x0.h
index 5aeec1e1735c..16bd48031583 100644
--- a/trunk/arch/arm/include/asm/hardware/cache-l2x0.h
+++ b/trunk/arch/arm/include/asm/hardware/cache-l2x0.h
@@ -36,6 +36,7 @@
#define L2X0_RAW_INTR_STAT 0x21C
#define L2X0_INTR_CLEAR 0x220
#define L2X0_CACHE_SYNC 0x730
+#define L2X0_DUMMY_REG 0x740
#define L2X0_INV_LINE_PA 0x770
#define L2X0_INV_WAY 0x77C
#define L2X0_CLEAN_LINE_PA 0x7B0
diff --git a/trunk/arch/arm/include/asm/hardware/sp810.h b/trunk/arch/arm/include/asm/hardware/sp810.h
index 721847dc68ab..e0d1c0cfa548 100644
--- a/trunk/arch/arm/include/asm/hardware/sp810.h
+++ b/trunk/arch/arm/include/asm/hardware/sp810.h
@@ -58,6 +58,9 @@
static inline void sysctl_soft_reset(void __iomem *base)
{
+ /* switch to slow mode */
+ writel(0x2, base + SCCTRL);
+
/* writing any value to SCSYSSTAT reg will reset system */
writel(0, base + SCSYSSTAT);
}
diff --git a/trunk/arch/arm/include/asm/mach/arch.h b/trunk/arch/arm/include/asm/mach/arch.h
index 3a0893a76a3b..bf13b814c1b8 100644
--- a/trunk/arch/arm/include/asm/mach/arch.h
+++ b/trunk/arch/arm/include/asm/mach/arch.h
@@ -15,10 +15,6 @@ struct meminfo;
struct sys_timer;
struct machine_desc {
- /*
- * Note! The first two elements are used
- * by assembler code in head.S, head-common.S
- */
unsigned int nr; /* architecture number */
const char *name; /* architecture name */
unsigned long boot_params; /* tagged list */
diff --git a/trunk/arch/arm/include/asm/pgalloc.h b/trunk/arch/arm/include/asm/pgalloc.h
index 9763be04f77e..22de005f159c 100644
--- a/trunk/arch/arm/include/asm/pgalloc.h
+++ b/trunk/arch/arm/include/asm/pgalloc.h
@@ -10,6 +10,8 @@
#ifndef _ASMARM_PGALLOC_H
#define _ASMARM_PGALLOC_H
+#include
+
#include
#include
#include
diff --git a/trunk/arch/arm/include/asm/tlb.h b/trunk/arch/arm/include/asm/tlb.h
index f41a6f57cd12..82dfe5d0c41e 100644
--- a/trunk/arch/arm/include/asm/tlb.h
+++ b/trunk/arch/arm/include/asm/tlb.h
@@ -18,16 +18,34 @@
#define __ASMARM_TLB_H
#include <asm/cacheflush.h>
-#include <asm/tlbflush.h>
#ifndef CONFIG_MMU
#include <linux/pagemap.h>
+
+#define tlb_flush(tlb) ((void) tlb)
+
#include <asm-generic/tlb.h>
#else /* !CONFIG_MMU */
+#include <linux/swap.h>
#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * We need to delay page freeing for SMP as other CPUs can access pages
+ * which have been removed but not yet had their TLB entries invalidated.
+ * Also, as ARMv7 speculative prefetch can drag new entries into the TLB,
+ * we need to apply this same delaying tactic to ensure correct operation.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_CPU_32v7)
+#define tlb_fast_mode(tlb) 0
+#define FREE_PTE_NR 500
+#else
+#define tlb_fast_mode(tlb) 1
+#define FREE_PTE_NR 0
+#endif
/*
* TLB handling. This allows us to remove pages from the page
@@ -36,12 +54,58 @@
struct mmu_gather {
struct mm_struct *mm;
unsigned int fullmm;
+ struct vm_area_struct *vma;
unsigned long range_start;
unsigned long range_end;
+ unsigned int nr;
+ struct page *pages[FREE_PTE_NR];
};
DECLARE_PER_CPU(struct mmu_gather, mmu_gathers);
+/*
+ * This is unnecessarily complex. There's three ways the TLB shootdown
+ * code is used:
+ * 1. Unmapping a range of vmas. See zap_page_range(), unmap_region().
+ * tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
+ * tlb->vma will be non-NULL.
+ * 2. Unmapping all vmas. See exit_mmap().
+ * tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
+ * tlb->vma will be non-NULL. Additionally, page tables will be freed.
+ * 3. Unmapping argument pages. See shift_arg_pages().
+ * tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
+ * tlb->vma will be NULL.
+ */
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+ if (tlb->fullmm || !tlb->vma)
+ flush_tlb_mm(tlb->mm);
+ else if (tlb->range_end > 0) {
+ flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
+ tlb->range_start = TASK_SIZE;
+ tlb->range_end = 0;
+ }
+}
+
+static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
+{
+ if (!tlb->fullmm) {
+ if (addr < tlb->range_start)
+ tlb->range_start = addr;
+ if (addr + PAGE_SIZE > tlb->range_end)
+ tlb->range_end = addr + PAGE_SIZE;
+ }
+}
+
+static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+ tlb_flush(tlb);
+ if (!tlb_fast_mode(tlb)) {
+ free_pages_and_swap_cache(tlb->pages, tlb->nr);
+ tlb->nr = 0;
+ }
+}
+
static inline struct mmu_gather *
tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
{
@@ -49,6 +113,8 @@ tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
tlb->mm = mm;
tlb->fullmm = full_mm_flush;
+ tlb->vma = NULL;
+ tlb->nr = 0;
return tlb;
}
@@ -56,8 +122,7 @@ tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
static inline void
tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
- if (tlb->fullmm)
- flush_tlb_mm(tlb->mm);
+ tlb_flush_mmu(tlb);
/* keep the page table cache within bounds */
check_pgt_cache();
@@ -71,12 +136,7 @@ tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
static inline void
tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
{
- if (!tlb->fullmm) {
- if (addr < tlb->range_start)
- tlb->range_start = addr;
- if (addr + PAGE_SIZE > tlb->range_end)
- tlb->range_end = addr + PAGE_SIZE;
- }
+ tlb_add_flush(tlb, addr);
}
/*
@@ -89,6 +149,7 @@ tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
if (!tlb->fullmm) {
flush_cache_range(vma, vma->vm_start, vma->vm_end);
+ tlb->vma = vma;
tlb->range_start = TASK_SIZE;
tlb->range_end = 0;
}
@@ -97,12 +158,30 @@ tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
static inline void
tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
- if (!tlb->fullmm && tlb->range_end > 0)
- flush_tlb_range(vma, tlb->range_start, tlb->range_end);
+ if (!tlb->fullmm)
+ tlb_flush(tlb);
+}
+
+static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
+{
+ if (tlb_fast_mode(tlb)) {
+ free_page_and_swap_cache(page);
+ } else {
+ tlb->pages[tlb->nr++] = page;
+ if (tlb->nr >= FREE_PTE_NR)
+ tlb_flush_mmu(tlb);
+ }
+}
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
+ unsigned long addr)
+{
+ pgtable_page_dtor(pte);
+ tlb_add_flush(tlb, addr);
+ tlb_remove_page(tlb, pte);
}
-#define tlb_remove_page(tlb,page) free_page_and_swap_cache(page)
-#define pte_free_tlb(tlb, ptep, addr) pte_free((tlb)->mm, ptep)
+#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
#define pmd_free_tlb(tlb, pmdp, addr) pmd_free((tlb)->mm, pmdp)
#define tlb_migrate_finish(mm) do { } while (0)
diff --git a/trunk/arch/arm/include/asm/tlbflush.h b/trunk/arch/arm/include/asm/tlbflush.h
index ce7378ea15a2..d2005de383b8 100644
--- a/trunk/arch/arm/include/asm/tlbflush.h
+++ b/trunk/arch/arm/include/asm/tlbflush.h
@@ -10,12 +10,7 @@
#ifndef _ASMARM_TLBFLUSH_H
#define _ASMARM_TLBFLUSH_H
-
-#ifndef CONFIG_MMU
-
-#define tlb_flush(tlb) ((void) tlb)
-
-#else /* CONFIG_MMU */
+#ifdef CONFIG_MMU
#include <asm/glue.h>
diff --git a/trunk/arch/arm/kernel/hw_breakpoint.c b/trunk/arch/arm/kernel/hw_breakpoint.c
index d600bd350704..44b84fe6e1b0 100644
--- a/trunk/arch/arm/kernel/hw_breakpoint.c
+++ b/trunk/arch/arm/kernel/hw_breakpoint.c
@@ -836,9 +836,11 @@ static int hw_breakpoint_pending(unsigned long addr, unsigned int fsr,
/*
* One-time initialisation.
*/
-static void reset_ctrl_regs(void *unused)
+static void reset_ctrl_regs(void *info)
{
- int i;
+ int i, cpu = smp_processor_id();
+ u32 dbg_power;
+ cpumask_t *cpumask = info;
/*
* v7 debug contains save and restore registers so that debug state
@@ -849,6 +851,17 @@ static void reset_ctrl_regs(void *unused)
* later on.
*/
if (debug_arch >= ARM_DEBUG_ARCH_V7_ECP14) {
+ /*
+ * Ensure sticky power-down is clear (i.e. debug logic is
+ * powered up).
+ */
+ asm volatile("mrc p14, 0, %0, c1, c5, 4" : "=r" (dbg_power));
+ if ((dbg_power & 0x1) == 0) {
+ pr_warning("CPU %d debug is powered down!\n", cpu);
+ cpumask_or(cpumask, cpumask, cpumask_of(cpu));
+ return;
+ }
+
/*
* Unconditionally clear the lock by writing a value
* other than 0xC5ACCE55 to the access register.
@@ -887,6 +900,7 @@ static struct notifier_block __cpuinitdata dbg_reset_nb = {
static int __init arch_hw_breakpoint_init(void)
{
u32 dscr;
+ cpumask_t cpumask = { CPU_BITS_NONE };
debug_arch = get_debug_arch();
@@ -911,7 +925,13 @@ static int __init arch_hw_breakpoint_init(void)
* Reset the breakpoint resources. We assume that a halting
* debugger will leave the world in a nice state for us.
*/
- on_each_cpu(reset_ctrl_regs, NULL, 1);
+ on_each_cpu(reset_ctrl_regs, &cpumask, 1);
+ if (!cpumask_empty(&cpumask)) {
+ core_num_brps = 0;
+ core_num_reserved_brps = 0;
+ core_num_wrps = 0;
+ return 0;
+ }
ARM_DBG_READ(c1, 0, dscr);
if (dscr & ARM_DSCR_HDBGEN) {
diff --git a/trunk/arch/arm/kernel/kprobes-decode.c b/trunk/arch/arm/kernel/kprobes-decode.c
index 2c1f0050c9c4..8f6ed43861f1 100644
--- a/trunk/arch/arm/kernel/kprobes-decode.c
+++ b/trunk/arch/arm/kernel/kprobes-decode.c
@@ -1437,7 +1437,7 @@ arm_kprobe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
return space_cccc_1100_010x(insn, asi);
- } else if ((insn & 0x0e000000) == 0x0c400000) {
+ } else if ((insn & 0x0e000000) == 0x0c000000) {
return space_cccc_110x(insn, asi);
diff --git a/trunk/arch/arm/kernel/pmu.c b/trunk/arch/arm/kernel/pmu.c
index b8af96ea62e6..2c79eec19262 100644
--- a/trunk/arch/arm/kernel/pmu.c
+++ b/trunk/arch/arm/kernel/pmu.c
@@ -97,28 +97,34 @@ set_irq_affinity(int irq,
irq, cpu);
return err;
#else
- return 0;
+ return -EINVAL;
#endif
}
static int
init_cpu_pmu(void)
{
- int i, err = 0;
+ int i, irqs, err = 0;
struct platform_device *pdev = pmu_devices[ARM_PMU_DEVICE_CPU];
- if (!pdev) {
- err = -ENODEV;
- goto out;
- }
+ if (!pdev)
+ return -ENODEV;
+
+ irqs = pdev->num_resources;
+
+ /*
+ * If we have a single PMU interrupt that we can't shift, assume that
+ * we're running on a uniprocessor machine and continue.
+ */
+ if (irqs == 1 && !irq_can_set_affinity(platform_get_irq(pdev, 0)))
+ return 0;
- for (i = 0; i < pdev->num_resources; ++i) {
+ for (i = 0; i < irqs; ++i) {
err = set_irq_affinity(platform_get_irq(pdev, i), i);
if (err)
break;
}
-out:
return err;
}
diff --git a/trunk/arch/arm/kernel/ptrace.c b/trunk/arch/arm/kernel/ptrace.c
index 19c6816db61e..b13e70f63d71 100644
--- a/trunk/arch/arm/kernel/ptrace.c
+++ b/trunk/arch/arm/kernel/ptrace.c
@@ -996,10 +996,10 @@ static int ptrace_gethbpregs(struct task_struct *tsk, long num,
while (!(arch_ctrl.len & 0x1))
arch_ctrl.len >>= 1;
- if (idx & 0x1)
- reg = encode_ctrl_reg(arch_ctrl);
- else
+ if (num & 0x1)
reg = bp->attr.bp_addr;
+ else
+ reg = encode_ctrl_reg(arch_ctrl);
}
put:
diff --git a/trunk/arch/arm/kernel/setup.c b/trunk/arch/arm/kernel/setup.c
index 420b8d6485d6..5ea4fb718b97 100644
--- a/trunk/arch/arm/kernel/setup.c
+++ b/trunk/arch/arm/kernel/setup.c
@@ -226,8 +226,8 @@ int cpu_architecture(void)
* Register 0 and check for VMSAv7 or PMSAv7 */
asm("mrc p15, 0, %0, c0, c1, 4"
: "=r" (mmfr0));
- if ((mmfr0 & 0x0000000f) == 0x00000003 ||
- (mmfr0 & 0x000000f0) == 0x00000030)
+ if ((mmfr0 & 0x0000000f) >= 0x00000003 ||
+ (mmfr0 & 0x000000f0) >= 0x00000030)
cpu_arch = CPU_ARCH_ARMv7;
else if ((mmfr0 & 0x0000000f) == 0x00000002 ||
(mmfr0 & 0x000000f0) == 0x00000020)
diff --git a/trunk/arch/arm/kernel/signal.c b/trunk/arch/arm/kernel/signal.c
index 907d5a620bca..abaf8445ce25 100644
--- a/trunk/arch/arm/kernel/signal.c
+++ b/trunk/arch/arm/kernel/signal.c
@@ -474,7 +474,9 @@ setup_return(struct pt_regs *regs, struct k_sigaction *ka,
unsigned long handler = (unsigned long)ka->sa.sa_handler;
unsigned long retcode;
int thumb = 0;
- unsigned long cpsr = regs->ARM_cpsr & ~PSR_f;
+ unsigned long cpsr = regs->ARM_cpsr & ~(PSR_f | PSR_E_BIT);
+
+ cpsr |= PSR_ENDSTATE;
/*
* Maybe we need to deliver a 32-bit signal to a 26-bit task.
diff --git a/trunk/arch/arm/kernel/time.c b/trunk/arch/arm/kernel/time.c
index 3d76bf233734..1ff46cabc7ef 100644
--- a/trunk/arch/arm/kernel/time.c
+++ b/trunk/arch/arm/kernel/time.c
@@ -107,9 +107,7 @@ void timer_tick(void)
{
profile_tick(CPU_PROFILING);
do_leds();
- write_seqlock(&xtime_lock);
- do_timer(1);
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
#endif
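
The timer changes above (alpha timer_interrupt() and ARM timer_tick()), like the clps711x handler a little further down, drop the open-coded xtime_lock handling: xtime_update() now wraps do_timer() in the seqlock itself. A minimal sketch of the resulting handler shape, assuming a simple one-tick-per-interrupt timer (the foo_ name is invented and the include list is a best guess, not lifted from the patch):

  #include <linux/interrupt.h>
  #include <linux/sched.h>
  #include <linux/time.h>
  #include <asm/irq_regs.h>

  static irqreturn_t foo_timer_interrupt(int irq, void *dev_id)
  {
          /*
           * xtime_update() takes xtime_lock internally, so the handler no
           * longer does write_seqlock(&xtime_lock)/write_sequnlock() itself.
           */
          xtime_update(1);                        /* one tick has elapsed */
  #ifndef CONFIG_SMP
          update_process_times(user_mode(get_irq_regs()));
  #endif
          return IRQ_HANDLED;
  }
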
diff --git a/trunk/arch/arm/kernel/vmlinux.lds.S b/trunk/arch/arm/kernel/vmlinux.lds.S
index 86b66f3f2031..28fea9b2d129 100644
--- a/trunk/arch/arm/kernel/vmlinux.lds.S
+++ b/trunk/arch/arm/kernel/vmlinux.lds.S
@@ -21,6 +21,12 @@
#define ARM_CPU_KEEP(x)
#endif
+#if defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)
+#define ARM_EXIT_KEEP(x) x
+#else
+#define ARM_EXIT_KEEP(x)
+#endif
+
OUTPUT_ARCH(arm)
ENTRY(stext)
@@ -43,6 +49,7 @@ SECTIONS
_sinittext = .;
HEAD_TEXT
INIT_TEXT
+ ARM_EXIT_KEEP(EXIT_TEXT)
_einittext = .;
ARM_CPU_DISCARD(PROC_INFO)
__arch_info_begin = .;
@@ -67,10 +74,11 @@ SECTIONS
#ifndef CONFIG_XIP_KERNEL
__init_begin = _stext;
INIT_DATA
+ ARM_EXIT_KEEP(EXIT_DATA)
#endif
}
- PERCPU(PAGE_SIZE)
+ PERCPU(32, PAGE_SIZE)
#ifndef CONFIG_XIP_KERNEL
. = ALIGN(PAGE_SIZE);
@@ -162,6 +170,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
__init_begin = .;
INIT_DATA
+ ARM_EXIT_KEEP(EXIT_DATA)
. = ALIGN(PAGE_SIZE);
__init_end = .;
#endif
@@ -247,6 +256,8 @@ SECTIONS
}
#endif
+ NOTES
+
BSS_SECTION(0, 0, 0)
_end = .;
diff --git a/trunk/arch/arm/mach-clps711x/include/mach/time.h b/trunk/arch/arm/mach-clps711x/include/mach/time.h
index 8fe283ccd1f3..61fef9129c6a 100644
--- a/trunk/arch/arm/mach-clps711x/include/mach/time.h
+++ b/trunk/arch/arm/mach-clps711x/include/mach/time.h
@@ -30,7 +30,7 @@ p720t_timer_interrupt(int irq, void *dev_id)
{
struct pt_regs *regs = get_irq_regs();
do_leds();
- do_timer(1);
+ xtime_update(1);
#ifndef CONFIG_SMP
update_process_times(user_mode(regs));
#endif
diff --git a/trunk/arch/arm/mach-davinci/cpufreq.c b/trunk/arch/arm/mach-davinci/cpufreq.c
index 343de73161fa..4a68c2b1ec11 100644
--- a/trunk/arch/arm/mach-davinci/cpufreq.c
+++ b/trunk/arch/arm/mach-davinci/cpufreq.c
@@ -132,7 +132,7 @@ static int davinci_target(struct cpufreq_policy *policy,
return ret;
}
-static int __init davinci_cpu_init(struct cpufreq_policy *policy)
+static int davinci_cpu_init(struct cpufreq_policy *policy)
{
int result = 0;
struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data;
diff --git a/trunk/arch/arm/mach-davinci/devices-da8xx.c b/trunk/arch/arm/mach-davinci/devices-da8xx.c
index 9eec63070e0c..beda8a4133a0 100644
--- a/trunk/arch/arm/mach-davinci/devices-da8xx.c
+++ b/trunk/arch/arm/mach-davinci/devices-da8xx.c
@@ -480,8 +480,15 @@ static struct platform_device da850_mcasp_device = {
.resource = da850_mcasp_resources,
};
+struct platform_device davinci_pcm_device = {
+ .name = "davinci-pcm-audio",
+ .id = -1,
+};
+
void __init da8xx_register_mcasp(int id, struct snd_platform_data *pdata)
{
+ platform_device_register(&davinci_pcm_device);
+
/* DA830/OMAP-L137 has 3 instances of McASP */
if (cpu_is_davinci_da830() && id == 1) {
da830_mcasp1_device.dev.platform_data = pdata;
diff --git a/trunk/arch/arm/mach-davinci/gpio-tnetv107x.c b/trunk/arch/arm/mach-davinci/gpio-tnetv107x.c
index d10298620e2c..3fa3e2867e19 100644
--- a/trunk/arch/arm/mach-davinci/gpio-tnetv107x.c
+++ b/trunk/arch/arm/mach-davinci/gpio-tnetv107x.c
@@ -58,7 +58,7 @@ static int tnetv107x_gpio_request(struct gpio_chip *chip, unsigned offset)
spin_lock_irqsave(&ctlr->lock, flags);
- gpio_reg_set_bit(&regs->enable, gpio);
+ gpio_reg_set_bit(regs->enable, gpio);
spin_unlock_irqrestore(&ctlr->lock, flags);
@@ -74,7 +74,7 @@ static void tnetv107x_gpio_free(struct gpio_chip *chip, unsigned offset)
spin_lock_irqsave(&ctlr->lock, flags);
- gpio_reg_clear_bit(&regs->enable, gpio);
+ gpio_reg_clear_bit(regs->enable, gpio);
spin_unlock_irqrestore(&ctlr->lock, flags);
}
@@ -88,7 +88,7 @@ static int tnetv107x_gpio_dir_in(struct gpio_chip *chip, unsigned offset)
spin_lock_irqsave(&ctlr->lock, flags);
- gpio_reg_set_bit(&regs->direction, gpio);
+ gpio_reg_set_bit(regs->direction, gpio);
spin_unlock_irqrestore(&ctlr->lock, flags);
@@ -106,11 +106,11 @@ static int tnetv107x_gpio_dir_out(struct gpio_chip *chip,
spin_lock_irqsave(&ctlr->lock, flags);
if (value)
- gpio_reg_set_bit(&regs->data_out, gpio);
+ gpio_reg_set_bit(regs->data_out, gpio);
else
- gpio_reg_clear_bit(&regs->data_out, gpio);
+ gpio_reg_clear_bit(regs->data_out, gpio);
- gpio_reg_clear_bit(&regs->direction, gpio);
+ gpio_reg_clear_bit(regs->direction, gpio);
spin_unlock_irqrestore(&ctlr->lock, flags);
@@ -124,7 +124,7 @@ static int tnetv107x_gpio_get(struct gpio_chip *chip, unsigned offset)
unsigned gpio = chip->base + offset;
int ret;
- ret = gpio_reg_get_bit(&regs->data_in, gpio);
+ ret = gpio_reg_get_bit(regs->data_in, gpio);
return ret ? 1 : 0;
}
@@ -140,9 +140,9 @@ static void tnetv107x_gpio_set(struct gpio_chip *chip,
spin_lock_irqsave(&ctlr->lock, flags);
if (value)
- gpio_reg_set_bit(&regs->data_out, gpio);
+ gpio_reg_set_bit(regs->data_out, gpio);
else
- gpio_reg_clear_bit(&regs->data_out, gpio);
+ gpio_reg_clear_bit(regs->data_out, gpio);
spin_unlock_irqrestore(&ctlr->lock, flags);
}
diff --git a/trunk/arch/arm/mach-davinci/include/mach/clkdev.h b/trunk/arch/arm/mach-davinci/include/mach/clkdev.h
index 730c49d1ebd8..14a504887189 100644
--- a/trunk/arch/arm/mach-davinci/include/mach/clkdev.h
+++ b/trunk/arch/arm/mach-davinci/include/mach/clkdev.h
@@ -1,6 +1,8 @@
#ifndef __MACH_CLKDEV_H
#define __MACH_CLKDEV_H
+struct clk;
+
static inline int __clk_get(struct clk *clk)
{
return 1;
diff --git a/trunk/arch/arm/mach-omap2/clkt_dpll.c b/trunk/arch/arm/mach-omap2/clkt_dpll.c
index 337392c3f549..acb7ae5b0a25 100644
--- a/trunk/arch/arm/mach-omap2/clkt_dpll.c
+++ b/trunk/arch/arm/mach-omap2/clkt_dpll.c
@@ -77,7 +77,7 @@ static int _dpll_test_fint(struct clk *clk, u8 n)
dd = clk->dpll_data;
/* DPLL divider must result in a valid jitter correction val */
- fint = clk->parent->rate / (n + 1);
+ fint = clk->parent->rate / n;
if (fint < DPLL_FINT_BAND1_MIN) {
pr_debug("rejecting n=%d due to Fint failure, "
diff --git a/trunk/arch/arm/mach-omap2/mailbox.c b/trunk/arch/arm/mach-omap2/mailbox.c
index 394413dc7deb..24b88504df0f 100644
--- a/trunk/arch/arm/mach-omap2/mailbox.c
+++ b/trunk/arch/arm/mach-omap2/mailbox.c
@@ -193,10 +193,12 @@ static void omap2_mbox_disable_irq(struct omap_mbox *mbox,
omap_mbox_type_t irq)
{
struct omap_mbox2_priv *p = mbox->priv;
- u32 l, bit = (irq == IRQ_TX) ? p->notfull_bit : p->newmsg_bit;
- l = mbox_read_reg(p->irqdisable);
- l &= ~bit;
- mbox_write_reg(l, p->irqdisable);
+ u32 bit = (irq == IRQ_TX) ? p->notfull_bit : p->newmsg_bit;
+
+ if (!cpu_is_omap44xx())
+ bit = mbox_read_reg(p->irqdisable) & ~bit;
+
+ mbox_write_reg(bit, p->irqdisable);
}
static void omap2_mbox_ack_irq(struct omap_mbox *mbox,
@@ -334,7 +336,7 @@ static struct omap_mbox mbox_iva_info = {
.priv = &omap2_mbox_iva_priv,
};
-struct omap_mbox *omap2_mboxes[] = { &mbox_iva_info, &mbox_dsp_info, NULL };
+struct omap_mbox *omap2_mboxes[] = { &mbox_dsp_info, &mbox_iva_info, NULL };
#endif
#if defined(CONFIG_ARCH_OMAP4)
diff --git a/trunk/arch/arm/mach-omap2/mux.c b/trunk/arch/arm/mach-omap2/mux.c
index 98148b6c36e9..6c84659cf846 100644
--- a/trunk/arch/arm/mach-omap2/mux.c
+++ b/trunk/arch/arm/mach-omap2/mux.c
@@ -605,7 +605,7 @@ static void __init omap_mux_dbg_create_entry(
list_for_each_entry(e, &partition->muxmodes, node) {
struct omap_mux *m = &e->mux;
- (void)debugfs_create_file(m->muxnames[0], S_IWUGO, mux_dbg_dir,
+ (void)debugfs_create_file(m->muxnames[0], S_IWUSR, mux_dbg_dir,
m, &omap_mux_dbg_signal_fops);
}
}
diff --git a/trunk/arch/arm/mach-omap2/pm-debug.c b/trunk/arch/arm/mach-omap2/pm-debug.c
index 125f56591fb5..a5a83b358ddd 100644
--- a/trunk/arch/arm/mach-omap2/pm-debug.c
+++ b/trunk/arch/arm/mach-omap2/pm-debug.c
@@ -637,14 +637,14 @@ static int __init pm_dbg_init(void)
}
- (void) debugfs_create_file("enable_off_mode", S_IRUGO | S_IWUGO, d,
+ (void) debugfs_create_file("enable_off_mode", S_IRUGO | S_IWUSR, d,
&enable_off_mode, &pm_dbg_option_fops);
- (void) debugfs_create_file("sleep_while_idle", S_IRUGO | S_IWUGO, d,
+ (void) debugfs_create_file("sleep_while_idle", S_IRUGO | S_IWUSR, d,
&sleep_while_idle, &pm_dbg_option_fops);
- (void) debugfs_create_file("wakeup_timer_seconds", S_IRUGO | S_IWUGO, d,
+ (void) debugfs_create_file("wakeup_timer_seconds", S_IRUGO | S_IWUSR, d,
&wakeup_timer_seconds, &pm_dbg_option_fops);
(void) debugfs_create_file("wakeup_timer_milliseconds",
- S_IRUGO | S_IWUGO, d, &wakeup_timer_milliseconds,
+ S_IRUGO | S_IWUSR, d, &wakeup_timer_milliseconds,
&pm_dbg_option_fops);
pm_dbg_init_done = 1;
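
For reference, a minimal sketch of the permission pattern the S_IWUGO -> S_IWUSR
changes above adopt (the helper and variable names are hypothetical, not taken
from the patch):

#include <linux/debugfs.h>
#include <linux/stat.h>
#include <linux/types.h>

static u32 enable_off_mode;

/* Expose a tunable that is world-readable but writable only by its owner
 * (0644) instead of world-writable (0666), which is what dropping the
 * group/other write bits from S_IWUGO achieves. */
static void pm_dbg_example_init(struct dentry *parent)
{
	debugfs_create_u32("enable_off_mode", S_IRUGO | S_IWUSR,
			   parent, &enable_off_mode);
}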
diff --git a/trunk/arch/arm/mach-omap2/prcm_mpu44xx.h b/trunk/arch/arm/mach-omap2/prcm_mpu44xx.h
index 729a644ce852..3300ff6e3cfe 100644
--- a/trunk/arch/arm/mach-omap2/prcm_mpu44xx.h
+++ b/trunk/arch/arm/mach-omap2/prcm_mpu44xx.h
@@ -38,8 +38,8 @@
#define OMAP4430_PRCM_MPU_CPU1_INST 0x0800
/* PRCM_MPU clockdomain register offsets (from instance start) */
-#define OMAP4430_PRCM_MPU_CPU0_MPU_CDOFFS 0x0000
-#define OMAP4430_PRCM_MPU_CPU1_MPU_CDOFFS 0x0000
+#define OMAP4430_PRCM_MPU_CPU0_MPU_CDOFFS 0x0018
+#define OMAP4430_PRCM_MPU_CPU1_MPU_CDOFFS 0x0018
/*
diff --git a/trunk/arch/arm/mach-omap2/smartreflex.c b/trunk/arch/arm/mach-omap2/smartreflex.c
index c37e823266d3..1a777e34d0c2 100644
--- a/trunk/arch/arm/mach-omap2/smartreflex.c
+++ b/trunk/arch/arm/mach-omap2/smartreflex.c
@@ -282,6 +282,7 @@ static int sr_late_init(struct omap_sr *sr_info)
dev_err(&sr_info->pdev->dev, "%s: ERROR in registering"
"interrupt handler. Smartreflex will"
"not function as desired\n", __func__);
+ kfree(name);
kfree(sr_info);
return ret;
}
@@ -879,7 +880,7 @@ static int __init omap_sr_probe(struct platform_device *pdev)
ret = sr_late_init(sr_info);
if (ret) {
pr_warning("%s: Error in SR late init\n", __func__);
- return ret;
+ goto err_release_region;
}
}
@@ -890,17 +891,20 @@ static int __init omap_sr_probe(struct platform_device *pdev)
* not try to create rest of the debugfs entries.
*/
vdd_dbg_dir = omap_voltage_get_dbgdir(sr_info->voltdm);
- if (!vdd_dbg_dir)
- return -EINVAL;
+ if (!vdd_dbg_dir) {
+ ret = -EINVAL;
+ goto err_release_region;
+ }
dbg_dir = debugfs_create_dir("smartreflex", vdd_dbg_dir);
if (IS_ERR(dbg_dir)) {
dev_err(&pdev->dev, "%s: Unable to create debugfs directory\n",
__func__);
- return PTR_ERR(dbg_dir);
+ ret = PTR_ERR(dbg_dir);
+ goto err_release_region;
}
- (void) debugfs_create_file("autocomp", S_IRUGO | S_IWUGO, dbg_dir,
+ (void) debugfs_create_file("autocomp", S_IRUGO | S_IWUSR, dbg_dir,
(void *)sr_info, &pm_sr_fops);
(void) debugfs_create_x32("errweight", S_IRUGO, dbg_dir,
&sr_info->err_weight);
@@ -913,7 +917,8 @@ static int __init omap_sr_probe(struct platform_device *pdev)
if (IS_ERR(nvalue_dir)) {
dev_err(&pdev->dev, "%s: Unable to create debugfs directory"
"for n-values\n", __func__);
- return PTR_ERR(nvalue_dir);
+ ret = PTR_ERR(nvalue_dir);
+ goto err_release_region;
}
omap_voltage_get_volttable(sr_info->voltdm, &volt_data);
@@ -922,24 +927,16 @@ static int __init omap_sr_probe(struct platform_device *pdev)
" corresponding vdd vdd_%s. Cannot create debugfs"
"entries for n-values\n",
__func__, sr_info->voltdm->name);
- return -ENODATA;
+ ret = -ENODATA;
+ goto err_release_region;
}
for (i = 0; i < sr_info->nvalue_count; i++) {
- char *name;
- char volt_name[32];
-
- name = kzalloc(NVALUE_NAME_LEN + 1, GFP_KERNEL);
- if (!name) {
- dev_err(&pdev->dev, "%s: Unable to allocate memory"
- " for n-value directory name\n", __func__);
- return -ENOMEM;
- }
+ char name[NVALUE_NAME_LEN + 1];
- strcpy(name, "volt_");
- sprintf(volt_name, "%d", volt_data[i].volt_nominal);
- strcat(name, volt_name);
- (void) debugfs_create_x32(name, S_IRUGO | S_IWUGO, nvalue_dir,
+ snprintf(name, sizeof(name), "volt_%d",
+ volt_data[i].volt_nominal);
+ (void) debugfs_create_x32(name, S_IRUGO | S_IWUSR, nvalue_dir,
&(sr_info->nvalue_table[i].nvalue));
}
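
A minimal sketch of the two patterns the smartreflex hunks above introduce,
with hypothetical names and values: every failure funnels through one unwind
label, and the per-iteration kzalloc/strcpy/strcat is replaced by a fixed-size
stack buffer filled with snprintf().

#include <linux/ioport.h>
#include <linux/kernel.h>
#include <linux/errno.h>

/* Stand-in for the real late-init step; used only for illustration. */
static int do_setup(void)
{
	return 0;
}

static int example_probe(resource_size_t start, resource_size_t size)
{
	char name[16];
	int ret;

	if (!request_mem_region(start, size, "example"))
		return -EBUSY;

	ret = do_setup();
	if (ret)
		goto err_release_region;	/* single unwind point */

	/* Stack buffer + snprintf(): nothing to kfree() on error and no
	 * possibility of overflowing the buffer. */
	snprintf(name, sizeof(name), "volt_%d", 1200000);

	return 0;

err_release_region:
	release_mem_region(start, size);
	return ret;
}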
diff --git a/trunk/arch/arm/mach-omap2/timer-gp.c b/trunk/arch/arm/mach-omap2/timer-gp.c
index 7b7c2683ae7b..0fc550e7e482 100644
--- a/trunk/arch/arm/mach-omap2/timer-gp.c
+++ b/trunk/arch/arm/mach-omap2/timer-gp.c
@@ -39,6 +39,7 @@
#include
#include
#include
+#include
#include "timer-gp.h"
@@ -190,6 +191,7 @@ static void __init omap2_gp_clocksource_init(void)
/*
* clocksource
*/
+static DEFINE_CLOCK_DATA(cd);
static struct omap_dm_timer *gpt_clocksource;
static cycle_t clocksource_read_cycles(struct clocksource *cs)
{
@@ -204,6 +206,15 @@ static struct clocksource clocksource_gpt = {
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
+static void notrace dmtimer_update_sched_clock(void)
+{
+ u32 cyc;
+
+ cyc = omap_dm_timer_read_counter(gpt_clocksource);
+
+ update_sched_clock(&cd, cyc, (u32)~0);
+}
+
/* Setup free-running counter for clocksource */
static void __init omap2_gp_clocksource_init(void)
{
@@ -224,6 +235,8 @@ static void __init omap2_gp_clocksource_init(void)
omap_dm_timer_set_load_start(gpt, 1, 0);
+ init_sched_clock(&cd, dmtimer_update_sched_clock, 32, tick_rate);
+
if (clocksource_register_hz(&clocksource_gpt, tick_rate))
printk(err2, clocksource_gpt.name);
}
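
The timer-gp hunk above hooks the free-running GP timer counter into the
2.6.38-era ARM sched_clock infrastructure. A sketch of that registration,
assuming the old DEFINE_CLOCK_DATA/init_sched_clock API shown in the hunk and
a hypothetical counter-read helper in place of omap_dm_timer_read_counter():

#include <linux/kernel.h>
#include <linux/init.h>
#include <asm/sched_clock.h>

static DEFINE_CLOCK_DATA(cd);

static u32 read_counter(void)
{
	return 0;	/* stand-in for the hardware counter read */
}

static void notrace example_update_sched_clock(void)
{
	/* Feed the current 32-bit counter value to the common code. */
	update_sched_clock(&cd, read_counter(), (u32)~0);
}

static void __init example_sched_clock_init(unsigned long tick_rate)
{
	/* Free-running 32-bit counter ticking at tick_rate Hz. */
	init_sched_clock(&cd, example_update_sched_clock, 32, tick_rate);
}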
diff --git a/trunk/arch/arm/mach-pxa/pxa25x.c b/trunk/arch/arm/mach-pxa/pxa25x.c
index fbc5b775f895..b166b1d845d7 100644
--- a/trunk/arch/arm/mach-pxa/pxa25x.c
+++ b/trunk/arch/arm/mach-pxa/pxa25x.c
@@ -347,6 +347,7 @@ static struct platform_device *pxa25x_devices[] __initdata = {
&pxa25x_device_assp,
&pxa25x_device_pwm0,
&pxa25x_device_pwm1,
+ &pxa_device_asoc_platform,
};
static struct sys_device pxa25x_sysdev[] = {
diff --git a/trunk/arch/arm/mach-pxa/tosa-bt.c b/trunk/arch/arm/mach-pxa/tosa-bt.c
index c31e601eb49c..b9b1e5c2b290 100644
--- a/trunk/arch/arm/mach-pxa/tosa-bt.c
+++ b/trunk/arch/arm/mach-pxa/tosa-bt.c
@@ -81,8 +81,6 @@ static int tosa_bt_probe(struct platform_device *dev)
goto err_rfk_alloc;
}
- rfkill_set_led_trigger_name(rfk, "tosa-bt");
-
rc = rfkill_register(rfk);
if (rc)
goto err_rfkill;
diff --git a/trunk/arch/arm/mach-pxa/tosa.c b/trunk/arch/arm/mach-pxa/tosa.c
index af152e70cfcf..f2582ec300d9 100644
--- a/trunk/arch/arm/mach-pxa/tosa.c
+++ b/trunk/arch/arm/mach-pxa/tosa.c
@@ -875,6 +875,11 @@ static struct platform_device sharpsl_rom_device = {
.dev.platform_data = &sharpsl_rom_data,
};
+static struct platform_device wm9712_device = {
+ .name = "wm9712-codec",
+ .id = -1,
+};
+
static struct platform_device *devices[] __initdata = {
&tosascoop_device,
&tosascoop_jc_device,
@@ -885,6 +890,7 @@ static struct platform_device *devices[] __initdata = {
&tosaled_device,
&tosa_bt_device,
&sharpsl_rom_device,
+ &wm9712_device,
};
static void tosa_poweroff(void)
diff --git a/trunk/arch/arm/mach-s3c2440/Kconfig b/trunk/arch/arm/mach-s3c2440/Kconfig
index a0cb2581894f..50825a3f91cc 100644
--- a/trunk/arch/arm/mach-s3c2440/Kconfig
+++ b/trunk/arch/arm/mach-s3c2440/Kconfig
@@ -99,6 +99,7 @@ config MACH_NEO1973_GTA02
select POWER_SUPPLY
select MACH_NEO1973
select S3C2410_PWM
+ select S3C_DEV_USB_HOST
help
Say Y here if you are using the Openmoko GTA02 / Freerunner GSM Phone
diff --git a/trunk/arch/arm/mach-s3c2440/include/mach/gta02.h b/trunk/arch/arm/mach-s3c2440/include/mach/gta02.h
index 953331d8d56a..3a56a229cac6 100644
--- a/trunk/arch/arm/mach-s3c2440/include/mach/gta02.h
+++ b/trunk/arch/arm/mach-s3c2440/include/mach/gta02.h
@@ -44,19 +44,19 @@
#define GTA02v3_GPIO_nUSB_FLT S3C2410_GPG(10) /* v3 + v4 only */
#define GTA02v3_GPIO_nGSM_OC S3C2410_GPG(11) /* v3 + v4 only */
-#define GTA02_GPIO_AMP_SHUT S3C2440_GPJ1 /* v2 + v3 + v4 only */
-#define GTA02v1_GPIO_WLAN_GPIO10 S3C2440_GPJ2
-#define GTA02_GPIO_HP_IN S3C2440_GPJ2 /* v2 + v3 + v4 only */
-#define GTA02_GPIO_INT0 S3C2440_GPJ3 /* v2 + v3 + v4 only */
-#define GTA02_GPIO_nGSM_EN S3C2440_GPJ4
-#define GTA02_GPIO_3D_RESET S3C2440_GPJ5
-#define GTA02_GPIO_nDL_GSM S3C2440_GPJ6 /* v4 + v5 only */
-#define GTA02_GPIO_WLAN_GPIO0 S3C2440_GPJ7
-#define GTA02v1_GPIO_BAT_ID S3C2440_GPJ8
-#define GTA02_GPIO_KEEPACT S3C2440_GPJ8
-#define GTA02v1_GPIO_HP_IN S3C2440_GPJ10
-#define GTA02_CHIP_PWD S3C2440_GPJ11 /* v2 + v3 + v4 only */
-#define GTA02_GPIO_nWLAN_RESET S3C2440_GPJ12 /* v2 + v3 + v4 only */
+#define GTA02_GPIO_AMP_SHUT S3C2410_GPJ(1) /* v2 + v3 + v4 only */
+#define GTA02v1_GPIO_WLAN_GPIO10 S3C2410_GPJ(2)
+#define GTA02_GPIO_HP_IN S3C2410_GPJ(2) /* v2 + v3 + v4 only */
+#define GTA02_GPIO_INT0 S3C2410_GPJ(3) /* v2 + v3 + v4 only */
+#define GTA02_GPIO_nGSM_EN S3C2410_GPJ(4)
+#define GTA02_GPIO_3D_RESET S3C2410_GPJ(5)
+#define GTA02_GPIO_nDL_GSM S3C2410_GPJ(6) /* v4 + v5 only */
+#define GTA02_GPIO_WLAN_GPIO0 S3C2410_GPJ(7)
+#define GTA02v1_GPIO_BAT_ID S3C2410_GPJ(8)
+#define GTA02_GPIO_KEEPACT S3C2410_GPJ(8)
+#define GTA02v1_GPIO_HP_IN S3C2410_GPJ(10)
+#define GTA02_CHIP_PWD S3C2410_GPJ(11) /* v2 + v3 + v4 only */
+#define GTA02_GPIO_nWLAN_RESET S3C2410_GPJ(12) /* v2 + v3 + v4 only */
#define GTA02_IRQ_GSENSOR_1 IRQ_EINT0
#define GTA02_IRQ_MODEM IRQ_EINT1
diff --git a/trunk/arch/arm/mach-s3c64xx/clock.c b/trunk/arch/arm/mach-s3c64xx/clock.c
index dd3782064508..fdfc4d5e37a1 100644
--- a/trunk/arch/arm/mach-s3c64xx/clock.c
+++ b/trunk/arch/arm/mach-s3c64xx/clock.c
@@ -150,6 +150,12 @@ static struct clk init_clocks_off[] = {
.parent = &clk_p,
.enable = s3c64xx_pclk_ctrl,
.ctrlbit = S3C_CLKCON_PCLK_IIC,
+ }, {
+ .name = "i2c",
+ .id = 1,
+ .parent = &clk_p,
+ .enable = s3c64xx_pclk_ctrl,
+ .ctrlbit = S3C6410_CLKCON_PCLK_I2C1,
}, {
.name = "iis",
.id = 0,
diff --git a/trunk/arch/arm/mach-s3c64xx/dma.c b/trunk/arch/arm/mach-s3c64xx/dma.c
index 135db1b41252..c35585cf8c4f 100644
--- a/trunk/arch/arm/mach-s3c64xx/dma.c
+++ b/trunk/arch/arm/mach-s3c64xx/dma.c
@@ -690,12 +690,12 @@ static int s3c64xx_dma_init1(int chno, enum dma_ch chbase,
regptr = regs + PL080_Cx_BASE(0);
- for (ch = 0; ch < 8; ch++, chno++, chptr++) {
- printk(KERN_INFO "%s: registering DMA %d (%p)\n",
- __func__, chno, regptr);
+ for (ch = 0; ch < 8; ch++, chptr++) {
+ pr_debug("%s: registering DMA %d (%p)\n",
+ __func__, chno + ch, regptr);
chptr->bit = 1 << ch;
- chptr->number = chno;
+ chptr->number = chno + ch;
chptr->dmac = dmac;
chptr->regs = regptr;
regptr += PL080_Cx_STRIDE;
@@ -704,7 +704,8 @@ static int s3c64xx_dma_init1(int chno, enum dma_ch chbase,
/* for the moment, permanently enable the controller */
writel(PL080_CONFIG_ENABLE, regs + PL080_CONFIG);
- printk(KERN_INFO "PL080: IRQ %d, at %p\n", irq, regs);
+ printk(KERN_INFO "PL080: IRQ %d, at %p, channels %d..%d\n",
+ irq, regs, chno, chno+8);
return 0;
diff --git a/trunk/arch/arm/mach-s3c64xx/gpiolib.c b/trunk/arch/arm/mach-s3c64xx/gpiolib.c
index fd99a82e82c4..92b09085caaa 100644
--- a/trunk/arch/arm/mach-s3c64xx/gpiolib.c
+++ b/trunk/arch/arm/mach-s3c64xx/gpiolib.c
@@ -72,7 +72,7 @@ static struct s3c_gpio_cfg gpio_4bit_cfg_eint0011 = {
.get_pull = s3c_gpio_getpull_updown,
};
-int s3c64xx_gpio2int_gpm(struct gpio_chip *chip, unsigned pin)
+static int s3c64xx_gpio2int_gpm(struct gpio_chip *chip, unsigned pin)
{
return pin < 5 ? IRQ_EINT(23) + pin : -ENXIO;
}
@@ -138,7 +138,7 @@ static struct s3c_gpio_chip gpio_4bit[] = {
},
};
-int s3c64xx_gpio2int_gpl(struct gpio_chip *chip, unsigned pin)
+static int s3c64xx_gpio2int_gpl(struct gpio_chip *chip, unsigned pin)
{
return pin >= 8 ? IRQ_EINT(16) + pin - 8 : -ENXIO;
}
diff --git a/trunk/arch/arm/mach-s3c64xx/mach-smdk6410.c b/trunk/arch/arm/mach-s3c64xx/mach-smdk6410.c
index e85192a86fbe..a80a3163dd30 100644
--- a/trunk/arch/arm/mach-s3c64xx/mach-smdk6410.c
+++ b/trunk/arch/arm/mach-s3c64xx/mach-smdk6410.c
@@ -28,6 +28,7 @@
#include
#include
#include
+#include
#ifdef CONFIG_SMDK6410_WM1190_EV1
#include
@@ -351,7 +352,7 @@ static struct regulator_init_data smdk6410_vddpll = {
/* VDD_UH_MMC, LDO5 on J5 */
static struct regulator_init_data smdk6410_vdduh_mmc = {
.constraints = {
- .name = "PVDD_UH/PVDD_MMC",
+ .name = "PVDD_UH+PVDD_MMC",
.always_on = 1,
},
};
@@ -417,7 +418,7 @@ static struct regulator_init_data smdk6410_vddaudio = {
/* S3C64xx internal logic & PLL */
static struct regulator_init_data wm8350_dcdc1_data = {
.constraints = {
- .name = "PVDD_INT/PVDD_PLL",
+ .name = "PVDD_INT+PVDD_PLL",
.min_uV = 1200000,
.max_uV = 1200000,
.always_on = 1,
@@ -452,7 +453,7 @@ static struct regulator_consumer_supply wm8350_dcdc4_consumers[] = {
static struct regulator_init_data wm8350_dcdc4_data = {
.constraints = {
- .name = "PVDD_HI/PVDD_EXT/PVDD_SYS/PVCCM2MTV",
+ .name = "PVDD_HI+PVDD_EXT+PVDD_SYS+PVCCM2MTV",
.min_uV = 3000000,
.max_uV = 3000000,
.always_on = 1,
@@ -464,7 +465,7 @@ static struct regulator_init_data wm8350_dcdc4_data = {
/* OTGi/1190-EV1 HPVDD & AVDD */
static struct regulator_init_data wm8350_ldo4_data = {
.constraints = {
- .name = "PVDD_OTGI/HPVDD/AVDD",
+ .name = "PVDD_OTGI+HPVDD+AVDD",
.min_uV = 1200000,
.max_uV = 1200000,
.apply_uV = 1,
@@ -552,7 +553,7 @@ static struct wm831x_backlight_pdata wm1192_backlight_pdata = {
static struct regulator_init_data wm1192_dcdc3 = {
.constraints = {
- .name = "PVDD_MEM/PVDD_GPS",
+ .name = "PVDD_MEM+PVDD_GPS",
.always_on = 1,
},
};
@@ -563,7 +564,7 @@ static struct regulator_consumer_supply wm1192_ldo1_consumers[] = {
static struct regulator_init_data wm1192_ldo1 = {
.constraints = {
- .name = "PVDD_LCD/PVDD_EXT",
+ .name = "PVDD_LCD+PVDD_EXT",
.always_on = 1,
},
.consumer_supplies = wm1192_ldo1_consumers,
diff --git a/trunk/arch/arm/mach-s3c64xx/setup-keypad.c b/trunk/arch/arm/mach-s3c64xx/setup-keypad.c
index f8ed0d22db70..1d4d0ee9e870 100644
--- a/trunk/arch/arm/mach-s3c64xx/setup-keypad.c
+++ b/trunk/arch/arm/mach-s3c64xx/setup-keypad.c
@@ -17,7 +17,7 @@
void samsung_keypad_cfg_gpio(unsigned int rows, unsigned int cols)
{
/* Set all the necessary GPK pins to special-function 3: KP_ROW[x] */
- s3c_gpio_cfgrange_nopull(S3C64XX_GPK(8), 8 + rows, S3C_GPIO_SFN(3));
+ s3c_gpio_cfgrange_nopull(S3C64XX_GPK(8), rows, S3C_GPIO_SFN(3));
/* Set all the necessary GPL pins to special-function 3: KP_COL[x] */
s3c_gpio_cfgrange_nopull(S3C64XX_GPL(0), cols, S3C_GPIO_SFN(3));
diff --git a/trunk/arch/arm/mach-s3c64xx/setup-sdhci.c b/trunk/arch/arm/mach-s3c64xx/setup-sdhci.c
index 1a942037c4ef..f344a222bc84 100644
--- a/trunk/arch/arm/mach-s3c64xx/setup-sdhci.c
+++ b/trunk/arch/arm/mach-s3c64xx/setup-sdhci.c
@@ -56,7 +56,7 @@ void s3c6400_setup_sdhci_cfg_card(struct platform_device *dev,
else
ctrl3 = (S3C_SDHCI_CTRL3_FCSEL1 | S3C_SDHCI_CTRL3_FCSEL0);
- printk(KERN_INFO "%s: CTRL 2=%08x, 3=%08x\n", __func__, ctrl2, ctrl3);
+ pr_debug("%s: CTRL 2=%08x, 3=%08x\n", __func__, ctrl2, ctrl3);
writel(ctrl2, r + S3C_SDHCI_CONTROL2);
writel(ctrl3, r + S3C_SDHCI_CONTROL3);
}
diff --git a/trunk/arch/arm/mach-s5p6442/include/mach/map.h b/trunk/arch/arm/mach-s5p6442/include/mach/map.h
index 203dd5a18bd5..058dab4482a1 100644
--- a/trunk/arch/arm/mach-s5p6442/include/mach/map.h
+++ b/trunk/arch/arm/mach-s5p6442/include/mach/map.h
@@ -1,6 +1,6 @@
/* linux/arch/arm/mach-s5p6442/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5P6442 - Memory map definitions
@@ -16,56 +16,61 @@
#include
#include
-#define S5P6442_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5P6442_PA_CHIPID
+#define S5P6442_PA_SDRAM 0x20000000
-#define S5P6442_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5P6442_PA_SYSCON
+#define S5P6442_PA_I2S0 0xC0B00000
+#define S5P6442_PA_I2S1 0xF2200000
-#define S5P6442_PA_GPIO (0xE0200000)
+#define S5P6442_PA_CHIPID 0xE0000000
-#define S5P6442_PA_VIC0 (0xE4000000)
-#define S5P6442_PA_VIC1 (0xE4100000)
-#define S5P6442_PA_VIC2 (0xE4200000)
+#define S5P6442_PA_SYSCON 0xE0100000
-#define S5P6442_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5P6442_PA_SROMC
+#define S5P6442_PA_GPIO 0xE0200000
-#define S5P6442_PA_MDMA 0xE8000000
-#define S5P6442_PA_PDMA 0xE9000000
+#define S5P6442_PA_VIC0 0xE4000000
+#define S5P6442_PA_VIC1 0xE4100000
+#define S5P6442_PA_VIC2 0xE4200000
-#define S5P6442_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5P6442_PA_TIMER
+#define S5P6442_PA_SROMC 0xE7000000
-#define S5P6442_PA_SYSTIMER (0xEA100000)
+#define S5P6442_PA_MDMA 0xE8000000
+#define S5P6442_PA_PDMA 0xE9000000
-#define S5P6442_PA_WATCHDOG (0xEA200000)
+#define S5P6442_PA_TIMER 0xEA000000
-#define S5P6442_PA_UART (0xEC000000)
+#define S5P6442_PA_SYSTIMER 0xEA100000
-#define S5P_PA_UART0 (S5P6442_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5P6442_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5P6442_PA_UART + 0x800)
-#define S5P_SZ_UART SZ_256
+#define S5P6442_PA_WATCHDOG 0xEA200000
-#define S5P6442_PA_IIC0 (0xEC100000)
+#define S5P6442_PA_UART 0xEC000000
-#define S5P6442_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5P6442_PA_SDRAM
+#define S5P6442_PA_IIC0 0xEC100000
#define S5P6442_PA_SPI 0xEC300000
-/* I2S */
-#define S5P6442_PA_I2S0 0xC0B00000
-#define S5P6442_PA_I2S1 0xF2200000
-
-/* PCM */
#define S5P6442_PA_PCM0 0xF2400000
#define S5P6442_PA_PCM1 0xF2500000
-/* compatibiltiy defines. */
+/* Compatibiltiy Defines */
+
+#define S3C_PA_IIC S5P6442_PA_IIC0
#define S3C_PA_WDT S5P6442_PA_WATCHDOG
+
+#define S5P_PA_CHIPID S5P6442_PA_CHIPID
+#define S5P_PA_SDRAM S5P6442_PA_SDRAM
+#define S5P_PA_SROMC S5P6442_PA_SROMC
+#define S5P_PA_SYSCON S5P6442_PA_SYSCON
+#define S5P_PA_TIMER S5P6442_PA_TIMER
+
+/* UART */
+
#define S3C_PA_UART S5P6442_PA_UART
-#define S3C_PA_IIC S5P6442_PA_IIC0
+
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
diff --git a/trunk/arch/arm/mach-s5p64x0/include/mach/gpio.h b/trunk/arch/arm/mach-s5p64x0/include/mach/gpio.h
index 5486c8f01f1d..adb5f298ead8 100644
--- a/trunk/arch/arm/mach-s5p64x0/include/mach/gpio.h
+++ b/trunk/arch/arm/mach-s5p64x0/include/mach/gpio.h
@@ -23,7 +23,7 @@
#define S5P6440_GPIO_A_NR (6)
#define S5P6440_GPIO_B_NR (7)
#define S5P6440_GPIO_C_NR (8)
-#define S5P6440_GPIO_F_NR (2)
+#define S5P6440_GPIO_F_NR (16)
#define S5P6440_GPIO_G_NR (7)
#define S5P6440_GPIO_H_NR (10)
#define S5P6440_GPIO_I_NR (16)
@@ -36,7 +36,7 @@
#define S5P6450_GPIO_B_NR (7)
#define S5P6450_GPIO_C_NR (8)
#define S5P6450_GPIO_D_NR (8)
-#define S5P6450_GPIO_F_NR (2)
+#define S5P6450_GPIO_F_NR (16)
#define S5P6450_GPIO_G_NR (14)
#define S5P6450_GPIO_H_NR (10)
#define S5P6450_GPIO_I_NR (16)
diff --git a/trunk/arch/arm/mach-s5p64x0/include/mach/map.h b/trunk/arch/arm/mach-s5p64x0/include/mach/map.h
index a9365e5ba614..95c91257c7ca 100644
--- a/trunk/arch/arm/mach-s5p64x0/include/mach/map.h
+++ b/trunk/arch/arm/mach-s5p64x0/include/mach/map.h
@@ -1,6 +1,6 @@
/* linux/arch/arm/mach-s5p64x0/include/mach/map.h
*
- * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2009-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* S5P64X0 - Memory map definitions
@@ -16,64 +16,46 @@
#include
#include
-#define S5P64X0_PA_SDRAM (0x20000000)
+#define S5P64X0_PA_SDRAM 0x20000000
-#define S5P64X0_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5P64X0_PA_CHIPID
-
-#define S5P64X0_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5P64X0_PA_SYSCON
-
-#define S5P64X0_PA_GPIO (0xE0308000)
-
-#define S5P64X0_PA_VIC0 (0xE4000000)
-#define S5P64X0_PA_VIC1 (0xE4100000)
+#define S5P64X0_PA_CHIPID 0xE0000000
-#define S5P64X0_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5P64X0_PA_SROMC
-
-#define S5P64X0_PA_PDMA (0xE9000000)
-
-#define S5P64X0_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5P64X0_PA_TIMER
+#define S5P64X0_PA_SYSCON 0xE0100000
-#define S5P64X0_PA_RTC (0xEA100000)
+#define S5P64X0_PA_GPIO 0xE0308000
-#define S5P64X0_PA_WDT (0xEA200000)
+#define S5P64X0_PA_VIC0 0xE4000000
+#define S5P64X0_PA_VIC1 0xE4100000
-#define S5P6440_PA_UART(x) (0xEC000000 + ((x) * S3C_UART_OFFSET))
-#define S5P6450_PA_UART(x) ((x < 5) ? (0xEC800000 + ((x) * S3C_UART_OFFSET)) : (0xEC000000))
+#define S5P64X0_PA_SROMC 0xE7000000
-#define S5P_PA_UART0 S5P6450_PA_UART(0)
-#define S5P_PA_UART1 S5P6450_PA_UART(1)
-#define S5P_PA_UART2 S5P6450_PA_UART(2)
-#define S5P_PA_UART3 S5P6450_PA_UART(3)
-#define S5P_PA_UART4 S5P6450_PA_UART(4)
-#define S5P_PA_UART5 S5P6450_PA_UART(5)
+#define S5P64X0_PA_PDMA 0xE9000000
-#define S5P_SZ_UART SZ_256
+#define S5P64X0_PA_TIMER 0xEA000000
+#define S5P64X0_PA_RTC 0xEA100000
+#define S5P64X0_PA_WDT 0xEA200000
-#define S5P6440_PA_IIC0 (0xEC104000)
-#define S5P6440_PA_IIC1 (0xEC20F000)
-#define S5P6450_PA_IIC0 (0xEC100000)
-#define S5P6450_PA_IIC1 (0xEC200000)
+#define S5P6440_PA_IIC0 0xEC104000
+#define S5P6440_PA_IIC1 0xEC20F000
+#define S5P6450_PA_IIC0 0xEC100000
+#define S5P6450_PA_IIC1 0xEC200000
-#define S5P64X0_PA_SPI0 (0xEC400000)
-#define S5P64X0_PA_SPI1 (0xEC500000)
+#define S5P64X0_PA_SPI0 0xEC400000
+#define S5P64X0_PA_SPI1 0xEC500000
-#define S5P64X0_PA_HSOTG (0xED100000)
+#define S5P64X0_PA_HSOTG 0xED100000
#define S5P64X0_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
-#define S5P64X0_PA_I2S (0xF2000000)
+#define S5P64X0_PA_I2S 0xF2000000
#define S5P6450_PA_I2S1 0xF2800000
#define S5P6450_PA_I2S2 0xF2900000
-#define S5P64X0_PA_PCM (0xF2100000)
+#define S5P64X0_PA_PCM 0xF2100000
-#define S5P64X0_PA_ADC (0xF3000000)
+#define S5P64X0_PA_ADC 0xF3000000
-/* compatibiltiy defines. */
+/* Compatibiltiy Defines */
#define S3C_PA_HSMMC0 S5P64X0_PA_HSMMC(0)
#define S3C_PA_HSMMC1 S5P64X0_PA_HSMMC(1)
@@ -83,6 +65,25 @@
#define S3C_PA_RTC S5P64X0_PA_RTC
#define S3C_PA_WDT S5P64X0_PA_WDT
+#define S5P_PA_CHIPID S5P64X0_PA_CHIPID
+#define S5P_PA_SROMC S5P64X0_PA_SROMC
+#define S5P_PA_SYSCON S5P64X0_PA_SYSCON
+#define S5P_PA_TIMER S5P64X0_PA_TIMER
+
#define SAMSUNG_PA_ADC S5P64X0_PA_ADC
+/* UART */
+
+#define S5P6440_PA_UART(x) (0xEC000000 + ((x) * S3C_UART_OFFSET))
+#define S5P6450_PA_UART(x) ((x < 5) ? (0xEC800000 + ((x) * S3C_UART_OFFSET)) : (0xEC000000))
+
+#define S5P_PA_UART0 S5P6450_PA_UART(0)
+#define S5P_PA_UART1 S5P6450_PA_UART(1)
+#define S5P_PA_UART2 S5P6450_PA_UART(2)
+#define S5P_PA_UART3 S5P6450_PA_UART(3)
+#define S5P_PA_UART4 S5P6450_PA_UART(4)
+#define S5P_PA_UART5 S5P6450_PA_UART(5)
+
+#define S5P_SZ_UART SZ_256
+
#endif /* __ASM_ARCH_MAP_H */
diff --git a/trunk/arch/arm/mach-s5pc100/include/mach/map.h b/trunk/arch/arm/mach-s5pc100/include/mach/map.h
index 328467b346aa..ccbe6b767f7d 100644
--- a/trunk/arch/arm/mach-s5pc100/include/mach/map.h
+++ b/trunk/arch/arm/mach-s5pc100/include/mach/map.h
@@ -1,4 +1,7 @@
/* linux/arch/arm/mach-s5pc100/include/mach/map.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
*
* Copyright 2009 Samsung Electronics Co.
* Byungho Min
@@ -16,145 +19,115 @@
#include
#include
-/*
- * map-base.h has already defined virtual memory address
- * S3C_VA_IRQ S3C_ADDR(0x00000000) irq controller(s)
- * S3C_VA_SYS S3C_ADDR(0x00100000) system control
- * S3C_VA_MEM S3C_ADDR(0x00200000) system control (not used)
- * S3C_VA_TIMER S3C_ADDR(0x00300000) timer block
- * S3C_VA_WATCHDOG S3C_ADDR(0x00400000) watchdog
- * S3C_VA_UART S3C_ADDR(0x01000000) UART
- *
- * S5PC100 specific virtual memory address can be defined here
- * S5PC1XX_VA_GPIO S3C_ADDR(0x00500000) GPIO
- *
- */
+#define S5PC100_PA_SDRAM 0x20000000
+
+#define S5PC100_PA_ONENAND 0xE7100000
+#define S5PC100_PA_ONENAND_BUF 0xB0000000
+
+#define S5PC100_PA_CHIPID 0xE0000000
-#define S5PC100_PA_ONENAND_BUF (0xB0000000)
-#define S5PC100_SZ_ONENAND_BUF (SZ_256M - SZ_32M)
+#define S5PC100_PA_SYSCON 0xE0100000
-/* Chip ID */
+#define S5PC100_PA_OTHERS 0xE0200000
-#define S5PC100_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5PC100_PA_CHIPID
+#define S5PC100_PA_GPIO 0xE0300000
-#define S5PC100_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5PC100_PA_SYSCON
+#define S5PC100_PA_VIC0 0xE4000000
+#define S5PC100_PA_VIC1 0xE4100000
+#define S5PC100_PA_VIC2 0xE4200000
-#define S5PC100_PA_OTHERS (0xE0200000)
-#define S5PC100_VA_OTHERS (S3C_VA_SYS + 0x10000)
+#define S5PC100_PA_SROMC 0xE7000000
-#define S5PC100_PA_GPIO (0xE0300000)
-#define S5PC1XX_VA_GPIO S3C_ADDR(0x00500000)
+#define S5PC100_PA_CFCON 0xE7800000
-/* Interrupt */
-#define S5PC100_PA_VIC0 (0xE4000000)
-#define S5PC100_PA_VIC1 (0xE4100000)
-#define S5PC100_PA_VIC2 (0xE4200000)
-#define S5PC100_VA_VIC S3C_VA_IRQ
-#define S5PC100_VA_VIC_OFFSET 0x10000
-#define S5PC1XX_VA_VIC(x) (S5PC100_VA_VIC + ((x) * S5PC100_VA_VIC_OFFSET))
+#define S5PC100_PA_MDMA 0xE8100000
+#define S5PC100_PA_PDMA0 0xE9000000
+#define S5PC100_PA_PDMA1 0xE9200000
-#define S5PC100_PA_SROMC (0xE7000000)
-#define S5P_PA_SROMC S5PC100_PA_SROMC
+#define S5PC100_PA_TIMER 0xEA000000
+#define S5PC100_PA_SYSTIMER 0xEA100000
+#define S5PC100_PA_WATCHDOG 0xEA200000
+#define S5PC100_PA_RTC 0xEA300000
-#define S5PC100_PA_ONENAND (0xE7100000)
+#define S5PC100_PA_UART 0xEC000000
-#define S5PC100_PA_CFCON (0xE7800000)
+#define S5PC100_PA_IIC0 0xEC100000
+#define S5PC100_PA_IIC1 0xEC200000
-/* DMA */
-#define S5PC100_PA_MDMA (0xE8100000)
-#define S5PC100_PA_PDMA0 (0xE9000000)
-#define S5PC100_PA_PDMA1 (0xE9200000)
+#define S5PC100_PA_SPI0 0xEC300000
+#define S5PC100_PA_SPI1 0xEC400000
+#define S5PC100_PA_SPI2 0xEC500000
-/* Timer */
-#define S5PC100_PA_TIMER (0xEA000000)
-#define S5P_PA_TIMER S5PC100_PA_TIMER
+#define S5PC100_PA_USB_HSOTG 0xED200000
+#define S5PC100_PA_USB_HSPHY 0xED300000
-#define S5PC100_PA_SYSTIMER (0xEA100000)
+#define S5PC100_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
-#define S5PC100_PA_WATCHDOG (0xEA200000)
-#define S5PC100_PA_RTC (0xEA300000)
+#define S5PC100_PA_FB 0xEE000000
-#define S5PC100_PA_UART (0xEC000000)
+#define S5PC100_PA_FIMC0 0xEE200000
+#define S5PC100_PA_FIMC1 0xEE300000
+#define S5PC100_PA_FIMC2 0xEE400000
-#define S5P_PA_UART0 (S5PC100_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5PC100_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5PC100_PA_UART + 0x800)
-#define S5P_PA_UART3 (S5PC100_PA_UART + 0xC00)
-#define S5P_SZ_UART SZ_256
+#define S5PC100_PA_I2S0 0xF2000000
+#define S5PC100_PA_I2S1 0xF2100000
+#define S5PC100_PA_I2S2 0xF2200000
-#define S5PC100_PA_IIC0 (0xEC100000)
-#define S5PC100_PA_IIC1 (0xEC200000)
+#define S5PC100_PA_AC97 0xF2300000
-/* SPI */
-#define S5PC100_PA_SPI0 0xEC300000
-#define S5PC100_PA_SPI1 0xEC400000
-#define S5PC100_PA_SPI2 0xEC500000
+#define S5PC100_PA_PCM0 0xF2400000
+#define S5PC100_PA_PCM1 0xF2500000
-/* USB HS OTG */
-#define S5PC100_PA_USB_HSOTG (0xED200000)
-#define S5PC100_PA_USB_HSPHY (0xED300000)
+#define S5PC100_PA_SPDIF 0xF2600000
-#define S5PC100_PA_FB (0xEE000000)
+#define S5PC100_PA_TSADC 0xF3000000
-#define S5PC100_PA_FIMC0 (0xEE200000)
-#define S5PC100_PA_FIMC1 (0xEE300000)
-#define S5PC100_PA_FIMC2 (0xEE400000)
+#define S5PC100_PA_KEYPAD 0xF3100000
-#define S5PC100_PA_I2S0 (0xF2000000)
-#define S5PC100_PA_I2S1 (0xF2100000)
-#define S5PC100_PA_I2S2 (0xF2200000)
+/* Compatibiltiy Defines */
-#define S5PC100_PA_AC97 0xF2300000
+#define S3C_PA_FB S5PC100_PA_FB
+#define S3C_PA_HSMMC0 S5PC100_PA_HSMMC(0)
+#define S3C_PA_HSMMC1 S5PC100_PA_HSMMC(1)
+#define S3C_PA_HSMMC2 S5PC100_PA_HSMMC(2)
+#define S3C_PA_IIC S5PC100_PA_IIC0
+#define S3C_PA_IIC1 S5PC100_PA_IIC1
+#define S3C_PA_KEYPAD S5PC100_PA_KEYPAD
+#define S3C_PA_ONENAND S5PC100_PA_ONENAND
+#define S3C_PA_ONENAND_BUF S5PC100_PA_ONENAND_BUF
+#define S3C_PA_RTC S5PC100_PA_RTC
+#define S3C_PA_TSADC S5PC100_PA_TSADC
+#define S3C_PA_USB_HSOTG S5PC100_PA_USB_HSOTG
+#define S3C_PA_USB_HSPHY S5PC100_PA_USB_HSPHY
+#define S3C_PA_WDT S5PC100_PA_WATCHDOG
-/* PCM */
-#define S5PC100_PA_PCM0 0xF2400000
-#define S5PC100_PA_PCM1 0xF2500000
+#define S5P_PA_CHIPID S5PC100_PA_CHIPID
+#define S5P_PA_FIMC0 S5PC100_PA_FIMC0
+#define S5P_PA_FIMC1 S5PC100_PA_FIMC1
+#define S5P_PA_FIMC2 S5PC100_PA_FIMC2
+#define S5P_PA_SDRAM S5PC100_PA_SDRAM
+#define S5P_PA_SROMC S5PC100_PA_SROMC
+#define S5P_PA_SYSCON S5PC100_PA_SYSCON
+#define S5P_PA_TIMER S5PC100_PA_TIMER
-#define S5PC100_PA_SPDIF 0xF2600000
+#define SAMSUNG_PA_ADC S5PC100_PA_TSADC
+#define SAMSUNG_PA_CFCON S5PC100_PA_CFCON
+#define SAMSUNG_PA_KEYPAD S5PC100_PA_KEYPAD
-#define S5PC100_PA_TSADC (0xF3000000)
+#define S5PC100_VA_OTHERS (S3C_VA_SYS + 0x10000)
-/* KEYPAD */
-#define S5PC100_PA_KEYPAD (0xF3100000)
+#define S3C_SZ_ONENAND_BUF (SZ_256M - SZ_32M)
-#define S5PC100_PA_HSMMC(x) (0xED800000 + ((x) * 0x100000))
+/* UART */
-#define S5PC100_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5PC100_PA_SDRAM
+#define S3C_PA_UART S5PC100_PA_UART
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PC100_PA_UART
-#define S3C_PA_IIC S5PC100_PA_IIC0
-#define S3C_PA_IIC1 S5PC100_PA_IIC1
-#define S3C_PA_FB S5PC100_PA_FB
-#define S3C_PA_G2D S5PC100_PA_G2D
-#define S3C_PA_G3D S5PC100_PA_G3D
-#define S3C_PA_JPEG S5PC100_PA_JPEG
-#define S3C_PA_ROTATOR S5PC100_PA_ROTATOR
-#define S5P_VA_VIC0 S5PC1XX_VA_VIC(0)
-#define S5P_VA_VIC1 S5PC1XX_VA_VIC(1)
-#define S5P_VA_VIC2 S5PC1XX_VA_VIC(2)
-#define S3C_PA_USB_HSOTG S5PC100_PA_USB_HSOTG
-#define S3C_PA_USB_HSPHY S5PC100_PA_USB_HSPHY
-#define S3C_PA_HSMMC0 S5PC100_PA_HSMMC(0)
-#define S3C_PA_HSMMC1 S5PC100_PA_HSMMC(1)
-#define S3C_PA_HSMMC2 S5PC100_PA_HSMMC(2)
-#define S3C_PA_KEYPAD S5PC100_PA_KEYPAD
-#define S3C_PA_WDT S5PC100_PA_WATCHDOG
-#define S3C_PA_TSADC S5PC100_PA_TSADC
-#define S3C_PA_ONENAND S5PC100_PA_ONENAND
-#define S3C_PA_ONENAND_BUF S5PC100_PA_ONENAND_BUF
-#define S3C_SZ_ONENAND_BUF S5PC100_SZ_ONENAND_BUF
-#define S3C_PA_RTC S5PC100_PA_RTC
-
-#define SAMSUNG_PA_ADC S5PC100_PA_TSADC
-#define SAMSUNG_PA_CFCON S5PC100_PA_CFCON
-#define SAMSUNG_PA_KEYPAD S5PC100_PA_KEYPAD
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
-#define S5P_PA_FIMC0 S5PC100_PA_FIMC0
-#define S5P_PA_FIMC1 S5PC100_PA_FIMC1
-#define S5P_PA_FIMC2 S5PC100_PA_FIMC2
+#define S5P_SZ_UART SZ_256
-#endif /* __ASM_ARCH_C100_MAP_H */
+#endif /* __ASM_ARCH_MAP_H */
diff --git a/trunk/arch/arm/mach-s5pv210/include/mach/map.h b/trunk/arch/arm/mach-s5pv210/include/mach/map.h
index 3611492ad681..1dd58836fd4f 100644
--- a/trunk/arch/arm/mach-s5pv210/include/mach/map.h
+++ b/trunk/arch/arm/mach-s5pv210/include/mach/map.h
@@ -1,6 +1,6 @@
/* linux/arch/arm/mach-s5pv210/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5PV210 - Memory map definitions
@@ -16,122 +16,120 @@
#include
#include
-#define S5PV210_PA_SROM_BANK5 (0xA8000000)
+#define S5PV210_PA_SDRAM 0x20000000
-#define S5PC110_PA_ONENAND (0xB0000000)
-#define S5P_PA_ONENAND S5PC110_PA_ONENAND
+#define S5PV210_PA_SROM_BANK5 0xA8000000
-#define S5PC110_PA_ONENAND_DMA (0xB0600000)
-#define S5P_PA_ONENAND_DMA S5PC110_PA_ONENAND_DMA
+#define S5PC110_PA_ONENAND 0xB0000000
+#define S5PC110_PA_ONENAND_DMA 0xB0600000
-#define S5PV210_PA_CHIPID (0xE0000000)
-#define S5P_PA_CHIPID S5PV210_PA_CHIPID
+#define S5PV210_PA_CHIPID 0xE0000000
-#define S5PV210_PA_SYSCON (0xE0100000)
-#define S5P_PA_SYSCON S5PV210_PA_SYSCON
+#define S5PV210_PA_SYSCON 0xE0100000
-#define S5PV210_PA_GPIO (0xE0200000)
+#define S5PV210_PA_GPIO 0xE0200000
-/* SPI */
-#define S5PV210_PA_SPI0 0xE1300000
-#define S5PV210_PA_SPI1 0xE1400000
+#define S5PV210_PA_SPDIF 0xE1100000
-#define S5PV210_PA_KEYPAD (0xE1600000)
+#define S5PV210_PA_SPI0 0xE1300000
+#define S5PV210_PA_SPI1 0xE1400000
-#define S5PV210_PA_IIC0 (0xE1800000)
-#define S5PV210_PA_IIC1 (0xFAB00000)
-#define S5PV210_PA_IIC2 (0xE1A00000)
+#define S5PV210_PA_KEYPAD 0xE1600000
-#define S5PV210_PA_TIMER (0xE2500000)
-#define S5P_PA_TIMER S5PV210_PA_TIMER
+#define S5PV210_PA_ADC 0xE1700000
-#define S5PV210_PA_SYSTIMER (0xE2600000)
+#define S5PV210_PA_IIC0 0xE1800000
+#define S5PV210_PA_IIC1 0xFAB00000
+#define S5PV210_PA_IIC2 0xE1A00000
-#define S5PV210_PA_WATCHDOG (0xE2700000)
+#define S5PV210_PA_AC97 0xE2200000
-#define S5PV210_PA_RTC (0xE2800000)
-#define S5PV210_PA_UART (0xE2900000)
+#define S5PV210_PA_PCM0 0xE2300000
+#define S5PV210_PA_PCM1 0xE1200000
+#define S5PV210_PA_PCM2 0xE2B00000
-#define S5P_PA_UART0 (S5PV210_PA_UART + 0x0)
-#define S5P_PA_UART1 (S5PV210_PA_UART + 0x400)
-#define S5P_PA_UART2 (S5PV210_PA_UART + 0x800)
-#define S5P_PA_UART3 (S5PV210_PA_UART + 0xC00)
+#define S5PV210_PA_TIMER 0xE2500000
+#define S5PV210_PA_SYSTIMER 0xE2600000
+#define S5PV210_PA_WATCHDOG 0xE2700000
+#define S5PV210_PA_RTC 0xE2800000
-#define S5P_SZ_UART SZ_256
+#define S5PV210_PA_UART 0xE2900000
-#define S3C_VA_UARTx(x) (S3C_VA_UART + ((x) * S3C_UART_OFFSET))
+#define S5PV210_PA_SROMC 0xE8000000
-#define S5PV210_PA_SROMC (0xE8000000)
-#define S5P_PA_SROMC S5PV210_PA_SROMC
+#define S5PV210_PA_CFCON 0xE8200000
-#define S5PV210_PA_CFCON (0xE8200000)
+#define S5PV210_PA_HSMMC(x) (0xEB000000 + ((x) * 0x100000))
-#define S5PV210_PA_MDMA 0xFA200000
-#define S5PV210_PA_PDMA0 0xE0900000
-#define S5PV210_PA_PDMA1 0xE0A00000
+#define S5PV210_PA_HSOTG 0xEC000000
+#define S5PV210_PA_HSPHY 0xEC100000
-#define S5PV210_PA_FB (0xF8000000)
+#define S5PV210_PA_IIS0 0xEEE30000
+#define S5PV210_PA_IIS1 0xE2100000
+#define S5PV210_PA_IIS2 0xE2A00000
-#define S5PV210_PA_FIMC0 (0xFB200000)
-#define S5PV210_PA_FIMC1 (0xFB300000)
-#define S5PV210_PA_FIMC2 (0xFB400000)
+#define S5PV210_PA_DMC0 0xF0000000
+#define S5PV210_PA_DMC1 0xF1400000
-#define S5PV210_PA_HSMMC(x) (0xEB000000 + ((x) * 0x100000))
+#define S5PV210_PA_VIC0 0xF2000000
+#define S5PV210_PA_VIC1 0xF2100000
+#define S5PV210_PA_VIC2 0xF2200000
+#define S5PV210_PA_VIC3 0xF2300000
-#define S5PV210_PA_HSOTG (0xEC000000)
-#define S5PV210_PA_HSPHY (0xEC100000)
+#define S5PV210_PA_FB 0xF8000000
-#define S5PV210_PA_VIC0 (0xF2000000)
-#define S5PV210_PA_VIC1 (0xF2100000)
-#define S5PV210_PA_VIC2 (0xF2200000)
-#define S5PV210_PA_VIC3 (0xF2300000)
+#define S5PV210_PA_MDMA 0xFA200000
+#define S5PV210_PA_PDMA0 0xE0900000
+#define S5PV210_PA_PDMA1 0xE0A00000
-#define S5PV210_PA_SDRAM (0x20000000)
-#define S5P_PA_SDRAM S5PV210_PA_SDRAM
+#define S5PV210_PA_MIPI_CSIS 0xFA600000
-/* S/PDIF */
-#define S5PV210_PA_SPDIF 0xE1100000
+#define S5PV210_PA_FIMC0 0xFB200000
+#define S5PV210_PA_FIMC1 0xFB300000
+#define S5PV210_PA_FIMC2 0xFB400000
-/* I2S */
-#define S5PV210_PA_IIS0 0xEEE30000
-#define S5PV210_PA_IIS1 0xE2100000
-#define S5PV210_PA_IIS2 0xE2A00000
+/* Compatibiltiy Defines */
-/* PCM */
-#define S5PV210_PA_PCM0 0xE2300000
-#define S5PV210_PA_PCM1 0xE1200000
-#define S5PV210_PA_PCM2 0xE2B00000
+#define S3C_PA_FB S5PV210_PA_FB
+#define S3C_PA_HSMMC0 S5PV210_PA_HSMMC(0)
+#define S3C_PA_HSMMC1 S5PV210_PA_HSMMC(1)
+#define S3C_PA_HSMMC2 S5PV210_PA_HSMMC(2)
+#define S3C_PA_HSMMC3 S5PV210_PA_HSMMC(3)
+#define S3C_PA_IIC S5PV210_PA_IIC0
+#define S3C_PA_IIC1 S5PV210_PA_IIC1
+#define S3C_PA_IIC2 S5PV210_PA_IIC2
+#define S3C_PA_RTC S5PV210_PA_RTC
+#define S3C_PA_USB_HSOTG S5PV210_PA_HSOTG
+#define S3C_PA_WDT S5PV210_PA_WATCHDOG
-/* AC97 */
-#define S5PV210_PA_AC97 0xE2200000
+#define S5P_PA_CHIPID S5PV210_PA_CHIPID
+#define S5P_PA_FIMC0 S5PV210_PA_FIMC0
+#define S5P_PA_FIMC1 S5PV210_PA_FIMC1
+#define S5P_PA_FIMC2 S5PV210_PA_FIMC2
+#define S5P_PA_MIPI_CSIS0 S5PV210_PA_MIPI_CSIS
+#define S5P_PA_ONENAND S5PC110_PA_ONENAND
+#define S5P_PA_ONENAND_DMA S5PC110_PA_ONENAND_DMA
+#define S5P_PA_SDRAM S5PV210_PA_SDRAM
+#define S5P_PA_SROMC S5PV210_PA_SROMC
+#define S5P_PA_SYSCON S5PV210_PA_SYSCON
+#define S5P_PA_TIMER S5PV210_PA_TIMER
-#define S5PV210_PA_ADC (0xE1700000)
+#define SAMSUNG_PA_ADC S5PV210_PA_ADC
+#define SAMSUNG_PA_CFCON S5PV210_PA_CFCON
+#define SAMSUNG_PA_KEYPAD S5PV210_PA_KEYPAD
-#define S5PV210_PA_DMC0 (0xF0000000)
-#define S5PV210_PA_DMC1 (0xF1400000)
+/* UART */
-#define S5PV210_PA_MIPI_CSIS 0xFA600000
+#define S3C_VA_UARTx(x) (S3C_VA_UART + ((x) * S3C_UART_OFFSET))
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PV210_PA_UART
-#define S3C_PA_HSMMC0 S5PV210_PA_HSMMC(0)
-#define S3C_PA_HSMMC1 S5PV210_PA_HSMMC(1)
-#define S3C_PA_HSMMC2 S5PV210_PA_HSMMC(2)
-#define S3C_PA_HSMMC3 S5PV210_PA_HSMMC(3)
-#define S3C_PA_IIC S5PV210_PA_IIC0
-#define S3C_PA_IIC1 S5PV210_PA_IIC1
-#define S3C_PA_IIC2 S5PV210_PA_IIC2
-#define S3C_PA_FB S5PV210_PA_FB
-#define S3C_PA_RTC S5PV210_PA_RTC
-#define S3C_PA_WDT S5PV210_PA_WATCHDOG
-#define S3C_PA_USB_HSOTG S5PV210_PA_HSOTG
-#define S5P_PA_FIMC0 S5PV210_PA_FIMC0
-#define S5P_PA_FIMC1 S5PV210_PA_FIMC1
-#define S5P_PA_FIMC2 S5PV210_PA_FIMC2
-#define S5P_PA_MIPI_CSIS0 S5PV210_PA_MIPI_CSIS
+#define S3C_PA_UART S5PV210_PA_UART
-#define SAMSUNG_PA_ADC S5PV210_PA_ADC
-#define SAMSUNG_PA_CFCON S5PV210_PA_CFCON
-#define SAMSUNG_PA_KEYPAD S5PV210_PA_KEYPAD
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
diff --git a/trunk/arch/arm/mach-s5pv210/mach-aquila.c b/trunk/arch/arm/mach-s5pv210/mach-aquila.c
index 461aa035afc0..557add4fc56c 100644
--- a/trunk/arch/arm/mach-s5pv210/mach-aquila.c
+++ b/trunk/arch/arm/mach-s5pv210/mach-aquila.c
@@ -149,7 +149,7 @@ static struct regulator_init_data aquila_ldo2_data = {
static struct regulator_init_data aquila_ldo3_data = {
.constraints = {
- .name = "VUSB/MIPI_1.1V",
+ .name = "VUSB+MIPI_1.1V",
.min_uV = 1100000,
.max_uV = 1100000,
.apply_uV = 1,
@@ -197,7 +197,7 @@ static struct regulator_init_data aquila_ldo7_data = {
static struct regulator_init_data aquila_ldo8_data = {
.constraints = {
- .name = "VUSB/VADC_3.3V",
+ .name = "VUSB+VADC_3.3V",
.min_uV = 3300000,
.max_uV = 3300000,
.apply_uV = 1,
@@ -207,7 +207,7 @@ static struct regulator_init_data aquila_ldo8_data = {
static struct regulator_init_data aquila_ldo9_data = {
.constraints = {
- .name = "VCC/VCAM_2.8V",
+ .name = "VCC+VCAM_2.8V",
.min_uV = 2800000,
.max_uV = 2800000,
.apply_uV = 1,
@@ -381,9 +381,12 @@ static struct max8998_platform_data aquila_max8998_pdata = {
.buck1_set1 = S5PV210_GPH0(3),
.buck1_set2 = S5PV210_GPH0(4),
.buck2_set3 = S5PV210_GPH0(5),
- .buck1_max_voltage1 = 1200000,
- .buck1_max_voltage2 = 1200000,
- .buck2_max_voltage = 1200000,
+ .buck1_voltage1 = 1200000,
+ .buck1_voltage2 = 1200000,
+ .buck1_voltage3 = 1200000,
+ .buck1_voltage4 = 1200000,
+ .buck2_voltage1 = 1200000,
+ .buck2_voltage2 = 1200000,
};
#endif
diff --git a/trunk/arch/arm/mach-s5pv210/mach-goni.c b/trunk/arch/arm/mach-s5pv210/mach-goni.c
index e22d5112fd44..056f5c769b0a 100644
--- a/trunk/arch/arm/mach-s5pv210/mach-goni.c
+++ b/trunk/arch/arm/mach-s5pv210/mach-goni.c
@@ -288,7 +288,7 @@ static struct regulator_init_data goni_ldo2_data = {
static struct regulator_init_data goni_ldo3_data = {
.constraints = {
- .name = "VUSB/MIPI_1.1V",
+ .name = "VUSB+MIPI_1.1V",
.min_uV = 1100000,
.max_uV = 1100000,
.apply_uV = 1,
@@ -337,7 +337,7 @@ static struct regulator_init_data goni_ldo7_data = {
static struct regulator_init_data goni_ldo8_data = {
.constraints = {
- .name = "VUSB/VADC_3.3V",
+ .name = "VUSB+VADC_3.3V",
.min_uV = 3300000,
.max_uV = 3300000,
.apply_uV = 1,
@@ -347,7 +347,7 @@ static struct regulator_init_data goni_ldo8_data = {
static struct regulator_init_data goni_ldo9_data = {
.constraints = {
- .name = "VCC/VCAM_2.8V",
+ .name = "VCC+VCAM_2.8V",
.min_uV = 2800000,
.max_uV = 2800000,
.apply_uV = 1,
@@ -521,9 +521,12 @@ static struct max8998_platform_data goni_max8998_pdata = {
.buck1_set1 = S5PV210_GPH0(3),
.buck1_set2 = S5PV210_GPH0(4),
.buck2_set3 = S5PV210_GPH0(5),
- .buck1_max_voltage1 = 1200000,
- .buck1_max_voltage2 = 1200000,
- .buck2_max_voltage = 1200000,
+ .buck1_voltage1 = 1200000,
+ .buck1_voltage2 = 1200000,
+ .buck1_voltage3 = 1200000,
+ .buck1_voltage4 = 1200000,
+ .buck2_voltage1 = 1200000,
+ .buck2_voltage2 = 1200000,
};
#endif
diff --git a/trunk/arch/arm/mach-s5pv310/include/mach/map.h b/trunk/arch/arm/mach-s5pv310/include/mach/map.h
index 3060f78e12ab..901657fa7a12 100644
--- a/trunk/arch/arm/mach-s5pv310/include/mach/map.h
+++ b/trunk/arch/arm/mach-s5pv310/include/mach/map.h
@@ -1,6 +1,6 @@
/* linux/arch/arm/mach-s5pv310/include/mach/map.h
*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* S5PV310 - Memory map definitions
@@ -23,90 +23,43 @@
#include
-#define S5PV310_PA_SYSRAM (0x02025000)
+#define S5PV310_PA_SYSRAM 0x02025000
-#define S5PV310_PA_SROM_BANK(x) (0x04000000 + ((x) * 0x01000000))
-
-#define S5PC210_PA_ONENAND (0x0C000000)
-#define S5P_PA_ONENAND S5PC210_PA_ONENAND
-
-#define S5PC210_PA_ONENAND_DMA (0x0C600000)
-#define S5P_PA_ONENAND_DMA S5PC210_PA_ONENAND_DMA
-
-#define S5PV310_PA_CHIPID (0x10000000)
-#define S5P_PA_CHIPID S5PV310_PA_CHIPID
-
-#define S5PV310_PA_SYSCON (0x10010000)
-#define S5P_PA_SYSCON S5PV310_PA_SYSCON
+#define S5PV310_PA_I2S0 0x03830000
+#define S5PV310_PA_I2S1 0xE3100000
+#define S5PV310_PA_I2S2 0xE2A00000
-#define S5PV310_PA_PMU (0x10020000)
+#define S5PV310_PA_PCM0 0x03840000
+#define S5PV310_PA_PCM1 0x13980000
+#define S5PV310_PA_PCM2 0x13990000
-#define S5PV310_PA_CMU (0x10030000)
-
-#define S5PV310_PA_WATCHDOG (0x10060000)
-#define S5PV310_PA_RTC (0x10070000)
-
-#define S5PV310_PA_DMC0 (0x10400000)
-
-#define S5PV310_PA_COMBINER (0x10448000)
-
-#define S5PV310_PA_COREPERI (0x10500000)
-#define S5PV310_PA_GIC_CPU (0x10500100)
-#define S5PV310_PA_TWD (0x10500600)
-#define S5PV310_PA_GIC_DIST (0x10501000)
-#define S5PV310_PA_L2CC (0x10502000)
-
-/* DMA */
-#define S5PV310_PA_MDMA 0x10810000
-#define S5PV310_PA_PDMA0 0x12680000
-#define S5PV310_PA_PDMA1 0x12690000
-
-#define S5PV310_PA_GPIO1 (0x11400000)
-#define S5PV310_PA_GPIO2 (0x11000000)
-#define S5PV310_PA_GPIO3 (0x03860000)
-
-#define S5PV310_PA_MIPI_CSIS0 0x11880000
-#define S5PV310_PA_MIPI_CSIS1 0x11890000
+#define S5PV310_PA_SROM_BANK(x) (0x04000000 + ((x) * 0x01000000))
-#define S5PV310_PA_HSMMC(x) (0x12510000 + ((x) * 0x10000))
+#define S5PC210_PA_ONENAND 0x0C000000
+#define S5PC210_PA_ONENAND_DMA 0x0C600000
-#define S5PV310_PA_SROMC (0x12570000)
-#define S5P_PA_SROMC S5PV310_PA_SROMC
+#define S5PV310_PA_CHIPID 0x10000000
-/* S/PDIF */
-#define S5PV310_PA_SPDIF 0xE1100000
+#define S5PV310_PA_SYSCON 0x10010000
+#define S5PV310_PA_PMU 0x10020000
+#define S5PV310_PA_CMU 0x10030000
-/* I2S */
-#define S5PV310_PA_I2S0 0x03830000
-#define S5PV310_PA_I2S1 0xE3100000
-#define S5PV310_PA_I2S2 0xE2A00000
+#define S5PV310_PA_WATCHDOG 0x10060000
+#define S5PV310_PA_RTC 0x10070000
-/* PCM */
-#define S5PV310_PA_PCM0 0x03840000
-#define S5PV310_PA_PCM1 0x13980000
-#define S5PV310_PA_PCM2 0x13990000
+#define S5PV310_PA_DMC0 0x10400000
-/* AC97 */
-#define S5PV310_PA_AC97 0x139A0000
+#define S5PV310_PA_COMBINER 0x10448000
-#define S5PV310_PA_UART (0x13800000)
+#define S5PV310_PA_COREPERI 0x10500000
+#define S5PV310_PA_GIC_CPU 0x10500100
+#define S5PV310_PA_TWD 0x10500600
+#define S5PV310_PA_GIC_DIST 0x10501000
+#define S5PV310_PA_L2CC 0x10502000
-#define S5P_PA_UART(x) (S5PV310_PA_UART + ((x) * S3C_UART_OFFSET))
-#define S5P_PA_UART0 S5P_PA_UART(0)
-#define S5P_PA_UART1 S5P_PA_UART(1)
-#define S5P_PA_UART2 S5P_PA_UART(2)
-#define S5P_PA_UART3 S5P_PA_UART(3)
-#define S5P_PA_UART4 S5P_PA_UART(4)
-
-#define S5P_SZ_UART SZ_256
-
-#define S5PV310_PA_IIC(x) (0x13860000 + ((x) * 0x10000))
-
-#define S5PV310_PA_TIMER (0x139D0000)
-#define S5P_PA_TIMER S5PV310_PA_TIMER
-
-#define S5PV310_PA_SDRAM (0x40000000)
-#define S5P_PA_SDRAM S5PV310_PA_SDRAM
+#define S5PV310_PA_MDMA 0x10810000
+#define S5PV310_PA_PDMA0 0x12680000
+#define S5PV310_PA_PDMA1 0x12690000
#define S5PV310_PA_SYSMMU_MDMA 0x10A40000
#define S5PV310_PA_SYSMMU_SSS 0x10A50000
@@ -125,8 +78,31 @@
#define S5PV310_PA_SYSMMU_MFC_L 0x13620000
#define S5PV310_PA_SYSMMU_MFC_R 0x13630000
-/* compatibiltiy defines. */
-#define S3C_PA_UART S5PV310_PA_UART
+#define S5PV310_PA_GPIO1 0x11400000
+#define S5PV310_PA_GPIO2 0x11000000
+#define S5PV310_PA_GPIO3 0x03860000
+
+#define S5PV310_PA_MIPI_CSIS0 0x11880000
+#define S5PV310_PA_MIPI_CSIS1 0x11890000
+
+#define S5PV310_PA_HSMMC(x) (0x12510000 + ((x) * 0x10000))
+
+#define S5PV310_PA_SROMC 0x12570000
+
+#define S5PV310_PA_UART 0x13800000
+
+#define S5PV310_PA_IIC(x) (0x13860000 + ((x) * 0x10000))
+
+#define S5PV310_PA_AC97 0x139A0000
+
+#define S5PV310_PA_TIMER 0x139D0000
+
+#define S5PV310_PA_SDRAM 0x40000000
+
+#define S5PV310_PA_SPDIF 0xE1100000
+
+/* Compatibiltiy Defines */
+
#define S3C_PA_HSMMC0 S5PV310_PA_HSMMC(0)
#define S3C_PA_HSMMC1 S5PV310_PA_HSMMC(1)
#define S3C_PA_HSMMC2 S5PV310_PA_HSMMC(2)
@@ -141,7 +117,28 @@
#define S3C_PA_IIC7 S5PV310_PA_IIC(7)
#define S3C_PA_RTC S5PV310_PA_RTC
#define S3C_PA_WDT S5PV310_PA_WATCHDOG
+
+#define S5P_PA_CHIPID S5PV310_PA_CHIPID
#define S5P_PA_MIPI_CSIS0 S5PV310_PA_MIPI_CSIS0
#define S5P_PA_MIPI_CSIS1 S5PV310_PA_MIPI_CSIS1
+#define S5P_PA_ONENAND S5PC210_PA_ONENAND
+#define S5P_PA_ONENAND_DMA S5PC210_PA_ONENAND_DMA
+#define S5P_PA_SDRAM S5PV310_PA_SDRAM
+#define S5P_PA_SROMC S5PV310_PA_SROMC
+#define S5P_PA_SYSCON S5PV310_PA_SYSCON
+#define S5P_PA_TIMER S5PV310_PA_TIMER
+
+/* UART */
+
+#define S3C_PA_UART S5PV310_PA_UART
+
+#define S5P_PA_UART(x) (S3C_PA_UART + ((x) * S3C_UART_OFFSET))
+#define S5P_PA_UART0 S5P_PA_UART(0)
+#define S5P_PA_UART1 S5P_PA_UART(1)
+#define S5P_PA_UART2 S5P_PA_UART(2)
+#define S5P_PA_UART3 S5P_PA_UART(3)
+#define S5P_PA_UART4 S5P_PA_UART(4)
+
+#define S5P_SZ_UART SZ_256
#endif /* __ASM_ARCH_MAP_H */
diff --git a/trunk/arch/arm/mach-shmobile/board-ag5evm.c b/trunk/arch/arm/mach-shmobile/board-ag5evm.c
index 2123b96b5638..4303a86e6e38 100644
--- a/trunk/arch/arm/mach-shmobile/board-ag5evm.c
+++ b/trunk/arch/arm/mach-shmobile/board-ag5evm.c
@@ -454,6 +454,7 @@ static void __init ag5evm_init(void)
gpio_direction_output(GPIO_PORT217, 0);
mdelay(1);
gpio_set_value(GPIO_PORT217, 1);
+ mdelay(100);
/* LCD backlight controller */
gpio_request(GPIO_PORT235, NULL); /* RESET */
diff --git a/trunk/arch/arm/mach-shmobile/board-ap4evb.c b/trunk/arch/arm/mach-shmobile/board-ap4evb.c
index 3cf0951caa2d..81d6536552a9 100644
--- a/trunk/arch/arm/mach-shmobile/board-ap4evb.c
+++ b/trunk/arch/arm/mach-shmobile/board-ap4evb.c
@@ -1303,7 +1303,7 @@ static void __init ap4evb_init(void)
lcdc_info.clock_source = LCDC_CLK_BUS;
lcdc_info.ch[0].interface_type = RGB18;
- lcdc_info.ch[0].clock_divider = 2;
+ lcdc_info.ch[0].clock_divider = 3;
lcdc_info.ch[0].flags = 0;
lcdc_info.ch[0].lcd_size_cfg.width = 152;
lcdc_info.ch[0].lcd_size_cfg.height = 91;
diff --git a/trunk/arch/arm/mach-shmobile/board-mackerel.c b/trunk/arch/arm/mach-shmobile/board-mackerel.c
index fb4213a4e15a..1657eac5dde2 100644
--- a/trunk/arch/arm/mach-shmobile/board-mackerel.c
+++ b/trunk/arch/arm/mach-shmobile/board-mackerel.c
@@ -303,7 +303,7 @@ static struct sh_mobile_lcdc_info lcdc_info = {
.lcd_cfg = mackerel_lcdc_modes,
.num_cfg = ARRAY_SIZE(mackerel_lcdc_modes),
.interface_type = RGB24,
- .clock_divider = 2,
+ .clock_divider = 3,
.flags = 0,
.lcd_size_cfg.width = 152,
.lcd_size_cfg.height = 91,
diff --git a/trunk/arch/arm/mach-shmobile/clock-sh73a0.c b/trunk/arch/arm/mach-shmobile/clock-sh73a0.c
index ddd4a1b775f0..7e58904c1c8c 100644
--- a/trunk/arch/arm/mach-shmobile/clock-sh73a0.c
+++ b/trunk/arch/arm/mach-shmobile/clock-sh73a0.c
@@ -263,7 +263,7 @@ static struct clk div6_clks[DIV6_NR] = {
};
enum { MSTP001,
- MSTP125, MSTP118, MSTP116, MSTP100,
+ MSTP129, MSTP128, MSTP127, MSTP126, MSTP125, MSTP118, MSTP116, MSTP100,
MSTP219,
MSTP207, MSTP206, MSTP204, MSTP203, MSTP202, MSTP201, MSTP200,
MSTP331, MSTP329, MSTP325, MSTP323, MSTP312,
@@ -275,6 +275,10 @@ enum { MSTP001,
static struct clk mstp_clks[MSTP_NR] = {
[MSTP001] = MSTP(&div4_clks[DIV4_HP], SMSTPCR0, 1, 0), /* IIC2 */
+ [MSTP129] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 29, 0), /* CEU1 */
+ [MSTP128] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 28, 0), /* CSI2-RX1 */
+ [MSTP127] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 27, 0), /* CEU0 */
+ [MSTP126] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 26, 0), /* CSI2-RX0 */
[MSTP125] = MSTP(&div6_clks[DIV6_SUB], SMSTPCR1, 25, 0), /* TMU0 */
[MSTP118] = MSTP(&div4_clks[DIV4_B], SMSTPCR1, 18, 0), /* DSITX0 */
[MSTP116] = MSTP(&div4_clks[DIV4_HP], SMSTPCR1, 16, 0), /* IIC0 */
@@ -306,6 +310,9 @@ static struct clk_lookup lookups[] = {
CLKDEV_CON_ID("r_clk", &r_clk),
/* DIV6 clocks */
+ CLKDEV_CON_ID("vck1_clk", &div6_clks[DIV6_VCK1]),
+ CLKDEV_CON_ID("vck2_clk", &div6_clks[DIV6_VCK2]),
+ CLKDEV_CON_ID("vck3_clk", &div6_clks[DIV6_VCK3]),
CLKDEV_ICK_ID("dsit_clk", "sh-mipi-dsi.0", &div6_clks[DIV6_DSIT]),
CLKDEV_ICK_ID("dsit_clk", "sh-mipi-dsi.1", &div6_clks[DIV6_DSIT]),
CLKDEV_ICK_ID("dsi0p_clk", "sh-mipi-dsi.0", &div6_clks[DIV6_DSI0P]),
@@ -313,11 +320,15 @@ static struct clk_lookup lookups[] = {
/* MSTP32 clocks */
CLKDEV_DEV_ID("i2c-sh_mobile.2", &mstp_clks[MSTP001]), /* I2C2 */
- CLKDEV_DEV_ID("sh_mobile_lcdc_fb.0", &mstp_clks[MSTP100]), /* LCDC0 */
+ CLKDEV_DEV_ID("sh_mobile_ceu.1", &mstp_clks[MSTP129]), /* CEU1 */
+ CLKDEV_DEV_ID("sh-mobile-csi2.1", &mstp_clks[MSTP128]), /* CSI2-RX1 */
+ CLKDEV_DEV_ID("sh_mobile_ceu.0", &mstp_clks[MSTP127]), /* CEU0 */
+ CLKDEV_DEV_ID("sh-mobile-csi2.0", &mstp_clks[MSTP126]), /* CSI2-RX0 */
CLKDEV_DEV_ID("sh_tmu.0", &mstp_clks[MSTP125]), /* TMU00 */
CLKDEV_DEV_ID("sh_tmu.1", &mstp_clks[MSTP125]), /* TMU01 */
- CLKDEV_DEV_ID("i2c-sh_mobile.0", &mstp_clks[MSTP116]), /* I2C0 */
CLKDEV_DEV_ID("sh-mipi-dsi.0", &mstp_clks[MSTP118]), /* DSITX */
+ CLKDEV_DEV_ID("i2c-sh_mobile.0", &mstp_clks[MSTP116]), /* I2C0 */
+ CLKDEV_DEV_ID("sh_mobile_lcdc_fb.0", &mstp_clks[MSTP100]), /* LCDC0 */
CLKDEV_DEV_ID("sh-sci.7", &mstp_clks[MSTP219]), /* SCIFA7 */
CLKDEV_DEV_ID("sh-sci.5", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.8", &mstp_clks[MSTP206]), /* SCIFB */
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/head-ap4evb.txt b/trunk/arch/arm/mach-shmobile/include/mach/head-ap4evb.txt
index efd3687ba190..3029aba38688 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/head-ap4evb.txt
+++ b/trunk/arch/arm/mach-shmobile/include/mach/head-ap4evb.txt
@@ -6,13 +6,10 @@ LIST "RWT Setting"
EW 0xE6020004, 0xA500
EW 0xE6030004, 0xA500
-DD 0x01001000, 0x01001000
-
LIST "GPIO Setting"
EB 0xE6051013, 0xA2
LIST "CPG"
-ED 0xE6150080, 0x00000180
ED 0xE61500C0, 0x00000002
WAIT 1, 0xFE40009C
@@ -37,6 +34,9 @@ ED 0xE615002C, 0x93000040
WAIT 1, 0xFE40009C
+LIST "SUB/USBClk"
+ED 0xE6150080, 0x00000180
+
LIST "BSC"
ED 0xFEC10000, 0x00E0001B
@@ -53,7 +53,7 @@ ED 0xFE400048, 0x20C18505
ED 0xFE40004C, 0x00110209
ED 0xFE400010, 0x00000087
-WAIT 10, 0xFE40009C
+WAIT 30, 0xFE40009C
ED 0xFE400084, 0x0000003F
EB 0xFE500000, 0x00
@@ -84,7 +84,7 @@ ED 0xE6150004, 0x80331050
WAIT 1, 0xFE40009C
-ED 0xE6150354, 0x00000002
+ED 0xFE400354, 0x01AD8002
LIST "SCIF0 - Serial port for earlyprintk"
EB 0xE6053098, 0x11
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/head-mackerel.txt b/trunk/arch/arm/mach-shmobile/include/mach/head-mackerel.txt
index efd3687ba190..3029aba38688 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/head-mackerel.txt
+++ b/trunk/arch/arm/mach-shmobile/include/mach/head-mackerel.txt
@@ -6,13 +6,10 @@ LIST "RWT Setting"
EW 0xE6020004, 0xA500
EW 0xE6030004, 0xA500
-DD 0x01001000, 0x01001000
-
LIST "GPIO Setting"
EB 0xE6051013, 0xA2
LIST "CPG"
-ED 0xE6150080, 0x00000180
ED 0xE61500C0, 0x00000002
WAIT 1, 0xFE40009C
@@ -37,6 +34,9 @@ ED 0xE615002C, 0x93000040
WAIT 1, 0xFE40009C
+LIST "SUB/USBClk"
+ED 0xE6150080, 0x00000180
+
LIST "BSC"
ED 0xFEC10000, 0x00E0001B
@@ -53,7 +53,7 @@ ED 0xFE400048, 0x20C18505
ED 0xFE40004C, 0x00110209
ED 0xFE400010, 0x00000087
-WAIT 10, 0xFE40009C
+WAIT 30, 0xFE40009C
ED 0xFE400084, 0x0000003F
EB 0xFE500000, 0x00
@@ -84,7 +84,7 @@ ED 0xE6150004, 0x80331050
WAIT 1, 0xFE40009C
-ED 0xE6150354, 0x00000002
+ED 0xFE400354, 0x01AD8002
LIST "SCIF0 - Serial port for earlyprintk"
EB 0xE6053098, 0x11
diff --git a/trunk/arch/arm/mach-spear3xx/include/mach/spear320.h b/trunk/arch/arm/mach-spear3xx/include/mach/spear320.h
index cacf17a958cd..53677e464d4b 100644
--- a/trunk/arch/arm/mach-spear3xx/include/mach/spear320.h
+++ b/trunk/arch/arm/mach-spear3xx/include/mach/spear320.h
@@ -62,7 +62,7 @@
#define SPEAR320_SMII1_BASE 0xAB000000
#define SPEAR320_SMII1_SIZE 0x01000000
-#define SPEAR320_SOC_CONFIG_BASE 0xB4000000
+#define SPEAR320_SOC_CONFIG_BASE 0xB3000000
#define SPEAR320_SOC_CONFIG_SIZE 0x00000070
/* Interrupt registers offsets and masks */
#define INT_STS_MASK_REG 0x04
diff --git a/trunk/arch/arm/mach-tegra/include/mach/kbc.h b/trunk/arch/arm/mach-tegra/include/mach/kbc.h
index 66ad2760c621..04c779832c78 100644
--- a/trunk/arch/arm/mach-tegra/include/mach/kbc.h
+++ b/trunk/arch/arm/mach-tegra/include/mach/kbc.h
@@ -57,5 +57,6 @@ struct tegra_kbc_platform_data {
const struct matrix_keymap_data *keymap_data;
bool wakeup;
+ bool use_fn_map;
};
#endif
diff --git a/trunk/arch/arm/mm/cache-l2x0.c b/trunk/arch/arm/mm/cache-l2x0.c
index 170c9bb95866..f2ce38e085d2 100644
--- a/trunk/arch/arm/mm/cache-l2x0.c
+++ b/trunk/arch/arm/mm/cache-l2x0.c
@@ -49,7 +49,13 @@ static inline void cache_wait(void __iomem *reg, unsigned long mask)
static inline void cache_sync(void)
{
void __iomem *base = l2x0_base;
+
+#ifdef CONFIG_ARM_ERRATA_753970
+ /* write to an unmmapped register */
+ writel_relaxed(0, base + L2X0_DUMMY_REG);
+#else
writel_relaxed(0, base + L2X0_CACHE_SYNC);
+#endif
cache_wait(base + L2X0_CACHE_SYNC, 1);
}
diff --git a/trunk/arch/arm/mm/proc-v7.S b/trunk/arch/arm/mm/proc-v7.S
index 0c1172b56b4e..8e3356239136 100644
--- a/trunk/arch/arm/mm/proc-v7.S
+++ b/trunk/arch/arm/mm/proc-v7.S
@@ -264,6 +264,12 @@ __v7_setup:
orreq r10, r10, #1 << 6 @ set bit #6
mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register
#endif
+#ifdef CONFIG_ARM_ERRATA_751472
+ cmp r6, #0x30 @ present prior to r3p0
+ mrclt p15, 0, r10, c15, c0, 1 @ read diagnostic register
+ orrlt r10, r10, #1 << 11 @ set bit #11
+ mcrlt p15, 0, r10, c15, c0, 1 @ write diagnostic register
+#endif
3: mov r10, #0
#ifdef HARVARD_CACHE
diff --git a/trunk/arch/arm/plat-omap/mailbox.c b/trunk/arch/arm/plat-omap/mailbox.c
index 459b319a9fad..69ddc9f76c13 100644
--- a/trunk/arch/arm/plat-omap/mailbox.c
+++ b/trunk/arch/arm/plat-omap/mailbox.c
@@ -32,7 +32,6 @@
#include
-static struct workqueue_struct *mboxd;
static struct omap_mbox **mboxes;
static int mbox_configured;
@@ -197,7 +196,7 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
/* no more messages in the fifo. clear IRQ source. */
ack_mbox_irq(mbox, IRQ_RX);
nomem:
- queue_work(mboxd, &mbox->rxq->work);
+ schedule_work(&mbox->rxq->work);
}
static irqreturn_t mbox_interrupt(int irq, void *p)
@@ -307,7 +306,7 @@ static void omap_mbox_fini(struct omap_mbox *mbox)
if (!--mbox->use_count) {
free_irq(mbox->irq, mbox);
tasklet_kill(&mbox->txq->tasklet);
- flush_work(&mbox->rxq->work);
+ flush_work_sync(&mbox->rxq->work);
mbox_queue_free(mbox->txq);
mbox_queue_free(mbox->rxq);
}
@@ -322,15 +321,18 @@ static void omap_mbox_fini(struct omap_mbox *mbox)
struct omap_mbox *omap_mbox_get(const char *name, struct notifier_block *nb)
{
- struct omap_mbox *mbox;
- int ret;
+ struct omap_mbox *_mbox, *mbox = NULL;
+ int i, ret;
if (!mboxes)
return ERR_PTR(-EINVAL);
- for (mbox = *mboxes; mbox; mbox++)
- if (!strcmp(mbox->name, name))
+ for (i = 0; (_mbox = mboxes[i]); i++) {
+ if (!strcmp(_mbox->name, name)) {
+ mbox = _mbox;
break;
+ }
+ }
if (!mbox)
return ERR_PTR(-ENOENT);
@@ -406,10 +408,6 @@ static int __init omap_mbox_init(void)
if (err)
return err;
- mboxd = create_workqueue("mboxd");
- if (!mboxd)
- return -ENOMEM;
-
/* kfifo size sanity check: alignment and minimal size */
mbox_kfifo_size = ALIGN(mbox_kfifo_size, sizeof(mbox_msg_t));
mbox_kfifo_size = max_t(unsigned int, mbox_kfifo_size,
@@ -421,7 +419,6 @@ subsys_initcall(omap_mbox_init);
static void __exit omap_mbox_exit(void)
{
- destroy_workqueue(mboxd);
class_unregister(&omap_mbox_class);
}
module_exit(omap_mbox_exit);
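
The mailbox hunks above drop the private "mboxd" workqueue in favour of
schedule_work()/flush_work_sync() and fix omap_mbox_get() to walk a
NULL-terminated array of pointers rather than treating the table as an array
of structures. A sketch of the corrected lookup with a hypothetical type:

#include <linux/string.h>

struct example_mbox {
	const char *name;
};

/* Look up an entry in a NULL-terminated array of pointers; the pre-patch
 * loop iterated as if the table stored the structures themselves, which
 * walks past the real entries. */
static struct example_mbox *example_mbox_get(struct example_mbox **table,
					     const char *name)
{
	struct example_mbox *m;
	int i;

	for (i = 0; (m = table[i]); i++)
		if (!strcmp(m->name, name))
			return m;

	return NULL;
}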
diff --git a/trunk/arch/arm/plat-s5p/dev-uart.c b/trunk/arch/arm/plat-s5p/dev-uart.c
index 6a7342886171..afaf87fdb93e 100644
--- a/trunk/arch/arm/plat-s5p/dev-uart.c
+++ b/trunk/arch/arm/plat-s5p/dev-uart.c
@@ -28,7 +28,7 @@
static struct resource s5p_uart0_resource[] = {
[0] = {
.start = S5P_PA_UART0,
- .end = S5P_PA_UART0 + S5P_SZ_UART,
+ .end = S5P_PA_UART0 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@@ -51,7 +51,7 @@ static struct resource s5p_uart0_resource[] = {
static struct resource s5p_uart1_resource[] = {
[0] = {
.start = S5P_PA_UART1,
- .end = S5P_PA_UART1 + S5P_SZ_UART,
+ .end = S5P_PA_UART1 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@@ -74,7 +74,7 @@ static struct resource s5p_uart1_resource[] = {
static struct resource s5p_uart2_resource[] = {
[0] = {
.start = S5P_PA_UART2,
- .end = S5P_PA_UART2 + S5P_SZ_UART,
+ .end = S5P_PA_UART2 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@@ -98,7 +98,7 @@ static struct resource s5p_uart3_resource[] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 3
[0] = {
.start = S5P_PA_UART3,
- .end = S5P_PA_UART3 + S5P_SZ_UART,
+ .end = S5P_PA_UART3 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@@ -123,7 +123,7 @@ static struct resource s5p_uart4_resource[] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 4
[0] = {
.start = S5P_PA_UART4,
- .end = S5P_PA_UART4 + S5P_SZ_UART,
+ .end = S5P_PA_UART4 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
@@ -148,7 +148,7 @@ static struct resource s5p_uart5_resource[] = {
#if CONFIG_SERIAL_SAMSUNG_UARTS > 5
[0] = {
.start = S5P_PA_UART5,
- .end = S5P_PA_UART5 + S5P_SZ_UART,
+ .end = S5P_PA_UART5 + S5P_SZ_UART - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
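
The UART resource fix above corrects an off-by-one: struct resource describes an inclusive [start, end] range, so end must be start + size - 1 or the region is registered one byte too large and can overlap the next device. A hedged sketch (uart0_res is an illustrative name; the S5P_PA_UART0/S5P_SZ_UART constants come from the patch):

    #include <linux/ioport.h>

    static struct resource uart0_res = {
            .start = S5P_PA_UART0,
            .end   = S5P_PA_UART0 + S5P_SZ_UART - 1,  /* inclusive end */
            .flags = IORESOURCE_MEM,
    };

    /* the core recovers the length as end - start + 1, cf. resource_size() */
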
diff --git a/trunk/arch/arm/plat-samsung/dev-ts.c b/trunk/arch/arm/plat-samsung/dev-ts.c
index 236ef8427d7d..3e4bd8147bf4 100644
--- a/trunk/arch/arm/plat-samsung/dev-ts.c
+++ b/trunk/arch/arm/plat-samsung/dev-ts.c
@@ -58,4 +58,3 @@ void __init s3c24xx_ts_set_platdata(struct s3c2410_ts_mach_info *pd)
s3c_device_ts.dev.platform_data = npd;
}
-EXPORT_SYMBOL(s3c24xx_ts_set_platdata);
diff --git a/trunk/arch/arm/plat-samsung/dev-uart.c b/trunk/arch/arm/plat-samsung/dev-uart.c
index 3776cd952450..5928105490fa 100644
--- a/trunk/arch/arm/plat-samsung/dev-uart.c
+++ b/trunk/arch/arm/plat-samsung/dev-uart.c
@@ -15,6 +15,8 @@
#include
#include
+#include
+
/* uart devices */
static struct platform_device s3c24xx_uart_device0 = {
diff --git a/trunk/arch/arm/plat-spear/include/plat/uncompress.h b/trunk/arch/arm/plat-spear/include/plat/uncompress.h
index 99ba6789cc97..6dd455bafdfd 100644
--- a/trunk/arch/arm/plat-spear/include/plat/uncompress.h
+++ b/trunk/arch/arm/plat-spear/include/plat/uncompress.h
@@ -24,10 +24,10 @@ static inline void putc(int c)
{
void __iomem *base = (void __iomem *)SPEAR_DBG_UART_BASE;
- while (readl(base + UART01x_FR) & UART01x_FR_TXFF)
+ while (readl_relaxed(base + UART01x_FR) & UART01x_FR_TXFF)
barrier();
- writel(c, base + UART01x_DR);
+ writel_relaxed(c, base + UART01x_DR);
}
static inline void flush(void)
diff --git a/trunk/arch/arm/plat-spear/include/plat/vmalloc.h b/trunk/arch/arm/plat-spear/include/plat/vmalloc.h
index 09e9372aea21..8c8b24d07046 100644
--- a/trunk/arch/arm/plat-spear/include/plat/vmalloc.h
+++ b/trunk/arch/arm/plat-spear/include/plat/vmalloc.h
@@ -14,6 +14,6 @@
#ifndef __PLAT_VMALLOC_H
#define __PLAT_VMALLOC_H
-#define VMALLOC_END 0xF0000000
+#define VMALLOC_END 0xF0000000UL
#endif /* __PLAT_VMALLOC_H */
diff --git a/trunk/arch/blackfin/kernel/time.c b/trunk/arch/blackfin/kernel/time.c
index c9113619029f..8d73724c0092 100644
--- a/trunk/arch/blackfin/kernel/time.c
+++ b/trunk/arch/blackfin/kernel/time.c
@@ -114,16 +114,14 @@ u32 arch_gettimeoffset(void)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
#ifdef CONFIG_CORE_TIMER_IRQ_L1
__attribute__((l1_text))
#endif
irqreturn_t timer_interrupt(int irq, void *dummy)
{
- write_seqlock(&xtime_lock);
- do_timer(1);
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
#ifdef CONFIG_IPIPE
update_root_process_times(get_irq_regs());
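
This and the following timer_interrupt() conversions replace the open-coded write_seqlock(&xtime_lock)/do_timer()/write_sequnlock() sequence that every architecture duplicated with a single call to xtime_update(), which keeps the xtime_lock handling inside the timekeeping core. Roughly, the helper in this kernel generation amounts to (a sketch, not a verbatim copy):

    /* kernel/time/timekeeping.c, sketch of the new helper */
    void xtime_update(unsigned long ticks)
    {
            write_seqlock(&xtime_lock);
            do_timer(ticks);
            write_sequnlock(&xtime_lock);
    }
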
diff --git a/trunk/arch/blackfin/kernel/vmlinux.lds.S b/trunk/arch/blackfin/kernel/vmlinux.lds.S
index 4122678529c0..c40d07f708e8 100644
--- a/trunk/arch/blackfin/kernel/vmlinux.lds.S
+++ b/trunk/arch/blackfin/kernel/vmlinux.lds.S
@@ -136,7 +136,7 @@ SECTIONS
. = ALIGN(16);
INIT_DATA_SECTION(16)
- PERCPU(4)
+ PERCPU(32, 4)
.exit.data :
{
diff --git a/trunk/arch/blackfin/lib/outs.S b/trunk/arch/blackfin/lib/outs.S
index 250f4d4b9436..06a5e674401f 100644
--- a/trunk/arch/blackfin/lib/outs.S
+++ b/trunk/arch/blackfin/lib/outs.S
@@ -13,6 +13,8 @@
.align 2
ENTRY(_outsl)
+ CC = R2 == 0;
+ IF CC JUMP 1f;
P0 = R0; /* P0 = port */
P1 = R1; /* P1 = address */
P2 = R2; /* P2 = count */
@@ -20,10 +22,12 @@ ENTRY(_outsl)
LSETUP( .Llong_loop_s, .Llong_loop_e) LC0 = P2;
.Llong_loop_s: R0 = [P1++];
.Llong_loop_e: [P0] = R0;
- RTS;
+1: RTS;
ENDPROC(_outsl)
ENTRY(_outsw)
+ CC = R2 == 0;
+ IF CC JUMP 1f;
P0 = R0; /* P0 = port */
P1 = R1; /* P1 = address */
P2 = R2; /* P2 = count */
@@ -31,10 +35,12 @@ ENTRY(_outsw)
LSETUP( .Lword_loop_s, .Lword_loop_e) LC0 = P2;
.Lword_loop_s: R0 = W[P1++];
.Lword_loop_e: W[P0] = R0;
- RTS;
+1: RTS;
ENDPROC(_outsw)
ENTRY(_outsb)
+ CC = R2 == 0;
+ IF CC JUMP 1f;
P0 = R0; /* P0 = port */
P1 = R1; /* P1 = address */
P2 = R2; /* P2 = count */
@@ -42,10 +48,12 @@ ENTRY(_outsb)
LSETUP( .Lbyte_loop_s, .Lbyte_loop_e) LC0 = P2;
.Lbyte_loop_s: R0 = B[P1++];
.Lbyte_loop_e: B[P0] = R0;
- RTS;
+1: RTS;
ENDPROC(_outsb)
ENTRY(_outsw_8)
+ CC = R2 == 0;
+ IF CC JUMP 1f;
P0 = R0; /* P0 = port */
P1 = R1; /* P1 = address */
P2 = R2; /* P2 = count */
@@ -56,5 +64,5 @@ ENTRY(_outsw_8)
R0 = R0 << 8;
R0 = R0 + R1;
.Lword8_loop_e: W[P0] = R0;
- RTS;
+1: RTS;
ENDPROC(_outsw_8)
diff --git a/trunk/arch/blackfin/mach-common/cache.S b/trunk/arch/blackfin/mach-common/cache.S
index 790c767ca95a..ab4a925a443e 100644
--- a/trunk/arch/blackfin/mach-common/cache.S
+++ b/trunk/arch/blackfin/mach-common/cache.S
@@ -58,6 +58,8 @@
1:
.ifeqs "\flushins", BROK_FLUSH_INST
\flushins [P0++];
+ nop;
+ nop;
2: nop;
.else
2: \flushins [P0++];
diff --git a/trunk/arch/cris/arch-v10/kernel/time.c b/trunk/arch/cris/arch-v10/kernel/time.c
index 00eb36f8debf..20c85b5dc7d0 100644
--- a/trunk/arch/cris/arch-v10/kernel/time.c
+++ b/trunk/arch/cris/arch-v10/kernel/time.c
@@ -140,7 +140,7 @@ stop_watchdog(void)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
//static unsigned short myjiff; /* used by our debug routine print_timestamp */
@@ -176,7 +176,7 @@ timer_interrupt(int irq, void *dev_id)
/* call the real timer interrupt handler */
- do_timer(1);
+ xtime_update(1);
cris_do_profile(regs); /* Save profiling information */
return IRQ_HANDLED;
diff --git a/trunk/arch/cris/arch-v32/kernel/smp.c b/trunk/arch/cris/arch-v32/kernel/smp.c
index 84fed3b4b079..4c9e3e1ba5d1 100644
--- a/trunk/arch/cris/arch-v32/kernel/smp.c
+++ b/trunk/arch/cris/arch-v32/kernel/smp.c
@@ -26,7 +26,9 @@
#define FLUSH_ALL (void*)0xffffffff
/* Vector of locks used for various atomic operations */
-spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};
+spinlock_t cris_atomic_locks[] = {
+ [0 ... LOCK_COUNT - 1] = __SPIN_LOCK_UNLOCKED(cris_atomic_locks)
+};
/* CPU masks */
cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
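
The CRIS change above is part of the removal of the lockdep-unfriendly SPIN_LOCK_UNLOCKED initializer; statically allocated lock arrays are now seeded with __SPIN_LOCK_UNLOCKED(name), which gives every element a lock-class key. A small sketch with illustrative names (my_locks, LOCK_COUNT):

    #include <linux/spinlock.h>

    #define LOCK_COUNT 128  /* illustrative */

    static spinlock_t my_locks[LOCK_COUNT] = {
            [0 ... LOCK_COUNT - 1] = __SPIN_LOCK_UNLOCKED(my_locks)
    };
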
diff --git a/trunk/arch/cris/arch-v32/kernel/time.c b/trunk/arch/cris/arch-v32/kernel/time.c
index a545211e999d..bb978ede8985 100644
--- a/trunk/arch/cris/arch-v32/kernel/time.c
+++ b/trunk/arch/cris/arch-v32/kernel/time.c
@@ -183,7 +183,7 @@ void handle_watchdog_bite(struct pt_regs *regs)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick.
+ * as well as call the "xtime_update()" routine every clocktick.
*/
extern void cris_do_profile(struct pt_regs *regs);
@@ -216,9 +216,7 @@ static inline irqreturn_t timer_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
/* Call the real timer interrupt handler */
- write_seqlock(&xtime_lock);
- do_timer(1);
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
return IRQ_HANDLED;
}
diff --git a/trunk/arch/cris/kernel/vmlinux.lds.S b/trunk/arch/cris/kernel/vmlinux.lds.S
index 442218980db0..728bbd9e7d4c 100644
--- a/trunk/arch/cris/kernel/vmlinux.lds.S
+++ b/trunk/arch/cris/kernel/vmlinux.lds.S
@@ -72,11 +72,6 @@ SECTIONS
INIT_TEXT_SECTION(PAGE_SIZE)
.init.data : { INIT_DATA }
.init.setup : { INIT_SETUP(16) }
-#ifdef CONFIG_ETRAX_ARCH_V32
- __start___param = .;
- __param : { *(__param) }
- __stop___param = .;
-#endif
.initcall.init : {
INIT_CALLS
}
@@ -107,7 +102,7 @@ SECTIONS
#endif
__vmlinux_end = .; /* Last address of the physical file. */
#ifdef CONFIG_ETRAX_ARCH_V32
- PERCPU(PAGE_SIZE)
+ PERCPU(32, PAGE_SIZE)
.init.ramfs : {
INIT_RAM_FS
diff --git a/trunk/arch/frv/include/asm/futex.h b/trunk/arch/frv/include/asm/futex.h
index 08b3d1da3583..4bea27f50a7a 100644
--- a/trunk/arch/frv/include/asm/futex.h
+++ b/trunk/arch/frv/include/asm/futex.h
@@ -7,10 +7,11 @@
#include
#include
-extern int futex_atomic_op_inuser(int encoded_op, int __user *uaddr);
+extern int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr);
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
return -ENOSYS;
}
diff --git a/trunk/arch/frv/kernel/futex.c b/trunk/arch/frv/kernel/futex.c
index 14f64b054c7e..d155ca9e5098 100644
--- a/trunk/arch/frv/kernel/futex.c
+++ b/trunk/arch/frv/kernel/futex.c
@@ -18,7 +18,7 @@
* the various futex operations; MMU fault checking is ignored under no-MMU
* conditions
*/
-static inline int atomic_futex_op_xchg_set(int oparg, int __user *uaddr, int *_oldval)
+static inline int atomic_futex_op_xchg_set(int oparg, u32 __user *uaddr, int *_oldval)
{
int oldval, ret;
@@ -50,7 +50,7 @@ static inline int atomic_futex_op_xchg_set(int oparg, int __user *uaddr, int *_o
return ret;
}
-static inline int atomic_futex_op_xchg_add(int oparg, int __user *uaddr, int *_oldval)
+static inline int atomic_futex_op_xchg_add(int oparg, u32 __user *uaddr, int *_oldval)
{
int oldval, ret;
@@ -83,7 +83,7 @@ static inline int atomic_futex_op_xchg_add(int oparg, int __user *uaddr, int *_o
return ret;
}
-static inline int atomic_futex_op_xchg_or(int oparg, int __user *uaddr, int *_oldval)
+static inline int atomic_futex_op_xchg_or(int oparg, u32 __user *uaddr, int *_oldval)
{
int oldval, ret;
@@ -116,7 +116,7 @@ static inline int atomic_futex_op_xchg_or(int oparg, int __user *uaddr, int *_ol
return ret;
}
-static inline int atomic_futex_op_xchg_and(int oparg, int __user *uaddr, int *_oldval)
+static inline int atomic_futex_op_xchg_and(int oparg, u32 __user *uaddr, int *_oldval)
{
int oldval, ret;
@@ -149,7 +149,7 @@ static inline int atomic_futex_op_xchg_and(int oparg, int __user *uaddr, int *_o
return ret;
}
-static inline int atomic_futex_op_xchg_xor(int oparg, int __user *uaddr, int *_oldval)
+static inline int atomic_futex_op_xchg_xor(int oparg, u32 __user *uaddr, int *_oldval)
{
int oldval, ret;
@@ -186,7 +186,7 @@ static inline int atomic_futex_op_xchg_xor(int oparg, int __user *uaddr, int *_o
/*
* do the futex operations
*/
-int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -197,7 +197,7 @@ int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
diff --git a/trunk/arch/frv/kernel/time.c b/trunk/arch/frv/kernel/time.c
index 0ddbbae83cb2..b457de496b70 100644
--- a/trunk/arch/frv/kernel/time.c
+++ b/trunk/arch/frv/kernel/time.c
@@ -50,21 +50,13 @@ static struct irqaction timer_irq = {
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
static irqreturn_t timer_interrupt(int irq, void *dummy)
{
profile_tick(CPU_PROFILING);
- /*
- * Here we are in the timer irq handler. We just have irqs locally
- * disabled but we don't know if the timer_bh is running on the other
- * CPU. We need to avoid to SMP race with it. NOTE: we don't need
- * the irq version of write_lock because as just said we have irq
- * locally disabled. -arca
- */
- write_seqlock(&xtime_lock);
- do_timer(1);
+ xtime_update(1);
#ifdef CONFIG_HEARTBEAT
static unsigned short n;
@@ -72,8 +64,6 @@ static irqreturn_t timer_interrupt(int irq, void *dummy)
__set_LEDS(n);
#endif /* CONFIG_HEARTBEAT */
- write_sequnlock(&xtime_lock);
-
update_process_times(user_mode(get_irq_regs()));
return IRQ_HANDLED;
diff --git a/trunk/arch/frv/kernel/vmlinux.lds.S b/trunk/arch/frv/kernel/vmlinux.lds.S
index 8b973f3cc90e..0daae8af5787 100644
--- a/trunk/arch/frv/kernel/vmlinux.lds.S
+++ b/trunk/arch/frv/kernel/vmlinux.lds.S
@@ -37,7 +37,7 @@ SECTIONS
_einittext = .;
INIT_DATA_SECTION(8)
- PERCPU(4096)
+ PERCPU(L1_CACHE_BYTES, 4096)
. = ALIGN(PAGE_SIZE);
__init_end = .;
diff --git a/trunk/arch/h8300/kernel/time.c b/trunk/arch/h8300/kernel/time.c
index 165005aff9df..32263a138aa6 100644
--- a/trunk/arch/h8300/kernel/time.c
+++ b/trunk/arch/h8300/kernel/time.c
@@ -35,9 +35,7 @@ void h8300_timer_tick(void)
{
if (current->pid)
profile_tick(CPU_PROFILING);
- write_seqlock(&xtime_lock);
- do_timer(1);
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
update_process_times(user_mode(get_irq_regs()));
}
diff --git a/trunk/arch/h8300/kernel/timer/timer8.c b/trunk/arch/h8300/kernel/timer/timer8.c
index 3946c0fa8374..7a1533fad47d 100644
--- a/trunk/arch/h8300/kernel/timer/timer8.c
+++ b/trunk/arch/h8300/kernel/timer/timer8.c
@@ -61,7 +61,7 @@
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
static irqreturn_t timer_interrupt(int irq, void *dev_id)
diff --git a/trunk/arch/ia64/include/asm/futex.h b/trunk/arch/ia64/include/asm/futex.h
index c7f0f062239c..8428525ddb22 100644
--- a/trunk/arch/ia64/include/asm/futex.h
+++ b/trunk/arch/ia64/include/asm/futex.h
@@ -46,7 +46,7 @@ do { \
} while (0)
static inline int
-futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -56,7 +56,7 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -100,23 +100,26 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
{
- register unsigned long r8 __asm ("r8");
+ register unsigned long r8 __asm ("r8") = 0;
+ unsigned long prev;
__asm__ __volatile__(
" mf;; \n"
" mov ar.ccv=%3;; \n"
"[1:] cmpxchg4.acq %0=[%1],%2,ar.ccv \n"
" .xdata4 \"__ex_table\", 1b-., 2f-. \n"
"[2:]"
- : "=r" (r8)
+ : "=r" (prev)
: "r" (uaddr), "r" (newval),
"rO" ((long) (unsigned) oldval)
: "memory");
+ *uval = prev;
return r8;
}
}
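
The futex conversions in this series change the arch helpers to take u32 __user * and, more importantly, give futex_atomic_cmpxchg_inatomic() a separate output parameter: it now returns 0 or -EFAULT and hands the previously stored value back through *uval, so the caller no longer has to tell an old value apart from an error code. A sketch of the new contract (arch_cmpxchg_user() is an assumed placeholder, not a real kernel helper):

    static inline int
    futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
                                  u32 oldval, u32 newval)
    {
            u32 prev;
            int ret;

            ret = arch_cmpxchg_user(&prev, uaddr, oldval, newval); /* assumed */
            if (ret)
                    return ret;     /* -EFAULT on a faulting user access */
            *uval = prev;           /* old value travels out of band */
            return 0;
    }
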
diff --git a/trunk/arch/ia64/include/asm/rwsem.h b/trunk/arch/ia64/include/asm/rwsem.h
index 215d5454c7d3..3027e7516d85 100644
--- a/trunk/arch/ia64/include/asm/rwsem.h
+++ b/trunk/arch/ia64/include/asm/rwsem.h
@@ -25,20 +25,8 @@
#error "Please don't include directly, use instead."
#endif
-#include
-#include
-
#include
-/*
- * the semaphore definition
- */
-struct rw_semaphore {
- signed long count;
- spinlock_t wait_lock;
- struct list_head wait_list;
-};
-
#define RWSEM_UNLOCKED_VALUE __IA64_UL_CONST(0x0000000000000000)
#define RWSEM_ACTIVE_BIAS (1L)
#define RWSEM_ACTIVE_MASK (0xffffffffL)
@@ -46,26 +34,6 @@ struct rw_semaphore {
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-#define __RWSEM_INITIALIZER(name) \
- { RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait_lock), \
- LIST_HEAD_INIT((name).wait_list) }
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
-
-static inline void
-init_rwsem (struct rw_semaphore *sem)
-{
- sem->count = RWSEM_UNLOCKED_VALUE;
- spin_lock_init(&sem->wait_lock);
- INIT_LIST_HEAD(&sem->wait_list);
-}
-
/*
* lock for reading
*/
@@ -174,9 +142,4 @@ __downgrade_write (struct rw_semaphore *sem)
#define rwsem_atomic_add(delta, sem) atomic64_add(delta, (atomic64_t *)(&(sem)->count))
#define rwsem_atomic_update(delta, sem) atomic64_add_return(delta, (atomic64_t *)(&(sem)->count))
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* _ASM_IA64_RWSEM_H */
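
The ia64 rwsem.h diet above reflects the consolidation of the per-architecture rw_semaphore definitions: the structure, its initializers and rwsem_is_locked() now live in linux/rwsem.h, and the arch header only supplies the atomic count operations. Roughly what the generic header provides in this kernel generation (a sketch; lockdep fields omitted):

    /* include/linux/rwsem.h, sketch */
    struct rw_semaphore {
            long                    count;
            spinlock_t              wait_lock;
            struct list_head        wait_list;
    };

    static inline int rwsem_is_locked(struct rw_semaphore *sem)
    {
            return sem->count != 0;
    }
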
diff --git a/trunk/arch/ia64/include/asm/xen/hypercall.h b/trunk/arch/ia64/include/asm/xen/hypercall.h
index 96fc62366aa4..ed28bcd5bb85 100644
--- a/trunk/arch/ia64/include/asm/xen/hypercall.h
+++ b/trunk/arch/ia64/include/asm/xen/hypercall.h
@@ -107,7 +107,7 @@ extern unsigned long __hypercall(unsigned long a1, unsigned long a2,
static inline int
xencomm_arch_hypercall_sched_op(int cmd, struct xencomm_handle *arg)
{
- return _hypercall2(int, sched_op_new, cmd, arg);
+ return _hypercall2(int, sched_op, cmd, arg);
}
static inline long
diff --git a/trunk/arch/ia64/kernel/time.c b/trunk/arch/ia64/kernel/time.c
index 9702fa92489e..156ad803d5b7 100644
--- a/trunk/arch/ia64/kernel/time.c
+++ b/trunk/arch/ia64/kernel/time.c
@@ -190,19 +190,10 @@ timer_interrupt (int irq, void *dev_id)
new_itm += local_cpu_data->itm_delta;
- if (smp_processor_id() == time_keeper_id) {
- /*
- * Here we are in the timer irq handler. We have irqs locally
- * disabled, but we don't know if the timer_bh is running on
- * another CPU. We need to avoid to SMP race by acquiring the
- * xtime_lock.
- */
- write_seqlock(&xtime_lock);
- do_timer(1);
- local_cpu_data->itm_next = new_itm;
- write_sequnlock(&xtime_lock);
- } else
- local_cpu_data->itm_next = new_itm;
+ if (smp_processor_id() == time_keeper_id)
+ xtime_update(1);
+
+ local_cpu_data->itm_next = new_itm;
if (time_after(new_itm, ia64_get_itc()))
break;
@@ -222,7 +213,7 @@ timer_interrupt (int irq, void *dev_id)
* comfort, we increase the safety margin by
* intentionally dropping the next tick(s). We do NOT
* update itm.next because that would force us to call
- * do_timer() which in turn would let our clock run
+ * xtime_update() which in turn would let our clock run
* too fast (with the potentially devastating effect
* of losing monotony of time).
*/
diff --git a/trunk/arch/ia64/kernel/vmlinux.lds.S b/trunk/arch/ia64/kernel/vmlinux.lds.S
index 5a4d044dcb1c..787de4a77d82 100644
--- a/trunk/arch/ia64/kernel/vmlinux.lds.S
+++ b/trunk/arch/ia64/kernel/vmlinux.lds.S
@@ -198,7 +198,7 @@ SECTIONS {
/* Per-cpu data: */
. = ALIGN(PERCPU_PAGE_SIZE);
- PERCPU_VADDR(PERCPU_ADDR, :percpu)
+ PERCPU_VADDR(SMP_CACHE_BYTES, PERCPU_ADDR, :percpu)
__phys_per_cpu_start = __per_cpu_load;
/*
* ensure percpu data fits
diff --git a/trunk/arch/ia64/xen/suspend.c b/trunk/arch/ia64/xen/suspend.c
index fd66b048c6fa..419c8620945a 100644
--- a/trunk/arch/ia64/xen/suspend.c
+++ b/trunk/arch/ia64/xen/suspend.c
@@ -37,19 +37,14 @@ xen_mm_unpin_all(void)
/* nothing */
}
-void xen_pre_device_suspend(void)
-{
- /* nothing */
-}
-
void
-xen_pre_suspend()
+xen_arch_pre_suspend()
{
/* nothing */
}
void
-xen_post_suspend(int suspend_cancelled)
+xen_arch_post_suspend(int suspend_cancelled)
{
if (suspend_cancelled)
return;
diff --git a/trunk/arch/ia64/xen/time.c b/trunk/arch/ia64/xen/time.c
index c1c544513e8d..1f8244a78bee 100644
--- a/trunk/arch/ia64/xen/time.c
+++ b/trunk/arch/ia64/xen/time.c
@@ -139,14 +139,11 @@ consider_steal_time(unsigned long new_itm)
run_posix_cpu_timers(p);
delta_itm += local_cpu_data->itm_delta * (stolen + blocked);
- if (cpu == time_keeper_id) {
- write_seqlock(&xtime_lock);
- do_timer(stolen + blocked);
- local_cpu_data->itm_next = delta_itm + new_itm;
- write_sequnlock(&xtime_lock);
- } else {
- local_cpu_data->itm_next = delta_itm + new_itm;
- }
+ if (cpu == time_keeper_id)
+ xtime_update(stolen + blocked);
+
+ local_cpu_data->itm_next = delta_itm + new_itm;
+
per_cpu(xen_stolen_time, cpu) += NS_PER_TICK * stolen;
per_cpu(xen_blocked_time, cpu) += NS_PER_TICK * blocked;
}
diff --git a/trunk/arch/m32r/kernel/time.c b/trunk/arch/m32r/kernel/time.c
index bda86820bffd..84dd04048db9 100644
--- a/trunk/arch/m32r/kernel/time.c
+++ b/trunk/arch/m32r/kernel/time.c
@@ -107,15 +107,14 @@ u32 arch_gettimeoffset(void)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
static irqreturn_t timer_interrupt(int irq, void *dev_id)
{
#ifndef CONFIG_SMP
profile_tick(CPU_PROFILING);
#endif
- /* XXX FIXME. Uh, the xtime_lock should be held here, no? */
- do_timer(1);
+ xtime_update(1);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
diff --git a/trunk/arch/m32r/kernel/vmlinux.lds.S b/trunk/arch/m32r/kernel/vmlinux.lds.S
index 7da94eaa082b..c194d64cdbb9 100644
--- a/trunk/arch/m32r/kernel/vmlinux.lds.S
+++ b/trunk/arch/m32r/kernel/vmlinux.lds.S
@@ -53,7 +53,7 @@ SECTIONS
__init_begin = .;
INIT_TEXT_SECTION(PAGE_SIZE)
INIT_DATA_SECTION(16)
- PERCPU(PAGE_SIZE)
+ PERCPU(32, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .;
/* freed after init ends here */
diff --git a/trunk/arch/m68k/bvme6000/config.c b/trunk/arch/m68k/bvme6000/config.c
index 9fe6fefb5e14..1edd95095cb4 100644
--- a/trunk/arch/m68k/bvme6000/config.c
+++ b/trunk/arch/m68k/bvme6000/config.c
@@ -45,8 +45,8 @@ extern int bvme6000_set_clock_mmss (unsigned long);
extern void bvme6000_reset (void);
void bvme6000_set_vectors (void);
-/* Save tick handler routine pointer, will point to do_timer() in
- * kernel/sched.c, called via bvme6000_process_int() */
+/* Save tick handler routine pointer, will point to xtime_update() in
+ * kernel/time/timekeeping.c, called via bvme6000_process_int() */
static irq_handler_t tick_handler;
diff --git a/trunk/arch/m68k/kernel/time.c b/trunk/arch/m68k/kernel/time.c
index 06438dac08ff..18b34ee5db3b 100644
--- a/trunk/arch/m68k/kernel/time.c
+++ b/trunk/arch/m68k/kernel/time.c
@@ -37,11 +37,11 @@ static inline int set_rtc_mmss(unsigned long nowtime)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
static irqreturn_t timer_interrupt(int irq, void *dummy)
{
- do_timer(1);
+ xtime_update(1);
update_process_times(user_mode(get_irq_regs()));
profile_tick(CPU_PROFILING);
diff --git a/trunk/arch/m68k/mvme147/config.c b/trunk/arch/m68k/mvme147/config.c
index 100baaa692a1..6cb9c3a9b6c9 100644
--- a/trunk/arch/m68k/mvme147/config.c
+++ b/trunk/arch/m68k/mvme147/config.c
@@ -46,8 +46,8 @@ extern void mvme147_reset (void);
static int bcd2int (unsigned char b);
-/* Save tick handler routine pointer, will point to do_timer() in
- * kernel/sched.c, called via mvme147_process_int() */
+/* Save tick handler routine pointer, will point to xtime_update() in
+ * kernel/time/timekeeping.c, called via mvme147_process_int() */
irq_handler_t tick_handler;
diff --git a/trunk/arch/m68k/mvme16x/config.c b/trunk/arch/m68k/mvme16x/config.c
index 11edf61cc2c4..0b28e2621653 100644
--- a/trunk/arch/m68k/mvme16x/config.c
+++ b/trunk/arch/m68k/mvme16x/config.c
@@ -51,8 +51,8 @@ extern void mvme16x_reset (void);
int bcd2int (unsigned char b);
-/* Save tick handler routine pointer, will point to do_timer() in
- * kernel/sched.c, called via mvme16x_process_int() */
+/* Save tick handler routine pointer, will point to xtime_update() in
+ * kernel/time/timekeeping.c, called via mvme16x_process_int() */
static irq_handler_t tick_handler;
diff --git a/trunk/arch/m68k/sun3/sun3ints.c b/trunk/arch/m68k/sun3/sun3ints.c
index 2d9e21bd313a..6464ad3ae3e6 100644
--- a/trunk/arch/m68k/sun3/sun3ints.c
+++ b/trunk/arch/m68k/sun3/sun3ints.c
@@ -66,7 +66,7 @@ static irqreturn_t sun3_int5(int irq, void *dev_id)
#ifdef CONFIG_SUN3
intersil_clear();
#endif
- do_timer(1);
+ xtime_update(1);
update_process_times(user_mode(get_irq_regs()));
if (!(kstat_cpu(0).irqs[irq] % 20))
sun3_leds(led_pattern[(kstat_cpu(0).irqs[irq] % 160) / 20]);
diff --git a/trunk/arch/m68knommu/kernel/time.c b/trunk/arch/m68knommu/kernel/time.c
index d6ac2a43453c..6623909f70e6 100644
--- a/trunk/arch/m68knommu/kernel/time.c
+++ b/trunk/arch/m68knommu/kernel/time.c
@@ -36,7 +36,7 @@ static inline int set_rtc_mmss(unsigned long nowtime)
#ifndef CONFIG_GENERIC_CLOCKEVENTS
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
irqreturn_t arch_timer_interrupt(int irq, void *dummy)
{
@@ -44,11 +44,7 @@ irqreturn_t arch_timer_interrupt(int irq, void *dummy)
if (current->pid)
profile_tick(CPU_PROFILING);
- write_seqlock(&xtime_lock);
-
- do_timer(1);
-
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
update_process_times(user_mode(get_irq_regs()));
diff --git a/trunk/arch/microblaze/include/asm/futex.h b/trunk/arch/microblaze/include/asm/futex.h
index ad3fd61b2fe7..b0526d2716fa 100644
--- a/trunk/arch/microblaze/include/asm/futex.h
+++ b/trunk/arch/microblaze/include/asm/futex.h
@@ -29,7 +29,7 @@
})
static inline int
-futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -39,7 +39,7 @@ futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -94,31 +94,34 @@ futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int prev, cmp;
+ int ret = 0, cmp;
+ u32 prev;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- __asm__ __volatile__ ("1: lwx %0, %2, r0; \
- cmp %1, %0, %3; \
- beqi %1, 3f; \
- 2: swx %4, %2, r0; \
- addic %1, r0, 0; \
- bnei %1, 1b; \
+ __asm__ __volatile__ ("1: lwx %1, %3, r0; \
+ cmp %2, %1, %4; \
+ beqi %2, 3f; \
+ 2: swx %5, %3, r0; \
+ addic %2, r0, 0; \
+ bnei %2, 1b; \
3: \
.section .fixup,\"ax\"; \
4: brid 3b; \
- addik %0, r0, %5; \
+ addik %0, r0, %6; \
.previous; \
.section __ex_table,\"a\"; \
.word 1b,4b,2b,4b; \
.previous;" \
- : "=&r" (prev), "=&r"(cmp) \
+ : "+r" (ret), "=&r" (prev), "=&r"(cmp) \
: "r" (uaddr), "r" (oldval), "r" (newval), "i" (-EFAULT));
- return prev;
+ *uval = prev;
+ return ret;
}
#endif /* __KERNEL__ */
diff --git a/trunk/arch/microblaze/include/asm/pci-bridge.h b/trunk/arch/microblaze/include/asm/pci-bridge.h
index 0c68764ab547..10717669e0c2 100644
--- a/trunk/arch/microblaze/include/asm/pci-bridge.h
+++ b/trunk/arch/microblaze/include/asm/pci-bridge.h
@@ -104,11 +104,22 @@ struct pci_controller {
int global_number; /* PCI domain number */
};
+#ifdef CONFIG_PCI
static inline struct pci_controller *pci_bus_to_host(const struct pci_bus *bus)
{
return bus->sysdata;
}
+static inline struct device_node *pci_bus_to_OF_node(struct pci_bus *bus)
+{
+ struct pci_controller *host;
+
+ if (bus->self)
+ return pci_device_to_OF_node(bus->self);
+ host = pci_bus_to_host(bus);
+ return host ? host->dn : NULL;
+}
+
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* No specific ISA handling on ppc32 at this stage, it
@@ -116,6 +127,7 @@ static inline int isa_vaddr_is_ioport(void __iomem *address)
*/
return 0;
}
+#endif /* CONFIG_PCI */
/* These are used for config access before all the PCI probing
has been done. */
diff --git a/trunk/arch/microblaze/include/asm/prom.h b/trunk/arch/microblaze/include/asm/prom.h
index 2e72af078b05..d0890d36ef61 100644
--- a/trunk/arch/microblaze/include/asm/prom.h
+++ b/trunk/arch/microblaze/include/asm/prom.h
@@ -64,21 +64,6 @@ extern void kdump_move_device_tree(void);
/* CPU OF node matching */
struct device_node *of_get_cpu_node(int cpu, unsigned int *thread);
-/**
- * of_irq_map_pci - Resolve the interrupt for a PCI device
- * @pdev: the device whose interrupt is to be resolved
- * @out_irq: structure of_irq filled by this function
- *
- * This function resolves the PCI interrupt for a given PCI device. If a
- * device-node exists for a given pci_dev, it will use normal OF tree
- * walking. If not, it will implement standard swizzling and walk up the
- * PCI tree until an device-node is found, at which point it will finish
- * resolving using the OF tree walking.
- */
-struct pci_dev;
-struct of_irq;
-extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);
-
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
diff --git a/trunk/arch/microblaze/kernel/prom_parse.c b/trunk/arch/microblaze/kernel/prom_parse.c
index 9ae24f4b882b..47187cc2cf00 100644
--- a/trunk/arch/microblaze/kernel/prom_parse.c
+++ b/trunk/arch/microblaze/kernel/prom_parse.c
@@ -2,88 +2,11 @@
#include
#include
-#include
#include
#include
#include
#include
#include
-#include
-
-#ifdef CONFIG_PCI
-int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq)
-{
- struct device_node *dn, *ppnode;
- struct pci_dev *ppdev;
- u32 lspec;
- u32 laddr[3];
- u8 pin;
- int rc;
-
- /* Check if we have a device node, if yes, fallback to standard OF
- * parsing
- */
- dn = pci_device_to_OF_node(pdev);
- if (dn)
- return of_irq_map_one(dn, 0, out_irq);
-
- /* Ok, we don't, time to have fun. Let's start by building up an
- * interrupt spec. we assume #interrupt-cells is 1, which is standard
- * for PCI. If you do different, then don't use that routine.
- */
- rc = pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);
- if (rc != 0)
- return rc;
- /* No pin, exit */
- if (pin == 0)
- return -ENODEV;
-
- /* Now we walk up the PCI tree */
- lspec = pin;
- for (;;) {
- /* Get the pci_dev of our parent */
- ppdev = pdev->bus->self;
-
- /* Ouch, it's a host bridge... */
- if (ppdev == NULL) {
- struct pci_controller *host;
- host = pci_bus_to_host(pdev->bus);
- ppnode = host ? host->dn : NULL;
- /* No node for host bridge ? give up */
- if (ppnode == NULL)
- return -EINVAL;
- } else
- /* We found a P2P bridge, check if it has a node */
- ppnode = pci_device_to_OF_node(ppdev);
-
- /* Ok, we have found a parent with a device-node, hand over to
- * the OF parsing code.
- * We build a unit address from the linux device to be used for
- * resolution. Note that we use the linux bus number which may
- * not match your firmware bus numbering.
- * Fortunately, in most cases, interrupt-map-mask doesn't
- * include the bus number as part of the matching.
- * You should still be careful about that though if you intend
- * to rely on this function (you ship a firmware that doesn't
- * create device nodes for all PCI devices).
- */
- if (ppnode)
- break;
-
- /* We can only get here if we hit a P2P bridge with no node,
- * let's do standard swizzling and try again
- */
- lspec = pci_swizzle_interrupt_pin(pdev, lspec);
- pdev = ppdev;
- }
-
- laddr[0] = (pdev->bus->number << 16)
- | (pdev->devfn << 8);
- laddr[1] = laddr[2] = 0;
- return of_irq_map_raw(ppnode, &lspec, 1, laddr, out_irq);
-}
-EXPORT_SYMBOL_GPL(of_irq_map_pci);
-#endif /* CONFIG_PCI */
void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop,
unsigned long *busno, unsigned long *phys, unsigned long *size)
diff --git a/trunk/arch/microblaze/pci/pci-common.c b/trunk/arch/microblaze/pci/pci-common.c
index e363615d6798..1e01a1253631 100644
--- a/trunk/arch/microblaze/pci/pci-common.c
+++ b/trunk/arch/microblaze/pci/pci-common.c
@@ -29,6 +29,7 @@
#include
#include
#include
+#include
#include
#include
diff --git a/trunk/arch/mips/Kconfig b/trunk/arch/mips/Kconfig
index f5ecc0566bc2..d88983516e26 100644
--- a/trunk/arch/mips/Kconfig
+++ b/trunk/arch/mips/Kconfig
@@ -4,6 +4,7 @@ config MIPS
select HAVE_GENERIC_DMA_COHERENT
select HAVE_IDE
select HAVE_OPROFILE
+ select HAVE_IRQ_WORK
select HAVE_PERF_EVENTS
select PERF_USE_VMALLOC
select HAVE_ARCH_KGDB
@@ -208,6 +209,7 @@ config MACH_JZ4740
select ARCH_REQUIRE_GPIOLIB
select SYS_HAS_EARLY_PRINTK
select HAVE_PWM
+ select HAVE_CLK
config LASAT
bool "LASAT Networks platforms"
@@ -333,6 +335,8 @@ config PNX8550_STB810
config PMC_MSP
bool "PMC-Sierra MSP chipsets"
depends on EXPERIMENTAL
+ select CEVT_R4K
+ select CSRC_R4K
select DMA_NONCOHERENT
select SWAP_IO_SPACE
select NO_EXCEPT_FILL
diff --git a/trunk/arch/mips/alchemy/mtx-1/board_setup.c b/trunk/arch/mips/alchemy/mtx-1/board_setup.c
index 6398fa95905c..40b84b991191 100644
--- a/trunk/arch/mips/alchemy/mtx-1/board_setup.c
+++ b/trunk/arch/mips/alchemy/mtx-1/board_setup.c
@@ -54,8 +54,8 @@ int mtx1_pci_idsel(unsigned int devsel, int assert);
static void mtx1_reset(char *c)
{
- /* Hit BCSR.SYSTEM_CONTROL[SW_RST] */
- au_writel(0x00000000, 0xAE00001C);
+ /* Jump to the reset vector */
+ __asm__ __volatile__("jr\t%0"::"r"(0xbfc00000));
}
static void mtx1_power_off(void)
diff --git a/trunk/arch/mips/alchemy/mtx-1/platform.c b/trunk/arch/mips/alchemy/mtx-1/platform.c
index e30e42add697..956f946218c5 100644
--- a/trunk/arch/mips/alchemy/mtx-1/platform.c
+++ b/trunk/arch/mips/alchemy/mtx-1/platform.c
@@ -28,6 +28,8 @@
#include
#include
+#include
+
static struct gpio_keys_button mtx1_gpio_button[] = {
{
.gpio = 207,
@@ -140,10 +142,17 @@ static struct __initdata platform_device * mtx1_devs[] = {
&mtx1_mtd,
};
+static struct au1000_eth_platform_data mtx1_au1000_eth0_pdata = {
+ .phy_search_highest_addr = 1,
+ .phy1_search_mac0 = 1,
+};
+
static int __init mtx1_register_devices(void)
{
int rc;
+ au1xxx_override_eth_cfg(0, &mtx1_au1000_eth0_pdata);
+
rc = gpio_request(mtx1_gpio_button[0].gpio,
mtx1_gpio_button[0].desc);
if (rc < 0) {
diff --git a/trunk/arch/mips/alchemy/xxs1500/board_setup.c b/trunk/arch/mips/alchemy/xxs1500/board_setup.c
index b43c918925d3..80c521e5290d 100644
--- a/trunk/arch/mips/alchemy/xxs1500/board_setup.c
+++ b/trunk/arch/mips/alchemy/xxs1500/board_setup.c
@@ -36,8 +36,8 @@
static void xxs1500_reset(char *c)
{
- /* Hit BCSR.SYSTEM_CONTROL[SW_RST] */
- au_writel(0x00000000, 0xAE00001C);
+ /* Jump to the reset vector */
+ __asm__ __volatile__("jr\t%0"::"r"(0xbfc00000));
}
static void xxs1500_power_off(void)
diff --git a/trunk/arch/mips/include/asm/futex.h b/trunk/arch/mips/include/asm/futex.h
index b9cce90346cf..6ebf1734b411 100644
--- a/trunk/arch/mips/include/asm/futex.h
+++ b/trunk/arch/mips/include/asm/futex.h
@@ -75,7 +75,7 @@
}
static inline int
-futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -85,7 +85,7 @@ futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -132,11 +132,13 @@ futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int retval;
+ int ret = 0;
+ u32 val;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
if (cpu_has_llsc && R10000_LLSC_WAR) {
@@ -145,25 +147,25 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
" .set push \n"
" .set noat \n"
" .set mips3 \n"
- "1: ll %0, %2 \n"
- " bne %0, %z3, 3f \n"
+ "1: ll %1, %3 \n"
+ " bne %1, %z4, 3f \n"
" .set mips0 \n"
- " move $1, %z4 \n"
+ " move $1, %z5 \n"
" .set mips3 \n"
- "2: sc $1, %1 \n"
+ "2: sc $1, %2 \n"
" beqzl $1, 1b \n"
__WEAK_LLSC_MB
"3: \n"
" .set pop \n"
" .section .fixup,\"ax\" \n"
- "4: li %0, %5 \n"
+ "4: li %0, %6 \n"
" j 3b \n"
" .previous \n"
" .section __ex_table,\"a\" \n"
" "__UA_ADDR "\t1b, 4b \n"
" "__UA_ADDR "\t2b, 4b \n"
" .previous \n"
- : "=&r" (retval), "=R" (*uaddr)
+ : "+r" (ret), "=&r" (val), "=R" (*uaddr)
: "R" (*uaddr), "Jr" (oldval), "Jr" (newval), "i" (-EFAULT)
: "memory");
} else if (cpu_has_llsc) {
@@ -172,31 +174,32 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
" .set push \n"
" .set noat \n"
" .set mips3 \n"
- "1: ll %0, %2 \n"
- " bne %0, %z3, 3f \n"
+ "1: ll %1, %3 \n"
+ " bne %1, %z4, 3f \n"
" .set mips0 \n"
- " move $1, %z4 \n"
+ " move $1, %z5 \n"
" .set mips3 \n"
- "2: sc $1, %1 \n"
+ "2: sc $1, %2 \n"
" beqz $1, 1b \n"
__WEAK_LLSC_MB
"3: \n"
" .set pop \n"
" .section .fixup,\"ax\" \n"
- "4: li %0, %5 \n"
+ "4: li %0, %6 \n"
" j 3b \n"
" .previous \n"
" .section __ex_table,\"a\" \n"
" "__UA_ADDR "\t1b, 4b \n"
" "__UA_ADDR "\t2b, 4b \n"
" .previous \n"
- : "=&r" (retval), "=R" (*uaddr)
+ : "+r" (ret), "=&r" (val), "=R" (*uaddr)
: "R" (*uaddr), "Jr" (oldval), "Jr" (newval), "i" (-EFAULT)
: "memory");
} else
return -ENOSYS;
- return retval;
+ *uval = val;
+ return ret;
}
#endif
diff --git a/trunk/arch/mips/include/asm/perf_event.h b/trunk/arch/mips/include/asm/perf_event.h
index e00007cf8162..d0c77496c728 100644
--- a/trunk/arch/mips/include/asm/perf_event.h
+++ b/trunk/arch/mips/include/asm/perf_event.h
@@ -11,15 +11,5 @@
#ifndef __MIPS_PERF_EVENT_H__
#define __MIPS_PERF_EVENT_H__
-
-/*
- * MIPS performance counters do not raise NMI upon overflow, a regular
- * interrupt will be signaled. Hence we can do the pending perf event
- * work at the tail of the irq handler.
- */
-static inline void
-set_perf_event_pending(void)
-{
-}
-
+/* Leave it empty here. The file is required by linux/perf_event.h */
#endif /* __MIPS_PERF_EVENT_H__ */
diff --git a/trunk/arch/mips/kernel/ftrace.c b/trunk/arch/mips/kernel/ftrace.c
index 5a84a1f11231..94ca2b018af7 100644
--- a/trunk/arch/mips/kernel/ftrace.c
+++ b/trunk/arch/mips/kernel/ftrace.c
@@ -17,29 +17,13 @@
#include
#include
-/*
- * If the Instruction Pointer is in module space (0xc0000000), return true;
- * otherwise, it is in kernel space (0x80000000), return false.
- *
- * FIXME: This will not work when the kernel space and module space are the
- * same. If they are the same, we need to modify scripts/recordmcount.pl,
- * ftrace_make_nop/call() and the other related parts to ensure the
- * enabling/disabling of the calling site to _mcount is right for both kernel
- * and module.
- */
-
-static inline int in_module(unsigned long ip)
-{
- return ip & 0x40000000;
-}
+#include
#ifdef CONFIG_DYNAMIC_FTRACE
#define JAL 0x0c000000 /* jump & link: ip --> ra, jump to target */
#define ADDR_MASK 0x03ffffff /* op_code|addr : 31...26|25 ....0 */
-#define INSN_B_1F_4 0x10000004 /* b 1f; offset = 4 */
-#define INSN_B_1F_5 0x10000005 /* b 1f; offset = 5 */
#define INSN_NOP 0x00000000 /* nop */
#define INSN_JAL(addr) \
((unsigned int)(JAL | (((addr) >> 2) & ADDR_MASK)))
@@ -69,6 +53,20 @@ static inline void ftrace_dyn_arch_init_insns(void)
#endif
}
+/*
+ * Check if the address is in kernel space
+ *
+ * Clone core_kernel_text() from kernel/extable.c, but doesn't call
+ * init_kernel_text() for Ftrace doesn't trace functions in init sections.
+ */
+static inline int in_kernel_space(unsigned long ip)
+{
+ if (ip >= (unsigned long)_stext &&
+ ip <= (unsigned long)_etext)
+ return 1;
+ return 0;
+}
+
static int ftrace_modify_code(unsigned long ip, unsigned int new_code)
{
int faulted;
@@ -84,6 +82,42 @@ static int ftrace_modify_code(unsigned long ip, unsigned int new_code)
return 0;
}
+/*
+ * The details about the calling site of mcount on MIPS
+ *
+ * 1. For kernel:
+ *
+ * move at, ra
+ * jal _mcount --> nop
+ *
+ * 2. For modules:
+ *
+ * 2.1 For KBUILD_MCOUNT_RA_ADDRESS and CONFIG_32BIT
+ *
+ * lui v1, hi_16bit_of_mcount --> b 1f (0x10000005)
+ * addiu v1, v1, low_16bit_of_mcount
+ * move at, ra
+ * move $12, ra_address
+ * jalr v1
+ * sub sp, sp, 8
+ * 1: offset = 5 instructions
+ * 2.2 For the Other situations
+ *
+ * lui v1, hi_16bit_of_mcount --> b 1f (0x10000004)
+ * addiu v1, v1, low_16bit_of_mcount
+ * move at, ra
+ * jalr v1
+ * nop | move $12, ra_address | sub sp, sp, 8
+ * 1: offset = 4 instructions
+ */
+
+#if defined(KBUILD_MCOUNT_RA_ADDRESS) && defined(CONFIG_32BIT)
+#define MCOUNT_OFFSET_INSNS 5
+#else
+#define MCOUNT_OFFSET_INSNS 4
+#endif
+#define INSN_B_1F (0x10000000 | MCOUNT_OFFSET_INSNS)
+
int ftrace_make_nop(struct module *mod,
struct dyn_ftrace *rec, unsigned long addr)
{
@@ -91,39 +125,11 @@ int ftrace_make_nop(struct module *mod,
unsigned long ip = rec->ip;
/*
- * We have compiled module with -mlong-calls, but compiled the kernel
- * without it, we need to cope with them respectively.
+ * If ip is in kernel space, no long call, otherwise, long call is
+ * needed.
*/
- if (in_module(ip)) {
-#if defined(KBUILD_MCOUNT_RA_ADDRESS) && defined(CONFIG_32BIT)
- /*
- * lui v1, hi_16bit_of_mcount --> b 1f (0x10000005)
- * addiu v1, v1, low_16bit_of_mcount
- * move at, ra
- * move $12, ra_address
- * jalr v1
- * sub sp, sp, 8
- * 1: offset = 5 instructions
- */
- new = INSN_B_1F_5;
-#else
- /*
- * lui v1, hi_16bit_of_mcount --> b 1f (0x10000004)
- * addiu v1, v1, low_16bit_of_mcount
- * move at, ra
- * jalr v1
- * nop | move $12, ra_address | sub sp, sp, 8
- * 1: offset = 4 instructions
- */
- new = INSN_B_1F_4;
-#endif
- } else {
- /*
- * move at, ra
- * jal _mcount --> nop
- */
- new = INSN_NOP;
- }
+ new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F;
+
return ftrace_modify_code(ip, new);
}
@@ -132,8 +138,8 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
unsigned int new;
unsigned long ip = rec->ip;
- /* ip, module: 0xc0000000, kernel: 0x80000000 */
- new = in_module(ip) ? insn_lui_v1_hi16_mcount : insn_jal_ftrace_caller;
+ new = in_kernel_space(ip) ? insn_jal_ftrace_caller :
+ insn_lui_v1_hi16_mcount;
return ftrace_modify_code(ip, new);
}
@@ -190,29 +196,25 @@ int ftrace_disable_ftrace_graph_caller(void)
#define S_R_SP (0xafb0 << 16) /* s{d,w} R, offset(sp) */
#define OFFSET_MASK 0xffff /* stack offset range: 0 ~ PT_SIZE */
-unsigned long ftrace_get_parent_addr(unsigned long self_addr,
- unsigned long parent,
- unsigned long parent_addr,
- unsigned long fp)
+unsigned long ftrace_get_parent_ra_addr(unsigned long self_ra, unsigned long
+ old_parent_ra, unsigned long parent_ra_addr, unsigned long fp)
{
- unsigned long sp, ip, ra;
+ unsigned long sp, ip, tmp;
unsigned int code;
int faulted;
/*
- * For module, move the ip from calling site of mcount to the
- * instruction "lui v1, hi_16bit_of_mcount"(offset is 20), but for
- * kernel, move to the instruction "move ra, at"(offset is 12)
+ * For module, move the ip from the return address after the
+ * instruction "lui v1, hi_16bit_of_mcount"(offset is 24), but for
+ * kernel, move after the instruction "move ra, at"(offset is 16)
*/
- ip = self_addr - (in_module(self_addr) ? 20 : 12);
+ ip = self_ra - (in_kernel_space(self_ra) ? 16 : 24);
/*
* search the text until finding the non-store instruction or "s{d,w}
* ra, offset(sp)" instruction
*/
do {
- ip -= 4;
-
/* get the code at "ip": code = *(unsigned int *)ip; */
safe_load_code(code, ip, faulted);
@@ -224,18 +226,20 @@ unsigned long ftrace_get_parent_addr(unsigned long self_addr,
* store the ra on the stack
*/
if ((code & S_R_SP) != S_R_SP)
- return parent_addr;
+ return parent_ra_addr;
- } while (((code & S_RA_SP) != S_RA_SP));
+ /* Move to the next instruction */
+ ip -= 4;
+ } while ((code & S_RA_SP) != S_RA_SP);
sp = fp + (code & OFFSET_MASK);
- /* ra = *(unsigned long *)sp; */
- safe_load_stack(ra, sp, faulted);
+ /* tmp = *(unsigned long *)sp; */
+ safe_load_stack(tmp, sp, faulted);
if (unlikely(faulted))
return 0;
- if (ra == parent)
+ if (tmp == old_parent_ra)
return sp;
return 0;
}
@@ -246,21 +250,21 @@ unsigned long ftrace_get_parent_addr(unsigned long self_addr,
* Hook the return address and push it in the stack of return addrs
* in current thread info.
*/
-void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
+void prepare_ftrace_return(unsigned long *parent_ra_addr, unsigned long self_ra,
unsigned long fp)
{
- unsigned long old;
+ unsigned long old_parent_ra;
struct ftrace_graph_ent trace;
unsigned long return_hooker = (unsigned long)
&return_to_handler;
- int faulted;
+ int faulted, insns;
if (unlikely(atomic_read(&current->tracing_graph_pause)))
return;
/*
- * "parent" is the stack address saved the return address of the caller
- * of _mcount.
+ * "parent_ra_addr" is the stack address saved the return address of
+ * the caller of _mcount.
*
* if the gcc < 4.5, a leaf function does not save the return address
* in the stack address, so, we "emulate" one in _mcount's stack space,
@@ -275,37 +279,44 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
* do it in ftrace_graph_caller of mcount.S.
*/
- /* old = *parent; */
- safe_load_stack(old, parent, faulted);
+ /* old_parent_ra = *parent_ra_addr; */
+ safe_load_stack(old_parent_ra, parent_ra_addr, faulted);
if (unlikely(faulted))
goto out;
#ifndef KBUILD_MCOUNT_RA_ADDRESS
- parent = (unsigned long *)ftrace_get_parent_addr(self_addr, old,
- (unsigned long)parent, fp);
+ parent_ra_addr = (unsigned long *)ftrace_get_parent_ra_addr(self_ra,
+ old_parent_ra, (unsigned long)parent_ra_addr, fp);
/*
* If fails when getting the stack address of the non-leaf function's
* ra, stop function graph tracer and return
*/
- if (parent == 0)
+ if (parent_ra_addr == 0)
goto out;
#endif
- /* *parent = return_hooker; */
- safe_store_stack(return_hooker, parent, faulted);
+ /* *parent_ra_addr = return_hooker; */
+ safe_store_stack(return_hooker, parent_ra_addr, faulted);
if (unlikely(faulted))
goto out;
- if (ftrace_push_return_trace(old, self_addr, &trace.depth, fp) ==
- -EBUSY) {
- *parent = old;
+ if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp)
+ == -EBUSY) {
+ *parent_ra_addr = old_parent_ra;
return;
}
- trace.func = self_addr;
+ /*
+ * Get the recorded ip of the current mcount calling site in the
+ * __mcount_loc section, which will be used to filter the function
+ * entries configured through the tracing/set_graph_function interface.
+ */
+
+ insns = in_kernel_space(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
+ trace.func = self_ra - (MCOUNT_INSN_SIZE * insns);
/* Only trace if the calling function expects to */
if (!ftrace_graph_entry(&trace)) {
current->curr_ret_stack--;
- *parent = old;
+ *parent_ra_addr = old_parent_ra;
}
return;
out:
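
In the ftrace rework above, whether a call site sits in kernel or module space is now decided by address range (in_kernel_space()) instead of a hard-coded address bit, and the replacement branch is derived from the number of instructions to skip: MIPS "b" is encoded as beq $zero,$zero with a signed 16-bit word offset counted from the delay slot, hence 0x10000000 | N skips N instructions. A sketch of that encoding assumption (MIPS_INSN_B is an illustrative macro name):

    /* "b 1f" skipping <words> instructions after the delay slot */
    #define MIPS_INSN_B(words)   (0x10000000 | ((words) & 0xffff))

    /* matches the patch's INSN_B_1F values:
     *   MIPS_INSN_B(4) == 0x10000004, MIPS_INSN_B(5) == 0x10000005 */
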
diff --git a/trunk/arch/mips/kernel/perf_event.c b/trunk/arch/mips/kernel/perf_event.c
index 2b7f3f703b83..a8244854d3dc 100644
--- a/trunk/arch/mips/kernel/perf_event.c
+++ b/trunk/arch/mips/kernel/perf_event.c
@@ -161,41 +161,6 @@ mipspmu_event_set_period(struct perf_event *event,
return ret;
}
-static int mipspmu_enable(struct perf_event *event)
-{
- struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
- struct hw_perf_event *hwc = &event->hw;
- int idx;
- int err = 0;
-
- /* To look for a free counter for this event. */
- idx = mipspmu->alloc_counter(cpuc, hwc);
- if (idx < 0) {
- err = idx;
- goto out;
- }
-
- /*
- * If there is an event in the counter we are going to use then
- * make sure it is disabled.
- */
- event->hw.idx = idx;
- mipspmu->disable_event(idx);
- cpuc->events[idx] = event;
-
- /* Set the period for the event. */
- mipspmu_event_set_period(event, hwc, idx);
-
- /* Enable the event. */
- mipspmu->enable_event(hwc, idx);
-
- /* Propagate our changes to the userspace mapping. */
- perf_event_update_userpage(event);
-
-out:
- return err;
-}
-
static void mipspmu_event_update(struct perf_event *event,
struct hw_perf_event *hwc,
int idx)
@@ -204,7 +169,7 @@ static void mipspmu_event_update(struct perf_event *event,
unsigned long flags;
int shift = 64 - TOTAL_BITS;
s64 prev_raw_count, new_raw_count;
- s64 delta;
+ u64 delta;
again:
prev_raw_count = local64_read(&hwc->prev_count);
@@ -231,32 +196,90 @@ static void mipspmu_event_update(struct perf_event *event,
return;
}
-static void mipspmu_disable(struct perf_event *event)
+static void mipspmu_start(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (!mipspmu)
+ return;
+
+ if (flags & PERF_EF_RELOAD)
+ WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
+
+ hwc->state = 0;
+
+ /* Set the period for the event. */
+ mipspmu_event_set_period(event, hwc, hwc->idx);
+
+ /* Enable the event. */
+ mipspmu->enable_event(hwc, hwc->idx);
+}
+
+static void mipspmu_stop(struct perf_event *event, int flags)
+{
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (!mipspmu)
+ return;
+
+ if (!(hwc->state & PERF_HES_STOPPED)) {
+ /* We are working on a local event. */
+ mipspmu->disable_event(hwc->idx);
+ barrier();
+ mipspmu_event_update(event, hwc, hwc->idx);
+ hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
+ }
+}
+
+static int mipspmu_add(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
- int idx = hwc->idx;
+ int idx;
+ int err = 0;
+ perf_pmu_disable(event->pmu);
- WARN_ON(idx < 0 || idx >= mipspmu->num_counters);
+ /* To look for a free counter for this event. */
+ idx = mipspmu->alloc_counter(cpuc, hwc);
+ if (idx < 0) {
+ err = idx;
+ goto out;
+ }
- /* We are working on a local event. */
+ /*
+ * If there is an event in the counter we are going to use then
+ * make sure it is disabled.
+ */
+ event->hw.idx = idx;
mipspmu->disable_event(idx);
+ cpuc->events[idx] = event;
- barrier();
-
- mipspmu_event_update(event, hwc, idx);
- cpuc->events[idx] = NULL;
- clear_bit(idx, cpuc->used_mask);
+ hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+ if (flags & PERF_EF_START)
+ mipspmu_start(event, PERF_EF_RELOAD);
+ /* Propagate our changes to the userspace mapping. */
perf_event_update_userpage(event);
+
+out:
+ perf_pmu_enable(event->pmu);
+ return err;
}
-static void mipspmu_unthrottle(struct perf_event *event)
+static void mipspmu_del(struct perf_event *event, int flags)
{
+ struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
+ int idx = hwc->idx;
- mipspmu->enable_event(hwc, hwc->idx);
+ WARN_ON(idx < 0 || idx >= mipspmu->num_counters);
+
+ mipspmu_stop(event, PERF_EF_UPDATE);
+ cpuc->events[idx] = NULL;
+ clear_bit(idx, cpuc->used_mask);
+
+ perf_event_update_userpage(event);
}
static void mipspmu_read(struct perf_event *event)
@@ -270,12 +293,17 @@ static void mipspmu_read(struct perf_event *event)
mipspmu_event_update(event, hwc, hwc->idx);
}
-static struct pmu pmu = {
- .enable = mipspmu_enable,
- .disable = mipspmu_disable,
- .unthrottle = mipspmu_unthrottle,
- .read = mipspmu_read,
-};
+static void mipspmu_enable(struct pmu *pmu)
+{
+ if (mipspmu)
+ mipspmu->start();
+}
+
+static void mipspmu_disable(struct pmu *pmu)
+{
+ if (mipspmu)
+ mipspmu->stop();
+}
static atomic_t active_events = ATOMIC_INIT(0);
static DEFINE_MUTEX(pmu_reserve_mutex);
@@ -318,6 +346,82 @@ static void mipspmu_free_irq(void)
perf_irq = save_perf_irq;
}
+/*
+ * mipsxx/rm9000/loongson2 have different performance counters, they have
+ * specific low-level init routines.
+ */
+static void reset_counters(void *arg);
+static int __hw_perf_event_init(struct perf_event *event);
+
+static void hw_perf_event_destroy(struct perf_event *event)
+{
+ if (atomic_dec_and_mutex_lock(&active_events,
+ &pmu_reserve_mutex)) {
+ /*
+ * We must not call the destroy function with interrupts
+ * disabled.
+ */
+ on_each_cpu(reset_counters,
+ (void *)(long)mipspmu->num_counters, 1);
+ mipspmu_free_irq();
+ mutex_unlock(&pmu_reserve_mutex);
+ }
+}
+
+static int mipspmu_event_init(struct perf_event *event)
+{
+ int err = 0;
+
+ switch (event->attr.type) {
+ case PERF_TYPE_RAW:
+ case PERF_TYPE_HARDWARE:
+ case PERF_TYPE_HW_CACHE:
+ break;
+
+ default:
+ return -ENOENT;
+ }
+
+ if (!mipspmu || event->cpu >= nr_cpumask_bits ||
+ (event->cpu >= 0 && !cpu_online(event->cpu)))
+ return -ENODEV;
+
+ if (!atomic_inc_not_zero(&active_events)) {
+ if (atomic_read(&active_events) > MIPS_MAX_HWEVENTS) {
+ atomic_dec(&active_events);
+ return -ENOSPC;
+ }
+
+ mutex_lock(&pmu_reserve_mutex);
+ if (atomic_read(&active_events) == 0)
+ err = mipspmu_get_irq();
+
+ if (!err)
+ atomic_inc(&active_events);
+ mutex_unlock(&pmu_reserve_mutex);
+ }
+
+ if (err)
+ return err;
+
+ err = __hw_perf_event_init(event);
+ if (err)
+ hw_perf_event_destroy(event);
+
+ return err;
+}
+
+static struct pmu pmu = {
+ .pmu_enable = mipspmu_enable,
+ .pmu_disable = mipspmu_disable,
+ .event_init = mipspmu_event_init,
+ .add = mipspmu_add,
+ .del = mipspmu_del,
+ .start = mipspmu_start,
+ .stop = mipspmu_stop,
+ .read = mipspmu_read,
+};
+
static inline unsigned int
mipspmu_perf_event_encode(const struct mips_perf_event *pev)
{
@@ -382,8 +486,9 @@ static int validate_event(struct cpu_hw_events *cpuc,
{
struct hw_perf_event fake_hwc = event->hw;
- if (event->pmu && event->pmu != &pmu)
- return 0;
+ /* Allow mixed event group. So return 1 to pass validation. */
+ if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
+ return 1;
return mipspmu->alloc_counter(cpuc, &fake_hwc) >= 0;
}
@@ -409,73 +514,6 @@ static int validate_group(struct perf_event *event)
return 0;
}
-/*
- * mipsxx/rm9000/loongson2 have different performance counters, they have
- * specific low-level init routines.
- */
-static void reset_counters(void *arg);
-static int __hw_perf_event_init(struct perf_event *event);
-
-static void hw_perf_event_destroy(struct perf_event *event)
-{
- if (atomic_dec_and_mutex_lock(&active_events,
- &pmu_reserve_mutex)) {
- /*
- * We must not call the destroy function with interrupts
- * disabled.
- */
- on_each_cpu(reset_counters,
- (void *)(long)mipspmu->num_counters, 1);
- mipspmu_free_irq();
- mutex_unlock(&pmu_reserve_mutex);
- }
-}
-
-const struct pmu *hw_perf_event_init(struct perf_event *event)
-{
- int err = 0;
-
- if (!mipspmu || event->cpu >= nr_cpumask_bits ||
- (event->cpu >= 0 && !cpu_online(event->cpu)))
- return ERR_PTR(-ENODEV);
-
- if (!atomic_inc_not_zero(&active_events)) {
- if (atomic_read(&active_events) > MIPS_MAX_HWEVENTS) {
- atomic_dec(&active_events);
- return ERR_PTR(-ENOSPC);
- }
-
- mutex_lock(&pmu_reserve_mutex);
- if (atomic_read(&active_events) == 0)
- err = mipspmu_get_irq();
-
- if (!err)
- atomic_inc(&active_events);
- mutex_unlock(&pmu_reserve_mutex);
- }
-
- if (err)
- return ERR_PTR(err);
-
- err = __hw_perf_event_init(event);
- if (err)
- hw_perf_event_destroy(event);
-
- return err ? ERR_PTR(err) : &pmu;
-}
-
-void hw_perf_enable(void)
-{
- if (mipspmu)
- mipspmu->start();
-}
-
-void hw_perf_disable(void)
-{
- if (mipspmu)
- mipspmu->stop();
-}
-
/* This is needed by specific irq handlers in perf_event_*.c */
static void
handle_associated_event(struct cpu_hw_events *cpuc,
@@ -496,21 +534,13 @@ handle_associated_event(struct cpu_hw_events *cpuc,
#include "perf_event_mipsxx.c"
/* Callchain handling code. */
-static inline void
-callchain_store(struct perf_callchain_entry *entry,
- u64 ip)
-{
- if (entry->nr < PERF_MAX_STACK_DEPTH)
- entry->ip[entry->nr++] = ip;
-}
/*
* Leave userspace callchain empty for now. When we find a way to trace
* the user stack callchains, we add here.
*/
-static void
-perf_callchain_user(struct pt_regs *regs,
- struct perf_callchain_entry *entry)
+void perf_callchain_user(struct perf_callchain_entry *entry,
+ struct pt_regs *regs)
{
}
@@ -523,23 +553,21 @@ static void save_raw_perf_callchain(struct perf_callchain_entry *entry,
while (!kstack_end(sp)) {
addr = *sp++;
if (__kernel_text_address(addr)) {
- callchain_store(entry, addr);
+ perf_callchain_store(entry, addr);
if (entry->nr >= PERF_MAX_STACK_DEPTH)
break;
}
}
}
-static void
-perf_callchain_kernel(struct pt_regs *regs,
- struct perf_callchain_entry *entry)
+void perf_callchain_kernel(struct perf_callchain_entry *entry,
+ struct pt_regs *regs)
{
unsigned long sp = regs->regs[29];
#ifdef CONFIG_KALLSYMS
unsigned long ra = regs->regs[31];
unsigned long pc = regs->cp0_epc;
- callchain_store(entry, PERF_CONTEXT_KERNEL);
if (raw_show_trace || !__kernel_text_address(pc)) {
unsigned long stack_page =
(unsigned long)task_stack_page(current);
@@ -549,53 +577,12 @@ perf_callchain_kernel(struct pt_regs *regs,
return;
}
do {
- callchain_store(entry, pc);
+ perf_callchain_store(entry, pc);
if (entry->nr >= PERF_MAX_STACK_DEPTH)
break;
pc = unwind_stack(current, &sp, pc, &ra);
} while (pc);
#else
- callchain_store(entry, PERF_CONTEXT_KERNEL);
save_raw_perf_callchain(entry, sp);
#endif
}
-
-static void
-perf_do_callchain(struct pt_regs *regs,
- struct perf_callchain_entry *entry)
-{
- int is_user;
-
- if (!regs)
- return;
-
- is_user = user_mode(regs);
-
- if (!current || !current->pid)
- return;
-
- if (is_user && current->state != TASK_RUNNING)
- return;
-
- if (!is_user) {
- perf_callchain_kernel(regs, entry);
- if (current->mm)
- regs = task_pt_regs(current);
- else
- regs = NULL;
- }
- if (regs)
- perf_callchain_user(regs, entry);
-}
-
-static DEFINE_PER_CPU(struct perf_callchain_entry, pmc_irq_entry);
-
-struct perf_callchain_entry *
-perf_callchain(struct pt_regs *regs)
-{
- struct perf_callchain_entry *entry = &__get_cpu_var(pmc_irq_entry);
-
- entry->nr = 0;
- perf_do_callchain(regs, entry);
- return entry;
-}
diff --git a/trunk/arch/mips/kernel/perf_event_mipsxx.c b/trunk/arch/mips/kernel/perf_event_mipsxx.c
index 183e0d226669..d9a7db78ed62 100644
--- a/trunk/arch/mips/kernel/perf_event_mipsxx.c
+++ b/trunk/arch/mips/kernel/perf_event_mipsxx.c
@@ -696,7 +696,7 @@ static int mipsxx_pmu_handle_shared_irq(void)
* interrupt, not NMI.
*/
if (handled == IRQ_HANDLED)
- perf_event_do_pending();
+ irq_work_run();
#ifdef CONFIG_MIPS_MT_SMP
read_unlock(&pmuint_rwlock);
@@ -1045,6 +1045,8 @@ init_hw_perf_events(void)
"CPU, irq %d%s\n", mipspmu->name, counters, irq,
irq < 0 ? " (share with timer interrupt)" : "");
+ perf_pmu_register(&pmu, "cpu", PERF_TYPE_RAW);
+
return 0;
}
early_initcall(init_hw_perf_events);
diff --git a/trunk/arch/mips/kernel/signal.c b/trunk/arch/mips/kernel/signal.c
index 5922342bca39..dbbe0ce48d89 100644
--- a/trunk/arch/mips/kernel/signal.c
+++ b/trunk/arch/mips/kernel/signal.c
@@ -84,7 +84,7 @@ static int protected_save_fp_context(struct sigcontext __user *sc)
static int protected_restore_fp_context(struct sigcontext __user *sc)
{
- int err, tmp;
+ int err, tmp __maybe_unused;
while (1) {
lock_fpu_owner();
own_fpu_inatomic(0);
diff --git a/trunk/arch/mips/kernel/signal32.c b/trunk/arch/mips/kernel/signal32.c
index a0ed0e052b2e..aae986613795 100644
--- a/trunk/arch/mips/kernel/signal32.c
+++ b/trunk/arch/mips/kernel/signal32.c
@@ -115,7 +115,7 @@ static int protected_save_fp_context32(struct sigcontext32 __user *sc)
static int protected_restore_fp_context32(struct sigcontext32 __user *sc)
{
- int err, tmp;
+ int err, tmp __maybe_unused;
while (1) {
lock_fpu_owner();
own_fpu_inatomic(0);
diff --git a/trunk/arch/mips/kernel/smp.c b/trunk/arch/mips/kernel/smp.c
index 383aeb95cb49..32a256101082 100644
--- a/trunk/arch/mips/kernel/smp.c
+++ b/trunk/arch/mips/kernel/smp.c
@@ -193,6 +193,22 @@ void __devinit smp_prepare_boot_cpu(void)
*/
static struct task_struct *cpu_idle_thread[NR_CPUS];
+struct create_idle {
+ struct work_struct work;
+ struct task_struct *idle;
+ struct completion done;
+ int cpu;
+};
+
+static void __cpuinit do_fork_idle(struct work_struct *work)
+{
+ struct create_idle *c_idle =
+ container_of(work, struct create_idle, work);
+
+ c_idle->idle = fork_idle(c_idle->cpu);
+ complete(&c_idle->done);
+}
+
int __cpuinit __cpu_up(unsigned int cpu)
{
struct task_struct *idle;
@@ -203,8 +219,19 @@ int __cpuinit __cpu_up(unsigned int cpu)
* Linux can schedule processes on this slave.
*/
if (!cpu_idle_thread[cpu]) {
- idle = fork_idle(cpu);
- cpu_idle_thread[cpu] = idle;
+ /*
+ * Schedule work item to avoid forking user task
+ * Ported from arch/x86/kernel/smpboot.c
+ */
+ struct create_idle c_idle = {
+ .cpu = cpu,
+ .done = COMPLETION_INITIALIZER_ONSTACK(c_idle.done),
+ };
+
+ INIT_WORK_ONSTACK(&c_idle.work, do_fork_idle);
+ schedule_work(&c_idle.work);
+ wait_for_completion(&c_idle.done);
+ idle = cpu_idle_thread[cpu] = c_idle.idle;
if (IS_ERR(idle))
panic(KERN_ERR "Fork failed for CPU %d", cpu);
diff --git a/trunk/arch/mips/kernel/syscall.c b/trunk/arch/mips/kernel/syscall.c
index 1dc6edff45e0..58beabf50b3c 100644
--- a/trunk/arch/mips/kernel/syscall.c
+++ b/trunk/arch/mips/kernel/syscall.c
@@ -383,12 +383,11 @@ save_static_function(sys_sysmips);
static int __used noinline
_sys_sysmips(nabi_no_regargs struct pt_regs regs)
{
- long cmd, arg1, arg2, arg3;
+ long cmd, arg1, arg2;
cmd = regs.regs[4];
arg1 = regs.regs[5];
arg2 = regs.regs[6];
- arg3 = regs.regs[7];
switch (cmd) {
case MIPS_ATOMIC_SET:
@@ -405,7 +404,7 @@ _sys_sysmips(nabi_no_regargs struct pt_regs regs)
if (arg1 & 2)
set_thread_flag(TIF_LOGADE);
else
- clear_thread_flag(TIF_FIXADE);
+ clear_thread_flag(TIF_LOGADE);
return 0;
diff --git a/trunk/arch/mips/kernel/vmlinux.lds.S b/trunk/arch/mips/kernel/vmlinux.lds.S
index 570607b376b5..832afbb87588 100644
--- a/trunk/arch/mips/kernel/vmlinux.lds.S
+++ b/trunk/arch/mips/kernel/vmlinux.lds.S
@@ -115,7 +115,7 @@ SECTIONS
EXIT_DATA
}
- PERCPU(PAGE_SIZE)
+ PERCPU(1 << CONFIG_MIPS_L1_CACHE_SHIFT, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .;
/* freed after init ends here */
diff --git a/trunk/arch/mips/kernel/vpe.c b/trunk/arch/mips/kernel/vpe.c
index 6a1fdfef8fde..ab52b7cf3b6b 100644
--- a/trunk/arch/mips/kernel/vpe.c
+++ b/trunk/arch/mips/kernel/vpe.c
@@ -148,9 +148,9 @@ struct {
spinlock_t tc_list_lock;
struct list_head tc_list; /* Thread contexts */
} vpecontrol = {
- .vpe_list_lock = SPIN_LOCK_UNLOCKED,
+ .vpe_list_lock = __SPIN_LOCK_UNLOCKED(vpe_list_lock),
.vpe_list = LIST_HEAD_INIT(vpecontrol.vpe_list),
- .tc_list_lock = SPIN_LOCK_UNLOCKED,
+ .tc_list_lock = __SPIN_LOCK_UNLOCKED(tc_list_lock),
.tc_list = LIST_HEAD_INIT(vpecontrol.tc_list)
};
diff --git a/trunk/arch/mips/loongson/Kconfig b/trunk/arch/mips/loongson/Kconfig
index 6e1b77fec7ea..aca93eed8779 100644
--- a/trunk/arch/mips/loongson/Kconfig
+++ b/trunk/arch/mips/loongson/Kconfig
@@ -1,6 +1,7 @@
+if MACH_LOONGSON
+
choice
prompt "Machine Type"
- depends on MACH_LOONGSON
config LEMOTE_FULOONG2E
bool "Lemote Fuloong(2e) mini-PC"
@@ -87,3 +88,5 @@ config LOONGSON_UART_BASE
config LOONGSON_MC146818
bool
default n
+
+endif # MACH_LOONGSON
diff --git a/trunk/arch/mips/loongson/common/cmdline.c b/trunk/arch/mips/loongson/common/cmdline.c
index 1a06defc4f7f..353e1d2e41a5 100644
--- a/trunk/arch/mips/loongson/common/cmdline.c
+++ b/trunk/arch/mips/loongson/common/cmdline.c
@@ -44,10 +44,5 @@ void __init prom_init_cmdline(void)
strcat(arcs_cmdline, " ");
}
- if ((strstr(arcs_cmdline, "console=")) == NULL)
- strcat(arcs_cmdline, " console=ttyS0,115200");
- if ((strstr(arcs_cmdline, "root=")) == NULL)
- strcat(arcs_cmdline, " root=/dev/hda1");
-
prom_init_machtype();
}
diff --git a/trunk/arch/mips/loongson/common/machtype.c b/trunk/arch/mips/loongson/common/machtype.c
index 81fbe6b73f91..2efd5d9dee27 100644
--- a/trunk/arch/mips/loongson/common/machtype.c
+++ b/trunk/arch/mips/loongson/common/machtype.c
@@ -41,7 +41,7 @@ void __weak __init mach_prom_init_machtype(void)
void __init prom_init_machtype(void)
{
- char *p, str[MACHTYPE_LEN];
+ char *p, str[MACHTYPE_LEN + 1];
int machtype = MACH_LEMOTE_FL2E;
mips_machtype = LOONGSON_MACHTYPE;
@@ -53,6 +53,7 @@ void __init prom_init_machtype(void)
}
p += strlen("machtype=");
strncpy(str, p, MACHTYPE_LEN);
+ str[MACHTYPE_LEN] = '\0';
p = strstr(str, " ");
if (p)
*p = '\0';
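
The buffer grows by one byte and gets an explicit terminator because strncpy() does not NUL-terminate when the source string is at least MACHTYPE_LEN characters long, which would let the later strstr() run off the end. The pattern in isolation (generic C sketch; the bound value is assumed for illustration):

#include <string.h>

#define BOUND 50                        /* assumed length limit */

static void copy_bounded(char dst[BOUND + 1], const char *src)
{
        strncpy(dst, src, BOUND);       /* may copy BOUND bytes, no NUL */
        dst[BOUND] = '\0';              /* always terminate explicitly */
}
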
diff --git a/trunk/arch/mips/math-emu/ieee754int.h b/trunk/arch/mips/math-emu/ieee754int.h
index 2701d9500959..2a7d43f4f161 100644
--- a/trunk/arch/mips/math-emu/ieee754int.h
+++ b/trunk/arch/mips/math-emu/ieee754int.h
@@ -70,7 +70,7 @@
#define COMPXSP \
- unsigned xm; int xe; int xs; int xc
+ unsigned xm; int xe; int xs __maybe_unused; int xc
#define COMPYSP \
unsigned ym; int ye; int ys; int yc
@@ -104,7 +104,7 @@
#define COMPXDP \
-u64 xm; int xe; int xs; int xc
+u64 xm; int xe; int xs __maybe_unused; int xc
#define COMPYDP \
u64 ym; int ye; int ys; int yc
diff --git a/trunk/arch/mips/mm/init.c b/trunk/arch/mips/mm/init.c
index 2efcbd24c82f..279599e9a779 100644
--- a/trunk/arch/mips/mm/init.c
+++ b/trunk/arch/mips/mm/init.c
@@ -324,7 +324,7 @@ int page_is_ram(unsigned long pagenr)
void __init paging_init(void)
{
unsigned long max_zone_pfns[MAX_NR_ZONES];
- unsigned long lastpfn;
+ unsigned long lastpfn __maybe_unused;
pagetable_init();
diff --git a/trunk/arch/mips/mm/tlbex.c b/trunk/arch/mips/mm/tlbex.c
index 083d3412d0bc..04f9e17db9d0 100644
--- a/trunk/arch/mips/mm/tlbex.c
+++ b/trunk/arch/mips/mm/tlbex.c
@@ -109,6 +109,8 @@ static bool scratchpad_available(void)
static int scratchpad_offset(int i)
{
BUG();
+ /* Really unreachable, but evidently some GCC want this. */
+ return 0;
}
#endif
/*
diff --git a/trunk/arch/mips/pci/ops-pmcmsp.c b/trunk/arch/mips/pci/ops-pmcmsp.c
index b7c03d80c88c..68798f869c0f 100644
--- a/trunk/arch/mips/pci/ops-pmcmsp.c
+++ b/trunk/arch/mips/pci/ops-pmcmsp.c
@@ -308,7 +308,7 @@ static struct resource pci_mem_resource = {
* RETURNS: PCIBIOS_SUCCESSFUL - success
*
****************************************************************************/
-static int bpci_interrupt(int irq, void *dev_id)
+static irqreturn_t bpci_interrupt(int irq, void *dev_id)
{
struct msp_pci_regs *preg = (void *)PCI_BASE_REG;
unsigned int stat = preg->if_status;
@@ -326,7 +326,7 @@ static int bpci_interrupt(int irq, void *dev_id)
/* write to clear all asserted interrupts */
preg->if_status = stat;
- return PCIBIOS_SUCCESSFUL;
+ return IRQ_HANDLED;
}
/*****************************************************************************
diff --git a/trunk/arch/mips/pmc-sierra/Kconfig b/trunk/arch/mips/pmc-sierra/Kconfig
index c139988bb85d..8d798497c614 100644
--- a/trunk/arch/mips/pmc-sierra/Kconfig
+++ b/trunk/arch/mips/pmc-sierra/Kconfig
@@ -4,15 +4,11 @@ choice
config PMC_MSP4200_EVAL
bool "PMC-Sierra MSP4200 Eval Board"
- select CEVT_R4K
- select CSRC_R4K
select IRQ_MSP_SLP
select HW_HAS_PCI
config PMC_MSP4200_GW
bool "PMC-Sierra MSP4200 VoIP Gateway"
- select CEVT_R4K
- select CSRC_R4K
select IRQ_MSP_SLP
select HW_HAS_PCI
diff --git a/trunk/arch/mips/pmc-sierra/msp71xx/msp_time.c b/trunk/arch/mips/pmc-sierra/msp71xx/msp_time.c
index cca64e15f57f..01df84ce31e2 100644
--- a/trunk/arch/mips/pmc-sierra/msp71xx/msp_time.c
+++ b/trunk/arch/mips/pmc-sierra/msp71xx/msp_time.c
@@ -81,7 +81,7 @@ void __init plat_time_init(void)
mips_hpt_frequency = cpu_rate/2;
}
-unsigned int __init get_c0_compare_int(void)
+unsigned int __cpuinit get_c0_compare_int(void)
{
return MSP_INT_VPE0_TIMER;
}
diff --git a/trunk/arch/mn10300/include/asm/atomic.h b/trunk/arch/mn10300/include/asm/atomic.h
index 92d2f9298e38..9d773a639513 100644
--- a/trunk/arch/mn10300/include/asm/atomic.h
+++ b/trunk/arch/mn10300/include/asm/atomic.h
@@ -139,7 +139,7 @@ static inline unsigned long __cmpxchg(volatile unsigned long *m,
* Atomically reads the value of @v. Note that the guaranteed
* useful range of an atomic_t is only 24 bits.
*/
-#define atomic_read(v) ((v)->counter)
+#define atomic_read(v) (ACCESS_ONCE((v)->counter))
/**
* atomic_set - set atomic variable
diff --git a/trunk/arch/mn10300/include/asm/uaccess.h b/trunk/arch/mn10300/include/asm/uaccess.h
index 679dee0bbd08..3d6e60dad9d9 100644
--- a/trunk/arch/mn10300/include/asm/uaccess.h
+++ b/trunk/arch/mn10300/include/asm/uaccess.h
@@ -160,9 +160,10 @@ struct __large_struct { unsigned long buf[100]; };
#define __get_user_check(x, ptr, size) \
({ \
+ const __typeof__(ptr) __guc_ptr = (ptr); \
int _e; \
- if (likely(__access_ok((unsigned long) (ptr), (size)))) \
- _e = __get_user_nocheck((x), (ptr), (size)); \
+ if (likely(__access_ok((unsigned long) __guc_ptr, (size)))) \
+ _e = __get_user_nocheck((x), __guc_ptr, (size)); \
else { \
_e = -EFAULT; \
(x) = (__typeof__(x))0; \
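
Binding the pointer argument to a typed local (__guc_ptr) makes the macro evaluate it exactly once, so a caller passing an expression with side effects (for instance p++) no longer has it expanded twice, once for __access_ok() and once for __get_user_nocheck(). The idiom in isolation (hypothetical fetch_once() macro, shown only to illustrate single evaluation):

/*
 * The argument is expanded once into a typed local; everything after
 * that uses only the local, so side effects happen exactly once.
 */
#define fetch_once(x, ptr) ({                                   \
        __typeof__(ptr) __p = (ptr);    /* evaluated once */    \
        (x) = *__p;                                             \
        0;                                                      \
})
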
diff --git a/trunk/arch/mn10300/kernel/time.c b/trunk/arch/mn10300/kernel/time.c
index 75da468090b9..5b955000626d 100644
--- a/trunk/arch/mn10300/kernel/time.c
+++ b/trunk/arch/mn10300/kernel/time.c
@@ -104,8 +104,6 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
unsigned tsc, elapse;
irqreturn_t ret;
- write_seqlock(&xtime_lock);
-
while (tsc = get_cycles(),
elapse = tsc - mn10300_last_tsc, /* time elapsed since last
* tick */
@@ -114,11 +112,9 @@ static irqreturn_t timer_interrupt(int irq, void *dev_id)
mn10300_last_tsc += MN10300_TSC_PER_HZ;
/* advance the kernel's time tracking system */
- do_timer(1);
+ xtime_update(1);
}
- write_sequnlock(&xtime_lock);
-
ret = local_timer_interrupt();
#ifdef CONFIG_SMP
send_IPI_allbutself(LOCAL_TIMER_IPI);
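
xtime_update() wraps the do_timer() call and takes xtime_lock itself, which is why the open-coded write_seqlock()/do_timer()/write_sequnlock() sequence can be dropped here and in the other timer_interrupt() conversions below. A minimal sketch of a tick handler under the new interface (illustrative only; the handler name is invented and the header choice is approximate for this kernel generation):

#include <linux/interrupt.h>
#include <linux/time.h>         /* xtime_update() */

static irqreturn_t example_tick(int irq, void *dev_id)
{
        xtime_update(1);        /* one jiffy elapsed; locking is internal */
        return IRQ_HANDLED;
}
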
diff --git a/trunk/arch/mn10300/kernel/vmlinux.lds.S b/trunk/arch/mn10300/kernel/vmlinux.lds.S
index febbeee7f2f5..968bcd2cb022 100644
--- a/trunk/arch/mn10300/kernel/vmlinux.lds.S
+++ b/trunk/arch/mn10300/kernel/vmlinux.lds.S
@@ -70,7 +70,7 @@ SECTIONS
.exit.text : { EXIT_TEXT; }
.exit.data : { EXIT_DATA; }
- PERCPU(PAGE_SIZE)
+ PERCPU(32, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .;
/* freed after init ends here */
diff --git a/trunk/arch/mn10300/mm/cache-inv-icache.c b/trunk/arch/mn10300/mm/cache-inv-icache.c
index a8933a60b2d4..a6b63dde603d 100644
--- a/trunk/arch/mn10300/mm/cache-inv-icache.c
+++ b/trunk/arch/mn10300/mm/cache-inv-icache.c
@@ -69,7 +69,7 @@ static void flush_icache_page_range(unsigned long start, unsigned long end)
/* invalidate the icache coverage on that region */
mn10300_local_icache_inv_range2(addr + off, size);
- smp_cache_call(SMP_ICACHE_INV_FLUSH_RANGE, start, end);
+ smp_cache_call(SMP_ICACHE_INV_RANGE, start, end);
}
/**
@@ -101,7 +101,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
* directly */
start_page = (start >= 0x80000000UL) ? start : 0x80000000UL;
mn10300_icache_inv_range(start_page, end);
- smp_cache_call(SMP_ICACHE_INV_FLUSH_RANGE, start, end);
+ smp_cache_call(SMP_ICACHE_INV_RANGE, start, end);
if (start_page == start)
goto done;
end = start_page;
diff --git a/trunk/arch/parisc/hpux/sys_hpux.c b/trunk/arch/parisc/hpux/sys_hpux.c
index 30394081d9b6..6ab9580b0b00 100644
--- a/trunk/arch/parisc/hpux/sys_hpux.c
+++ b/trunk/arch/parisc/hpux/sys_hpux.c
@@ -185,26 +185,21 @@ struct hpux_statfs {
int16_t f_pad;
};
-static int do_statfs_hpux(struct path *path, struct hpux_statfs *buf)
+static int do_statfs_hpux(struct kstatfs *st, struct hpux_statfs __user *p)
{
- struct kstatfs st;
- int retval;
-
- retval = vfs_statfs(path, &st);
- if (retval)
- return retval;
-
- memset(buf, 0, sizeof(*buf));
- buf->f_type = st.f_type;
- buf->f_bsize = st.f_bsize;
- buf->f_blocks = st.f_blocks;
- buf->f_bfree = st.f_bfree;
- buf->f_bavail = st.f_bavail;
- buf->f_files = st.f_files;
- buf->f_ffree = st.f_ffree;
- buf->f_fsid[0] = st.f_fsid.val[0];
- buf->f_fsid[1] = st.f_fsid.val[1];
-
+ struct hpux_statfs buf;
+ memset(&buf, 0, sizeof(buf));
+ buf.f_type = st->f_type;
+ buf.f_bsize = st->f_bsize;
+ buf.f_blocks = st->f_blocks;
+ buf.f_bfree = st->f_bfree;
+ buf.f_bavail = st->f_bavail;
+ buf.f_files = st->f_files;
+ buf.f_ffree = st->f_ffree;
+ buf.f_fsid[0] = st->f_fsid.val[0];
+ buf.f_fsid[1] = st->f_fsid.val[1];
+ if (copy_to_user(p, &buf, sizeof(buf)))
+ return -EFAULT;
return 0;
}
@@ -212,35 +207,19 @@ static int do_statfs_hpux(struct path *path, struct hpux_statfs *buf)
asmlinkage long hpux_statfs(const char __user *pathname,
struct hpux_statfs __user *buf)
{
- struct path path;
- int error;
-
- error = user_path(pathname, &path);
- if (!error) {
- struct hpux_statfs tmp;
- error = do_statfs_hpux(&path, &tmp);
- if (!error && copy_to_user(buf, &tmp, sizeof(tmp)))
- error = -EFAULT;
- path_put(&path);
- }
+ struct kstatfs st;
+ int error = user_statfs(pathname, &st);
+ if (!error)
+ error = do_statfs_hpux(&st, buf);
return error;
}
asmlinkage long hpux_fstatfs(unsigned int fd, struct hpux_statfs __user * buf)
{
- struct file *file;
- struct hpux_statfs tmp;
- int error;
-
- error = -EBADF;
- file = fget(fd);
- if (!file)
- goto out;
- error = do_statfs_hpux(&file->f_path, &tmp);
- if (!error && copy_to_user(buf, &tmp, sizeof(tmp)))
- error = -EFAULT;
- fput(file);
- out:
+ struct kstatfs st;
+ int error = fd_statfs(fd, &st);
+ if (!error)
+ error = do_statfs_hpux(&st, buf);
return error;
}
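
Both HP-UX wrappers now funnel through the new VFS helpers, which perform the path or descriptor lookup and hand back a filled-in struct kstatfs; only the ABI-specific conversion and copy_to_user() remain in the arch code. Roughly (sketch only; the example_* names and the convert helper are stand-ins):

#include <linux/statfs.h>       /* struct kstatfs; user_statfs()/fd_statfs() come from the VFS */
#include <linux/errno.h>

/* stand-in for the ABI-specific conversion + copy_to_user() step */
int example_statfs_convert(struct kstatfs *st, void __user *ubuf);

long example_statfs(const char __user *pathname, void __user *ubuf)
{
        struct kstatfs st;
        int error = user_statfs(pathname, &st);  /* resolves the path */

        if (!error)
                error = example_statfs_convert(&st, ubuf);
        return error;
}

long example_fstatfs(unsigned int fd, void __user *ubuf)
{
        struct kstatfs st;
        int error = fd_statfs(fd, &st);          /* resolves the descriptor */

        if (!error)
                error = example_statfs_convert(&st, ubuf);
        return error;
}
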
diff --git a/trunk/arch/parisc/include/asm/fcntl.h b/trunk/arch/parisc/include/asm/fcntl.h
index f357fc693c89..0304b92ccfea 100644
--- a/trunk/arch/parisc/include/asm/fcntl.h
+++ b/trunk/arch/parisc/include/asm/fcntl.h
@@ -19,6 +19,8 @@
#define O_NOFOLLOW 000000200 /* don't follow links */
#define O_INVISIBLE 004000000 /* invisible I/O, for DMAPI/XDSM */
+#define O_PATH 020000000
+
#define F_GETLK64 8
#define F_SETLK64 9
#define F_SETLKW64 10
diff --git a/trunk/arch/parisc/include/asm/futex.h b/trunk/arch/parisc/include/asm/futex.h
index 0c705c3a55ef..67a33cc27ef2 100644
--- a/trunk/arch/parisc/include/asm/futex.h
+++ b/trunk/arch/parisc/include/asm/futex.h
@@ -8,7 +8,7 @@
#include
static inline int
-futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -18,7 +18,7 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -51,10 +51,10 @@ futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
/* Non-atomic version */
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int err = 0;
- int uval;
+ u32 val;
/* futex.c wants to do a cmpxchg_inatomic on kernel NULL, which is
* our gateway page, and causes no end of trouble...
@@ -62,15 +62,15 @@ futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
if (segment_eq(KERNEL_DS, get_fs()) && !uaddr)
return -EFAULT;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- err = get_user(uval, uaddr);
- if (err) return -EFAULT;
- if (uval == oldval)
- err = put_user(newval, uaddr);
- if (err) return -EFAULT;
- return uval;
+ if (get_user(val, uaddr))
+ return -EFAULT;
+ if (val == oldval && put_user(newval, uaddr))
+ return -EFAULT;
+ *uval = val;
+ return 0;
}
#endif /*__KERNEL__*/
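
The same conversion is repeated for every architecture in this merge: futex_atomic_cmpxchg_inatomic() now returns 0 or a negative error and reports the value it actually observed through *uval, instead of overloading one return value for both. A caller-side sketch of the new convention (illustrative only; the wrapper name is invented):

#include <linux/errno.h>
#include <asm/futex.h>

static int try_swap_futex_word(u32 __user *uaddr, u32 oldval, u32 newval)
{
        u32 curval;
        int err = futex_atomic_cmpxchg_inatomic(&curval, uaddr,
                                                oldval, newval);

        if (err)
                return err;             /* -EFAULT: user page not writable */
        if (curval != oldval)
                return -EAGAIN;         /* raced with another updater */
        return 0;                       /* the swap to newval took effect */
}
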
diff --git a/trunk/arch/parisc/kernel/time.c b/trunk/arch/parisc/kernel/time.c
index 05511ccb61d2..45b7389d77aa 100644
--- a/trunk/arch/parisc/kernel/time.c
+++ b/trunk/arch/parisc/kernel/time.c
@@ -162,11 +162,8 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
update_process_times(user_mode(get_irq_regs()));
}
- if (cpu == 0) {
- write_seqlock(&xtime_lock);
- do_timer(ticks_elapsed);
- write_sequnlock(&xtime_lock);
- }
+ if (cpu == 0)
+ xtime_update(ticks_elapsed);
return IRQ_HANDLED;
}
diff --git a/trunk/arch/parisc/kernel/vmlinux.lds.S b/trunk/arch/parisc/kernel/vmlinux.lds.S
index d64a6bbec2aa..8f1e4efd143e 100644
--- a/trunk/arch/parisc/kernel/vmlinux.lds.S
+++ b/trunk/arch/parisc/kernel/vmlinux.lds.S
@@ -145,7 +145,7 @@ SECTIONS
EXIT_DATA
}
- PERCPU(PAGE_SIZE)
+ PERCPU(L1_CACHE_BYTES, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .;
/* freed after init ends here */
diff --git a/trunk/arch/powerpc/include/asm/futex.h b/trunk/arch/powerpc/include/asm/futex.h
index 7c589ef81fb0..c94e4a3fe2ef 100644
--- a/trunk/arch/powerpc/include/asm/futex.h
+++ b/trunk/arch/powerpc/include/asm/futex.h
@@ -30,7 +30,7 @@
: "b" (uaddr), "i" (-EFAULT), "r" (oparg) \
: "cr0", "memory")
-static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -40,7 +40,7 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -82,35 +82,38 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- int prev;
+ int ret = 0;
+ u32 prev;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
__asm__ __volatile__ (
PPC_RELEASE_BARRIER
-"1: lwarx %0,0,%2 # futex_atomic_cmpxchg_inatomic\n\
- cmpw 0,%0,%3\n\
+"1: lwarx %1,0,%3 # futex_atomic_cmpxchg_inatomic\n\
+ cmpw 0,%1,%4\n\
bne- 3f\n"
- PPC405_ERR77(0,%2)
-"2: stwcx. %4,0,%2\n\
+ PPC405_ERR77(0,%3)
+"2: stwcx. %5,0,%3\n\
bne- 1b\n"
PPC_ACQUIRE_BARRIER
"3: .section .fixup,\"ax\"\n\
-4: li %0,%5\n\
+4: li %0,%6\n\
b 3b\n\
.previous\n\
.section __ex_table,\"a\"\n\
.align 3\n\
" PPC_LONG "1b,4b,2b,4b\n\
.previous" \
- : "=&r" (prev), "+m" (*uaddr)
+ : "+r" (ret), "=&r" (prev), "+m" (*uaddr)
: "r" (uaddr), "r" (oldval), "r" (newval), "i" (-EFAULT)
: "cc", "memory");
- return prev;
+ *uval = prev;
+ return ret;
}
#endif /* __KERNEL__ */
diff --git a/trunk/arch/powerpc/include/asm/lppaca.h b/trunk/arch/powerpc/include/asm/lppaca.h
index 380d48bacd16..26b8c807f8f1 100644
--- a/trunk/arch/powerpc/include/asm/lppaca.h
+++ b/trunk/arch/powerpc/include/asm/lppaca.h
@@ -33,9 +33,25 @@
//
//----------------------------------------------------------------------------
#include
+#include
#include
#include
+/*
+ * We only have to have statically allocated lppaca structs on
+ * legacy iSeries, which supports at most 64 cpus.
+ */
+#ifdef CONFIG_PPC_ISERIES
+#if NR_CPUS < 64
+#define NR_LPPACAS NR_CPUS
+#else
+#define NR_LPPACAS 64
+#endif
+#else /* not iSeries */
+#define NR_LPPACAS 1
+#endif
+
+
/* The Hypervisor barfs if the lppaca crosses a page boundary. A 1k
* alignment is sufficient to prevent this */
struct lppaca {
diff --git a/trunk/arch/powerpc/include/asm/machdep.h b/trunk/arch/powerpc/include/asm/machdep.h
index 991d5998d6be..fe56a23e1ff0 100644
--- a/trunk/arch/powerpc/include/asm/machdep.h
+++ b/trunk/arch/powerpc/include/asm/machdep.h
@@ -240,6 +240,12 @@ struct machdep_calls {
* claims to support kexec.
*/
int (*machine_kexec_prepare)(struct kimage *image);
+
+ /* Called to perform the _real_ kexec.
+ * Do NOT allocate memory or fail here. We are past the point of
+ * no return.
+ */
+ void (*machine_kexec)(struct kimage *image);
#endif /* CONFIG_KEXEC */
#ifdef CONFIG_SUSPEND
diff --git a/trunk/arch/powerpc/include/asm/pci-bridge.h b/trunk/arch/powerpc/include/asm/pci-bridge.h
index 51e9e6f90d12..edeb80fdd2c3 100644
--- a/trunk/arch/powerpc/include/asm/pci-bridge.h
+++ b/trunk/arch/powerpc/include/asm/pci-bridge.h
@@ -171,6 +171,16 @@ static inline struct pci_controller *pci_bus_to_host(const struct pci_bus *bus)
return bus->sysdata;
}
+static inline struct device_node *pci_bus_to_OF_node(struct pci_bus *bus)
+{
+ struct pci_controller *host;
+
+ if (bus->self)
+ return pci_device_to_OF_node(bus->self);
+ host = pci_bus_to_host(bus);
+ return host ? host->dn : NULL;
+}
+
static inline int isa_vaddr_is_ioport(void __iomem *address)
{
/* No specific ISA handling on ppc32 at this stage, it
diff --git a/trunk/arch/powerpc/include/asm/prom.h b/trunk/arch/powerpc/include/asm/prom.h
index d72757585595..c189aa5fe1f4 100644
--- a/trunk/arch/powerpc/include/asm/prom.h
+++ b/trunk/arch/powerpc/include/asm/prom.h
@@ -70,21 +70,6 @@ static inline int of_node_to_nid(struct device_node *device) { return 0; }
#endif
#define of_node_to_nid of_node_to_nid
-/**
- * of_irq_map_pci - Resolve the interrupt for a PCI device
- * @pdev: the device whose interrupt is to be resolved
- * @out_irq: structure of_irq filled by this function
- *
- * This function resolves the PCI interrupt for a given PCI device. If a
- * device-node exists for a given pci_dev, it will use normal OF tree
- * walking. If not, it will implement standard swizzling and walk up the
- * PCI tree until an device-node is found, at which point it will finish
- * resolving using the OF tree walking.
- */
-struct pci_dev;
-struct of_irq;
-extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);
-
extern void of_instantiate_rtc(void);
/* These includes are put at the bottom because they may contain things
diff --git a/trunk/arch/powerpc/include/asm/rwsem.h b/trunk/arch/powerpc/include/asm/rwsem.h
index 8447d89fbe72..bb1e2cdeb9bf 100644
--- a/trunk/arch/powerpc/include/asm/rwsem.h
+++ b/trunk/arch/powerpc/include/asm/rwsem.h
@@ -13,11 +13,6 @@
* by Paul Mackerras .
*/
-#include
-#include
-#include
-#include
-
/*
* the semaphore definition
*/
@@ -33,47 +28,6 @@
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-struct rw_semaphore {
- long count;
- spinlock_t wait_lock;
- struct list_head wait_list;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
- struct lockdep_map dep_map;
-#endif
-};
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
-#else
-# define __RWSEM_DEP_MAP_INIT(lockname)
-#endif
-
-#define __RWSEM_INITIALIZER(name) \
-{ \
- RWSEM_UNLOCKED_VALUE, \
- __SPIN_LOCK_UNLOCKED((name).wait_lock), \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEP_MAP_INIT(name) \
-}
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
-
-extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
-
-#define init_rwsem(sem) \
- do { \
- static struct lock_class_key __key; \
- \
- __init_rwsem((sem), #sem, &__key); \
- } while (0)
-
/*
* lock for reading
*/
@@ -174,10 +128,5 @@ static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
return atomic_long_add_return(delta, (atomic_long_t *)&sem->count);
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return sem->count != 0;
-}
-
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_RWSEM_H */
diff --git a/trunk/arch/powerpc/kernel/machine_kexec.c b/trunk/arch/powerpc/kernel/machine_kexec.c
index 49a170af8145..a5f8672eeff3 100644
--- a/trunk/arch/powerpc/kernel/machine_kexec.c
+++ b/trunk/arch/powerpc/kernel/machine_kexec.c
@@ -87,7 +87,10 @@ void machine_kexec(struct kimage *image)
save_ftrace_enabled = __ftrace_enabled_save();
- default_machine_kexec(image);
+ if (ppc_md.machine_kexec)
+ ppc_md.machine_kexec(image);
+ else
+ default_machine_kexec(image);
__ftrace_enabled_restore(save_ftrace_enabled);
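
With the new hook a platform can take over the final kexec step; the generic machine_kexec() only falls back to default_machine_kexec() when no override is set. A hypothetical platform stub (names invented; per the machdep comment, the hook must neither allocate memory nor fail):

#include <linux/kexec.h>
#include <asm/machdep.h>
#include <asm/kexec.h>

#ifdef CONFIG_KEXEC
static void myplat_machine_kexec(struct kimage *image)
{
        /* quiesce platform-specific hardware, then hand over */
        default_machine_kexec(image);
}

static void __init myplat_setup_kexec(void)
{
        ppc_md.machine_kexec = myplat_machine_kexec;
}
#endif /* CONFIG_KEXEC */
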
diff --git a/trunk/arch/powerpc/kernel/paca.c b/trunk/arch/powerpc/kernel/paca.c
index ebf9846f3c3b..f4adf89d7614 100644
--- a/trunk/arch/powerpc/kernel/paca.c
+++ b/trunk/arch/powerpc/kernel/paca.c
@@ -26,20 +26,6 @@ extern unsigned long __toc_start;
#ifdef CONFIG_PPC_BOOK3S
-/*
- * We only have to have statically allocated lppaca structs on
- * legacy iSeries, which supports at most 64 cpus.
- */
-#ifdef CONFIG_PPC_ISERIES
-#if NR_CPUS < 64
-#define NR_LPPACAS NR_CPUS
-#else
-#define NR_LPPACAS 64
-#endif
-#else /* not iSeries */
-#define NR_LPPACAS 1
-#endif
-
/*
* The structure which the hypervisor knows about - this structure
* should not cross a page boundary. The vpa_init/register_vpa call
diff --git a/trunk/arch/powerpc/kernel/pci-common.c b/trunk/arch/powerpc/kernel/pci-common.c
index 10a44e68ef11..eb341be9a4d9 100644
--- a/trunk/arch/powerpc/kernel/pci-common.c
+++ b/trunk/arch/powerpc/kernel/pci-common.c
@@ -22,6 +22,7 @@
#include
#include
#include
+#include
#include
#include
#include
diff --git a/trunk/arch/powerpc/kernel/process.c b/trunk/arch/powerpc/kernel/process.c
index 7a1d5cb76932..8303a6c65ef7 100644
--- a/trunk/arch/powerpc/kernel/process.c
+++ b/trunk/arch/powerpc/kernel/process.c
@@ -353,6 +353,7 @@ static void switch_booke_debug_regs(struct thread_struct *new_thread)
prime_debug_regs(new_thread);
}
#else /* !CONFIG_PPC_ADV_DEBUG_REGS */
+#ifndef CONFIG_HAVE_HW_BREAKPOINT
static void set_debug_reg_defaults(struct thread_struct *thread)
{
if (thread->dabr) {
@@ -360,6 +361,7 @@ static void set_debug_reg_defaults(struct thread_struct *thread)
set_dabr(0);
}
}
+#endif /* !CONFIG_HAVE_HW_BREAKPOINT */
#endif /* CONFIG_PPC_ADV_DEBUG_REGS */
int set_dabr(unsigned long dabr)
@@ -670,11 +672,11 @@ void flush_thread(void)
{
discard_lazy_cpu_state();
-#ifdef CONFIG_HAVE_HW_BREAKPOINTS
+#ifdef CONFIG_HAVE_HW_BREAKPOINT
flush_ptrace_hw_breakpoint(current);
-#else /* CONFIG_HAVE_HW_BREAKPOINTS */
+#else /* CONFIG_HAVE_HW_BREAKPOINT */
set_debug_reg_defaults(&current->thread);
-#endif /* CONFIG_HAVE_HW_BREAKPOINTS */
+#endif /* CONFIG_HAVE_HW_BREAKPOINT */
}
void
diff --git a/trunk/arch/powerpc/kernel/prom_parse.c b/trunk/arch/powerpc/kernel/prom_parse.c
index c2b7a07cc3d3..47187cc2cf00 100644
--- a/trunk/arch/powerpc/kernel/prom_parse.c
+++ b/trunk/arch/powerpc/kernel/prom_parse.c
@@ -2,95 +2,11 @@
#include
#include
-#include
#include
#include
#include
#include
#include
-#include
-
-#ifdef CONFIG_PCI
-int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq)
-{
- struct device_node *dn, *ppnode;
- struct pci_dev *ppdev;
- u32 lspec;
- u32 laddr[3];
- u8 pin;
- int rc;
-
- /* Check if we have a device node, if yes, fallback to standard OF
- * parsing
- */
- dn = pci_device_to_OF_node(pdev);
- if (dn) {
- rc = of_irq_map_one(dn, 0, out_irq);
- if (!rc)
- return rc;
- }
-
- /* Ok, we don't, time to have fun. Let's start by building up an
- * interrupt spec. we assume #interrupt-cells is 1, which is standard
- * for PCI. If you do different, then don't use that routine.
- */
- rc = pci_read_config_byte(pdev, PCI_INTERRUPT_PIN, &pin);
- if (rc != 0)
- return rc;
- /* No pin, exit */
- if (pin == 0)
- return -ENODEV;
-
- /* Now we walk up the PCI tree */
- lspec = pin;
- for (;;) {
- /* Get the pci_dev of our parent */
- ppdev = pdev->bus->self;
-
- /* Ouch, it's a host bridge... */
- if (ppdev == NULL) {
-#ifdef CONFIG_PPC64
- ppnode = pci_bus_to_OF_node(pdev->bus);
-#else
- struct pci_controller *host;
- host = pci_bus_to_host(pdev->bus);
- ppnode = host ? host->dn : NULL;
-#endif
- /* No node for host bridge ? give up */
- if (ppnode == NULL)
- return -EINVAL;
- } else
- /* We found a P2P bridge, check if it has a node */
- ppnode = pci_device_to_OF_node(ppdev);
-
- /* Ok, we have found a parent with a device-node, hand over to
- * the OF parsing code.
- * We build a unit address from the linux device to be used for
- * resolution. Note that we use the linux bus number which may
- * not match your firmware bus numbering.
- * Fortunately, in most cases, interrupt-map-mask doesn't include
- * the bus number as part of the matching.
- * You should still be careful about that though if you intend
- * to rely on this function (you ship a firmware that doesn't
- * create device nodes for all PCI devices).
- */
- if (ppnode)
- break;
-
- /* We can only get here if we hit a P2P bridge with no node,
- * let's do standard swizzling and try again
- */
- lspec = pci_swizzle_interrupt_pin(pdev, lspec);
- pdev = ppdev;
- }
-
- laddr[0] = (pdev->bus->number << 16)
- | (pdev->devfn << 8);
- laddr[1] = laddr[2] = 0;
- return of_irq_map_raw(ppnode, &lspec, 1, laddr, out_irq);
-}
-EXPORT_SYMBOL_GPL(of_irq_map_pci);
-#endif /* CONFIG_PCI */
void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop,
unsigned long *busno, unsigned long *phys, unsigned long *size)
diff --git a/trunk/arch/powerpc/kernel/vmlinux.lds.S b/trunk/arch/powerpc/kernel/vmlinux.lds.S
index 8a0deefac08d..b9150f07d266 100644
--- a/trunk/arch/powerpc/kernel/vmlinux.lds.S
+++ b/trunk/arch/powerpc/kernel/vmlinux.lds.S
@@ -160,7 +160,7 @@ SECTIONS
INIT_RAM_FS
}
- PERCPU(PAGE_SIZE)
+ PERCPU(L1_CACHE_BYTES, PAGE_SIZE)
. = ALIGN(8);
.machine.desc : AT(ADDR(.machine.desc) - LOAD_OFFSET) {
diff --git a/trunk/arch/powerpc/mm/numa.c b/trunk/arch/powerpc/mm/numa.c
index fd4812329570..0dc95c0aa3be 100644
--- a/trunk/arch/powerpc/mm/numa.c
+++ b/trunk/arch/powerpc/mm/numa.c
@@ -1516,7 +1516,8 @@ int start_topology_update(void)
{
int rc = 0;
- if (firmware_has_feature(FW_FEATURE_VPHN) &&
+ /* Disabled until races with load balancing are fixed */
+ if (0 && firmware_has_feature(FW_FEATURE_VPHN) &&
get_lppaca()->shared_proc) {
vphn_enabled = 1;
setup_cpu_associativity_change_counters();
diff --git a/trunk/arch/powerpc/mm/tlb_hash64.c b/trunk/arch/powerpc/mm/tlb_hash64.c
index 1ec06576f619..c14d09f614f3 100644
--- a/trunk/arch/powerpc/mm/tlb_hash64.c
+++ b/trunk/arch/powerpc/mm/tlb_hash64.c
@@ -38,13 +38,11 @@ DEFINE_PER_CPU(struct ppc64_tlb_batch, ppc64_tlb_batch);
* needs to be flushed. This function will either perform the flush
* immediately or will batch it up if the current CPU has an active
* batch on it.
- *
- * Must be called from within some kind of spinlock/non-preempt region...
*/
void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long pte, int huge)
{
- struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
+ struct ppc64_tlb_batch *batch = &get_cpu_var(ppc64_tlb_batch);
unsigned long vsid, vaddr;
unsigned int psize;
int ssize;
@@ -99,6 +97,7 @@ void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
*/
if (!batch->active) {
flush_hash_page(vaddr, rpte, psize, ssize, 0);
+ put_cpu_var(ppc64_tlb_batch);
return;
}
@@ -127,6 +126,7 @@ void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
batch->index = ++i;
if (i >= PPC64_TLB_BATCH_NR)
__flush_tlb_pending(batch);
+ put_cpu_var(ppc64_tlb_batch);
}
/*
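
The switch from __get_cpu_var() to the get_cpu_var()/put_cpu_var() pair makes hpte_need_flush() disable preemption itself for as long as it uses the per-CPU batch, which is why the "must be called from a non-preempt region" comment goes away and why every return path now needs the matching put. The general pattern on a made-up per-CPU variable:

#include <linux/percpu.h>

struct flush_batch {
        int index;
};

static DEFINE_PER_CPU(struct flush_batch, flush_batch);

static void queue_flush(int entry)
{
        /* get_cpu_var() disables preemption until put_cpu_var() */
        struct flush_batch *batch = &get_cpu_var(flush_batch);

        batch->index = entry;
        /* ... the task cannot migrate to another CPU in here ... */
        put_cpu_var(flush_batch);       /* required on every exit path */
}
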
diff --git a/trunk/arch/powerpc/platforms/cell/spufs/syscalls.c b/trunk/arch/powerpc/platforms/cell/spufs/syscalls.c
index 187a7d32f86a..a3d2ce54ea2e 100644
--- a/trunk/arch/powerpc/platforms/cell/spufs/syscalls.c
+++ b/trunk/arch/powerpc/platforms/cell/spufs/syscalls.c
@@ -70,7 +70,7 @@ static long do_spu_create(const char __user *pathname, unsigned int flags,
if (!IS_ERR(tmp)) {
struct nameidata nd;
- ret = path_lookup(tmp, LOOKUP_PARENT, &nd);
+ ret = kern_path_parent(tmp, &nd);
if (!ret) {
nd.flags |= LOOKUP_OPEN | LOOKUP_CREATE;
ret = spufs_create(&nd, flags, mode, neighbor);
diff --git a/trunk/arch/powerpc/platforms/iseries/dt.c b/trunk/arch/powerpc/platforms/iseries/dt.c
index fdb7384c0c4f..f0491cc28900 100644
--- a/trunk/arch/powerpc/platforms/iseries/dt.c
+++ b/trunk/arch/powerpc/platforms/iseries/dt.c
@@ -242,8 +242,8 @@ static void __init dt_cpus(struct iseries_flat_dt *dt)
pft_size[0] = 0; /* NUMA CEC cookie, 0 for non NUMA */
pft_size[1] = __ilog2(HvCallHpt_getHptPages() * HW_PAGE_SIZE);
- for (i = 0; i < NR_CPUS; i++) {
- if (lppaca_of(i).dyn_proc_status >= 2)
+ for (i = 0; i < NR_LPPACAS; i++) {
+ if (lppaca[i].dyn_proc_status >= 2)
continue;
snprintf(p, 32 - (p - buf), "@%d", i);
@@ -251,7 +251,7 @@ static void __init dt_cpus(struct iseries_flat_dt *dt)
dt_prop_str(dt, "device_type", device_type_cpu);
- index = lppaca_of(i).dyn_hv_phys_proc_index;
+ index = lppaca[i].dyn_hv_phys_proc_index;
d = &xIoHriProcessorVpd[index];
dt_prop_u32(dt, "i-cache-size", d->xInstCacheSize * 1024);
diff --git a/trunk/arch/powerpc/platforms/iseries/setup.c b/trunk/arch/powerpc/platforms/iseries/setup.c
index b0863410517f..2946ae10fbfd 100644
--- a/trunk/arch/powerpc/platforms/iseries/setup.c
+++ b/trunk/arch/powerpc/platforms/iseries/setup.c
@@ -680,6 +680,7 @@ void * __init iSeries_early_setup(void)
* on but calling this function multiple times is fine.
*/
identify_cpu(0, mfspr(SPRN_PVR));
+ initialise_paca(&boot_paca, 0);
powerpc_firmware_features |= FW_FEATURE_ISERIES;
powerpc_firmware_features |= FW_FEATURE_LPAR;
diff --git a/trunk/arch/s390/boot/compressed/misc.c b/trunk/arch/s390/boot/compressed/misc.c
index 0851eb1e919e..2751b3a8a66f 100644
--- a/trunk/arch/s390/boot/compressed/misc.c
+++ b/trunk/arch/s390/boot/compressed/misc.c
@@ -133,11 +133,12 @@ unsigned long decompress_kernel(void)
unsigned long output_addr;
unsigned char *output;
- check_ipl_parmblock((void *) 0, (unsigned long) output + SZ__bss_start);
+ output_addr = ((unsigned long) &_end + HEAP_SIZE + 4095UL) & -4096UL;
+ check_ipl_parmblock((void *) 0, output_addr + SZ__bss_start);
memset(&_bss, 0, &_ebss - &_bss);
free_mem_ptr = (unsigned long)&_end;
free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
- output = (unsigned char *) ((free_mem_end_ptr + 4095UL) & -4096UL);
+ output = (unsigned char *) output_addr;
#ifdef CONFIG_BLK_DEV_INITRD
/*
diff --git a/trunk/arch/s390/crypto/sha_common.c b/trunk/arch/s390/crypto/sha_common.c
index f42dbabc0d30..48884f89ab92 100644
--- a/trunk/arch/s390/crypto/sha_common.c
+++ b/trunk/arch/s390/crypto/sha_common.c
@@ -38,6 +38,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
BUG_ON(ret != bsize);
data += bsize - index;
len -= bsize - index;
+ index = 0;
}
/* process as many blocks as possible */
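
Resetting index matters because, once the partially filled buffer has been flushed, the full-block loop and the final tail copy must start from offset zero; without the reset a stale offset from the previous update would be reused. A generic sketch of the block-buffering pattern (not the s390 CPACF code; block_fn() and the context layout are invented):

#include <stddef.h>
#include <string.h>

#define BLOCK_SIZE 64

struct hash_ctx {
        unsigned char buf[BLOCK_SIZE];
        size_t index;                   /* bytes buffered so far */
};

/* stand-in for the real compression function */
void block_fn(struct hash_ctx *ctx, const unsigned char *data, size_t len);

void hash_update(struct hash_ctx *ctx, const unsigned char *data, size_t len)
{
        if (ctx->index) {               /* finish the partial block first */
                size_t fill = BLOCK_SIZE - ctx->index;

                if (len < fill) {
                        memcpy(ctx->buf + ctx->index, data, len);
                        ctx->index += len;
                        return;
                }
                memcpy(ctx->buf + ctx->index, data, fill);
                block_fn(ctx, ctx->buf, BLOCK_SIZE);
                data += fill;
                len -= fill;
                ctx->index = 0;         /* the fix: forget the old offset */
        }
        while (len >= BLOCK_SIZE) {     /* full blocks straight from data */
                block_fn(ctx, data, BLOCK_SIZE);
                data += BLOCK_SIZE;
                len -= BLOCK_SIZE;
        }
        memcpy(ctx->buf, data, len);    /* stash the tail for next time */
        ctx->index = len;
}
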
diff --git a/trunk/arch/s390/include/asm/atomic.h b/trunk/arch/s390/include/asm/atomic.h
index 76daea117181..5c5ba10384c2 100644
--- a/trunk/arch/s390/include/asm/atomic.h
+++ b/trunk/arch/s390/include/asm/atomic.h
@@ -36,14 +36,19 @@
static inline int atomic_read(const atomic_t *v)
{
- barrier();
- return v->counter;
+ int c;
+
+ asm volatile(
+ " l %0,%1\n"
+ : "=d" (c) : "Q" (v->counter));
+ return c;
}
static inline void atomic_set(atomic_t *v, int i)
{
- v->counter = i;
- barrier();
+ asm volatile(
+ " st %1,%0\n"
+ : "=Q" (v->counter) : "d" (i));
}
static inline int atomic_add_return(int i, atomic_t *v)
@@ -128,14 +133,19 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
static inline long long atomic64_read(const atomic64_t *v)
{
- barrier();
- return v->counter;
+ long long c;
+
+ asm volatile(
+ " lg %0,%1\n"
+ : "=d" (c) : "Q" (v->counter));
+ return c;
}
static inline void atomic64_set(atomic64_t *v, long long i)
{
- v->counter = i;
- barrier();
+ asm volatile(
+ " stg %1,%0\n"
+ : "=Q" (v->counter) : "d" (i));
}
static inline long long atomic64_add_return(long long i, atomic64_t *v)
diff --git a/trunk/arch/s390/include/asm/cache.h b/trunk/arch/s390/include/asm/cache.h
index 24aafa68b643..2a30d5ac0667 100644
--- a/trunk/arch/s390/include/asm/cache.h
+++ b/trunk/arch/s390/include/asm/cache.h
@@ -13,6 +13,7 @@
#define L1_CACHE_BYTES 256
#define L1_CACHE_SHIFT 8
+#define NET_SKB_PAD 32
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
diff --git a/trunk/arch/s390/include/asm/futex.h b/trunk/arch/s390/include/asm/futex.h
index 5c5d02de49e9..81cf36b691f1 100644
--- a/trunk/arch/s390/include/asm/futex.h
+++ b/trunk/arch/s390/include/asm/futex.h
@@ -7,7 +7,7 @@
#include
#include
-static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -18,7 +18,7 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -39,13 +39,13 @@ static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
return ret;
}
-static inline int futex_atomic_cmpxchg_inatomic(int __user *uaddr,
- int oldval, int newval)
+static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int)))
+ if (! access_ok (VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- return uaccess.futex_atomic_cmpxchg(uaddr, oldval, newval);
+ return uaccess.futex_atomic_cmpxchg(uval, uaddr, oldval, newval);
}
#endif /* __KERNEL__ */
diff --git a/trunk/arch/s390/include/asm/rwsem.h b/trunk/arch/s390/include/asm/rwsem.h
index 423fdda2322d..d0eb4653cebd 100644
--- a/trunk/arch/s390/include/asm/rwsem.h
+++ b/trunk/arch/s390/include/asm/rwsem.h
@@ -43,29 +43,6 @@
#ifdef __KERNEL__
-#include
-#include
-
-struct rwsem_waiter;
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *);
-extern struct rw_semaphore *rwsem_downgrade_write(struct rw_semaphore *);
-
-/*
- * the semaphore definition
- */
-struct rw_semaphore {
- signed long count;
- spinlock_t wait_lock;
- struct list_head wait_list;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
- struct lockdep_map dep_map;
-#endif
-};
-
#ifndef __s390x__
#define RWSEM_UNLOCKED_VALUE 0x00000000
#define RWSEM_ACTIVE_BIAS 0x00000001
@@ -80,41 +57,6 @@ struct rw_semaphore {
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-/*
- * initialisation
- */
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
-#else
-# define __RWSEM_DEP_MAP_INIT(lockname)
-#endif
-
-#define __RWSEM_INITIALIZER(name) \
- { RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait.lock), \
- LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) }
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-static inline void init_rwsem(struct rw_semaphore *sem)
-{
- sem->count = RWSEM_UNLOCKED_VALUE;
- spin_lock_init(&sem->wait_lock);
- INIT_LIST_HEAD(&sem->wait_list);
-}
-
-extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
-
-#define init_rwsem(sem) \
-do { \
- static struct lock_class_key __key; \
- \
- __init_rwsem((sem), #sem, &__key); \
-} while (0)
-
-
/*
* lock for reading
*/
@@ -377,10 +319,5 @@ static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
return new;
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* __KERNEL__ */
#endif /* _S390_RWSEM_H */
diff --git a/trunk/arch/s390/include/asm/uaccess.h b/trunk/arch/s390/include/asm/uaccess.h
index d6b1ed0ec52b..2d9ea11f919a 100644
--- a/trunk/arch/s390/include/asm/uaccess.h
+++ b/trunk/arch/s390/include/asm/uaccess.h
@@ -83,8 +83,8 @@ struct uaccess_ops {
size_t (*clear_user)(size_t, void __user *);
size_t (*strnlen_user)(size_t, const char __user *);
size_t (*strncpy_from_user)(size_t, const char __user *, char *);
- int (*futex_atomic_op)(int op, int __user *, int oparg, int *old);
- int (*futex_atomic_cmpxchg)(int __user *, int old, int new);
+ int (*futex_atomic_op)(int op, u32 __user *, int oparg, int *old);
+ int (*futex_atomic_cmpxchg)(u32 *, u32 __user *, u32 old, u32 new);
};
extern struct uaccess_ops uaccess;
diff --git a/trunk/arch/s390/kernel/vmlinux.lds.S b/trunk/arch/s390/kernel/vmlinux.lds.S
index a68ac10213b2..1bc18cdb525b 100644
--- a/trunk/arch/s390/kernel/vmlinux.lds.S
+++ b/trunk/arch/s390/kernel/vmlinux.lds.S
@@ -77,7 +77,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
INIT_DATA_SECTION(0x100)
- PERCPU(PAGE_SIZE)
+ PERCPU(0x100, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .; /* freed after init ends here */
diff --git a/trunk/arch/s390/lib/uaccess.h b/trunk/arch/s390/lib/uaccess.h
index 126011df14f1..1d2536cb630b 100644
--- a/trunk/arch/s390/lib/uaccess.h
+++ b/trunk/arch/s390/lib/uaccess.h
@@ -12,12 +12,12 @@ extern size_t copy_from_user_std(size_t, const void __user *, void *);
extern size_t copy_to_user_std(size_t, void __user *, const void *);
extern size_t strnlen_user_std(size_t, const char __user *);
extern size_t strncpy_from_user_std(size_t, const char __user *, char *);
-extern int futex_atomic_cmpxchg_std(int __user *, int, int);
-extern int futex_atomic_op_std(int, int __user *, int, int *);
+extern int futex_atomic_cmpxchg_std(u32 *, u32 __user *, u32, u32);
+extern int futex_atomic_op_std(int, u32 __user *, int, int *);
extern size_t copy_from_user_pt(size_t, const void __user *, void *);
extern size_t copy_to_user_pt(size_t, void __user *, const void *);
-extern int futex_atomic_op_pt(int, int __user *, int, int *);
-extern int futex_atomic_cmpxchg_pt(int __user *, int, int);
+extern int futex_atomic_op_pt(int, u32 __user *, int, int *);
+extern int futex_atomic_cmpxchg_pt(u32 *, u32 __user *, u32, u32);
#endif /* __ARCH_S390_LIB_UACCESS_H */
diff --git a/trunk/arch/s390/lib/uaccess_pt.c b/trunk/arch/s390/lib/uaccess_pt.c
index 404f2de296dc..74833831417f 100644
--- a/trunk/arch/s390/lib/uaccess_pt.c
+++ b/trunk/arch/s390/lib/uaccess_pt.c
@@ -302,7 +302,7 @@ static size_t copy_in_user_pt(size_t n, void __user *to,
: "0" (-EFAULT), "d" (oparg), "a" (uaddr), \
"m" (*uaddr) : "cc" );
-static int __futex_atomic_op_pt(int op, int __user *uaddr, int oparg, int *old)
+static int __futex_atomic_op_pt(int op, u32 __user *uaddr, int oparg, int *old)
{
int oldval = 0, newval, ret;
@@ -335,7 +335,7 @@ static int __futex_atomic_op_pt(int op, int __user *uaddr, int oparg, int *old)
return ret;
}
-int futex_atomic_op_pt(int op, int __user *uaddr, int oparg, int *old)
+int futex_atomic_op_pt(int op, u32 __user *uaddr, int oparg, int *old)
{
int ret;
@@ -354,26 +354,29 @@ int futex_atomic_op_pt(int op, int __user *uaddr, int oparg, int *old)
return ret;
}
-static int __futex_atomic_cmpxchg_pt(int __user *uaddr, int oldval, int newval)
+static int __futex_atomic_cmpxchg_pt(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
int ret;
asm volatile("0: cs %1,%4,0(%5)\n"
- "1: lr %0,%1\n"
+ "1: la %0,0\n"
"2:\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
: "=d" (ret), "+d" (oldval), "=m" (*uaddr)
: "0" (-EFAULT), "d" (newval), "a" (uaddr), "m" (*uaddr)
: "cc", "memory" );
+ *uval = oldval;
return ret;
}
-int futex_atomic_cmpxchg_pt(int __user *uaddr, int oldval, int newval)
+int futex_atomic_cmpxchg_pt(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
int ret;
if (segment_eq(get_fs(), KERNEL_DS))
- return __futex_atomic_cmpxchg_pt(uaddr, oldval, newval);
+ return __futex_atomic_cmpxchg_pt(uval, uaddr, oldval, newval);
spin_lock(&current->mm->page_table_lock);
uaddr = (int __user *) __dat_user_addr((unsigned long) uaddr);
if (!uaddr) {
@@ -382,7 +385,7 @@ int futex_atomic_cmpxchg_pt(int __user *uaddr, int oldval, int newval)
}
get_page(virt_to_page(uaddr));
spin_unlock(&current->mm->page_table_lock);
- ret = __futex_atomic_cmpxchg_pt(uaddr, oldval, newval);
+ ret = __futex_atomic_cmpxchg_pt(uval, uaddr, oldval, newval);
put_page(virt_to_page(uaddr));
return ret;
}
diff --git a/trunk/arch/s390/lib/uaccess_std.c b/trunk/arch/s390/lib/uaccess_std.c
index a6c4f7ed24a4..bb1a7eed42ce 100644
--- a/trunk/arch/s390/lib/uaccess_std.c
+++ b/trunk/arch/s390/lib/uaccess_std.c
@@ -255,7 +255,7 @@ size_t strncpy_from_user_std(size_t size, const char __user *src, char *dst)
: "0" (-EFAULT), "d" (oparg), "a" (uaddr), \
"m" (*uaddr) : "cc");
-int futex_atomic_op_std(int op, int __user *uaddr, int oparg, int *old)
+int futex_atomic_op_std(int op, u32 __user *uaddr, int oparg, int *old)
{
int oldval = 0, newval, ret;
@@ -287,19 +287,21 @@ int futex_atomic_op_std(int op, int __user *uaddr, int oparg, int *old)
return ret;
}
-int futex_atomic_cmpxchg_std(int __user *uaddr, int oldval, int newval)
+int futex_atomic_cmpxchg_std(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
int ret;
asm volatile(
" sacf 256\n"
"0: cs %1,%4,0(%5)\n"
- "1: lr %0,%1\n"
+ "1: la %0,0\n"
"2: sacf 0\n"
EX_TABLE(0b,2b) EX_TABLE(1b,2b)
: "=d" (ret), "+d" (oldval), "=m" (*uaddr)
: "0" (-EFAULT), "d" (newval), "a" (uaddr), "m" (*uaddr)
: "cc", "memory" );
+ *uval = oldval;
return ret;
}
diff --git a/trunk/arch/sh/include/asm/futex-irq.h b/trunk/arch/sh/include/asm/futex-irq.h
index a9f16a7f9aea..6cb9f193a95e 100644
--- a/trunk/arch/sh/include/asm/futex-irq.h
+++ b/trunk/arch/sh/include/asm/futex-irq.h
@@ -3,7 +3,7 @@
#include
-static inline int atomic_futex_op_xchg_set(int oparg, int __user *uaddr,
+static inline int atomic_futex_op_xchg_set(int oparg, u32 __user *uaddr,
int *oldval)
{
unsigned long flags;
@@ -20,7 +20,7 @@ static inline int atomic_futex_op_xchg_set(int oparg, int __user *uaddr,
return ret;
}
-static inline int atomic_futex_op_xchg_add(int oparg, int __user *uaddr,
+static inline int atomic_futex_op_xchg_add(int oparg, u32 __user *uaddr,
int *oldval)
{
unsigned long flags;
@@ -37,7 +37,7 @@ static inline int atomic_futex_op_xchg_add(int oparg, int __user *uaddr,
return ret;
}
-static inline int atomic_futex_op_xchg_or(int oparg, int __user *uaddr,
+static inline int atomic_futex_op_xchg_or(int oparg, u32 __user *uaddr,
int *oldval)
{
unsigned long flags;
@@ -54,7 +54,7 @@ static inline int atomic_futex_op_xchg_or(int oparg, int __user *uaddr,
return ret;
}
-static inline int atomic_futex_op_xchg_and(int oparg, int __user *uaddr,
+static inline int atomic_futex_op_xchg_and(int oparg, u32 __user *uaddr,
int *oldval)
{
unsigned long flags;
@@ -71,7 +71,7 @@ static inline int atomic_futex_op_xchg_and(int oparg, int __user *uaddr,
return ret;
}
-static inline int atomic_futex_op_xchg_xor(int oparg, int __user *uaddr,
+static inline int atomic_futex_op_xchg_xor(int oparg, u32 __user *uaddr,
int *oldval)
{
unsigned long flags;
@@ -88,11 +88,13 @@ static inline int atomic_futex_op_xchg_xor(int oparg, int __user *uaddr,
return ret;
}
-static inline int atomic_futex_op_cmpxchg_inatomic(int __user *uaddr,
- int oldval, int newval)
+static inline int atomic_futex_op_cmpxchg_inatomic(u32 *uval,
+ u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
unsigned long flags;
- int ret, prev = 0;
+ int ret;
+ u32 prev = 0;
local_irq_save(flags);
@@ -102,10 +104,8 @@ static inline int atomic_futex_op_cmpxchg_inatomic(int __user *uaddr,
local_irq_restore(flags);
- if (ret)
- return ret;
-
- return prev;
+ *uval = prev;
+ return ret;
}
#endif /* __ASM_SH_FUTEX_IRQ_H */
diff --git a/trunk/arch/sh/include/asm/futex.h b/trunk/arch/sh/include/asm/futex.h
index 68256ec5fa35..7be39a646fbd 100644
--- a/trunk/arch/sh/include/asm/futex.h
+++ b/trunk/arch/sh/include/asm/futex.h
@@ -10,7 +10,7 @@
/* XXX: UP variants, fix for SH-4A and SMP.. */
#include
-static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -21,7 +21,7 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -65,12 +65,13 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- return atomic_futex_op_cmpxchg_inatomic(uaddr, oldval, newval);
+ return atomic_futex_op_cmpxchg_inatomic(uval, uaddr, oldval, newval);
}
#endif /* __KERNEL__ */
diff --git a/trunk/arch/sh/include/asm/rwsem.h b/trunk/arch/sh/include/asm/rwsem.h
index 06e2251a5e48..edab57265293 100644
--- a/trunk/arch/sh/include/asm/rwsem.h
+++ b/trunk/arch/sh/include/asm/rwsem.h
@@ -11,64 +11,13 @@
#endif
#ifdef __KERNEL__
-#include
-#include
-#include
-#include
-/*
- * the semaphore definition
- */
-struct rw_semaphore {
- long count;
#define RWSEM_UNLOCKED_VALUE 0x00000000
#define RWSEM_ACTIVE_BIAS 0x00000001
#define RWSEM_ACTIVE_MASK 0x0000ffff
#define RWSEM_WAITING_BIAS (-0x00010000)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
- spinlock_t wait_lock;
- struct list_head wait_list;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
- struct lockdep_map dep_map;
-#endif
-};
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
-#else
-# define __RWSEM_DEP_MAP_INIT(lockname)
-#endif
-
-#define __RWSEM_INITIALIZER(name) \
- { RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait_lock), \
- LIST_HEAD_INIT((name).wait_list) \
- __RWSEM_DEP_MAP_INIT(name) }
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
-
-extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
-
-#define init_rwsem(sem) \
-do { \
- static struct lock_class_key __key; \
- \
- __init_rwsem((sem), #sem, &__key); \
-} while (0)
-
-static inline void init_rwsem(struct rw_semaphore *sem)
-{
- sem->count = RWSEM_UNLOCKED_VALUE;
- spin_lock_init(&sem->wait_lock);
- INIT_LIST_HEAD(&sem->wait_list);
-}
/*
* lock for reading
@@ -179,10 +128,5 @@ static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
return atomic_add_return(delta, (atomic_t *)(&sem->count));
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* __KERNEL__ */
#endif /* _ASM_SH_RWSEM_H */
diff --git a/trunk/arch/sh/include/asm/sections.h b/trunk/arch/sh/include/asm/sections.h
index a78701da775b..4a5350037c8f 100644
--- a/trunk/arch/sh/include/asm/sections.h
+++ b/trunk/arch/sh/include/asm/sections.h
@@ -3,7 +3,7 @@
#include
-extern void __nosave_begin, __nosave_end;
+extern long __nosave_begin, __nosave_end;
extern long __machvec_start, __machvec_end;
extern char __uncached_start, __uncached_end;
extern char _ebss[];
diff --git a/trunk/arch/sh/kernel/cpu/sh4/setup-sh7750.c b/trunk/arch/sh/kernel/cpu/sh4/setup-sh7750.c
index 672944f5b19c..e53b4b38bd11 100644
--- a/trunk/arch/sh/kernel/cpu/sh4/setup-sh7750.c
+++ b/trunk/arch/sh/kernel/cpu/sh4/setup-sh7750.c
@@ -14,7 +14,7 @@
#include
#include
#include
-#include
+#include
static struct resource rtc_resources[] = {
[0] = {
@@ -255,12 +255,17 @@ static struct platform_device *sh7750_early_devices[] __initdata = {
void __init plat_early_device_setup(void)
{
+ struct platform_device *dev[1];
+
if (mach_is_rts7751r2d()) {
scif_platform_data.scscr |= SCSCR_CKE1;
- early_platform_add_devices(&scif_device, 1);
+ dev[0] = &scif_device;
+ early_platform_add_devices(dev, 1);
} else {
- early_platform_add_devices(&sci_device, 1);
- early_platform_add_devices(&scif_device, 1);
+ dev[0] = &sci_device;
+ early_platform_add_devices(dev, 1);
+ dev[0] = &scif_device;
+ early_platform_add_devices(dev, 1);
}
early_platform_add_devices(sh7750_early_devices,
diff --git a/trunk/arch/sh/kernel/vmlinux.lds.S b/trunk/arch/sh/kernel/vmlinux.lds.S
index 7f8a709c3ada..af4d46187a79 100644
--- a/trunk/arch/sh/kernel/vmlinux.lds.S
+++ b/trunk/arch/sh/kernel/vmlinux.lds.S
@@ -66,7 +66,7 @@ SECTIONS
__machvec_end = .;
}
- PERCPU(PAGE_SIZE)
+ PERCPU(L1_CACHE_BYTES, PAGE_SIZE)
/*
* .exit.text is discarded at runtime, not link time, to deal with
diff --git a/trunk/arch/sh/lib/delay.c b/trunk/arch/sh/lib/delay.c
index faa8f86c0db4..0901b2f14e15 100644
--- a/trunk/arch/sh/lib/delay.c
+++ b/trunk/arch/sh/lib/delay.c
@@ -10,6 +10,16 @@
void __delay(unsigned long loops)
{
__asm__ __volatile__(
+ /*
+ * ST40-300 appears to have an issue with this code,
+ * normally taking two cycles each loop, as with all
+ * other SH variants. If however the branch and the
+ * delay slot straddle an 8 byte boundary, this increases
+ * to 3 cycles.
+ * This align directive ensures this doesn't occur.
+ */
+ ".balign 8\n\t"
+
"tst %0, %0\n\t"
"1:\t"
"bf/s 1b\n\t"
diff --git a/trunk/arch/sh/mm/cache.c b/trunk/arch/sh/mm/cache.c
index 88d3dc3d30d5..5a580ea04429 100644
--- a/trunk/arch/sh/mm/cache.c
+++ b/trunk/arch/sh/mm/cache.c
@@ -108,7 +108,8 @@ void copy_user_highpage(struct page *to, struct page *from,
kunmap_atomic(vfrom, KM_USER0);
}
- if (pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK))
+ if (pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK) ||
+ (vma->vm_flags & VM_EXEC))
__flush_purge_region(vto, PAGE_SIZE);
kunmap_atomic(vto, KM_USER1);
diff --git a/trunk/arch/sparc/include/asm/fcntl.h b/trunk/arch/sparc/include/asm/fcntl.h
index 38f37b333cc7..d0b83f66f356 100644
--- a/trunk/arch/sparc/include/asm/fcntl.h
+++ b/trunk/arch/sparc/include/asm/fcntl.h
@@ -34,6 +34,8 @@
#define __O_SYNC 0x800000
#define O_SYNC (__O_SYNC|O_DSYNC)
+#define O_PATH 0x1000000
+
#define F_GETOWN 5 /* for sockets. */
#define F_SETOWN 6 /* for sockets. */
#define F_GETLK 7
diff --git a/trunk/arch/sparc/include/asm/futex_64.h b/trunk/arch/sparc/include/asm/futex_64.h
index 47f95839dc69..444e7bea23bc 100644
--- a/trunk/arch/sparc/include/asm/futex_64.h
+++ b/trunk/arch/sparc/include/asm/futex_64.h
@@ -30,7 +30,7 @@
: "r" (uaddr), "r" (oparg), "i" (-EFAULT) \
: "memory")
-static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -38,7 +38,7 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret, tem;
- if (unlikely(!access_ok(VERIFY_WRITE, uaddr, sizeof(int))))
+ if (unlikely(!access_ok(VERIFY_WRITE, uaddr, sizeof(u32))))
return -EFAULT;
if (unlikely((((unsigned long) uaddr) & 0x3UL)))
return -EINVAL;
@@ -85,26 +85,30 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
}
static inline int
-futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
+ int ret = 0;
+
__asm__ __volatile__(
- "\n1: casa [%3] %%asi, %2, %0\n"
+ "\n1: casa [%4] %%asi, %3, %1\n"
"2:\n"
" .section .fixup,#alloc,#execinstr\n"
" .align 4\n"
"3: sethi %%hi(2b), %0\n"
" jmpl %0 + %%lo(2b), %%g0\n"
- " mov %4, %0\n"
+ " mov %5, %0\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
" .align 4\n"
" .word 1b, 3b\n"
" .previous\n"
- : "=r" (newval)
- : "0" (newval), "r" (oldval), "r" (uaddr), "i" (-EFAULT)
+ : "+r" (ret), "=r" (newval)
+ : "1" (newval), "r" (oldval), "r" (uaddr), "i" (-EFAULT)
: "memory");
- return newval;
+ *uval = newval;
+ return ret;
}
#endif /* !(_SPARC64_FUTEX_H) */
diff --git a/trunk/arch/sparc/include/asm/pcr.h b/trunk/arch/sparc/include/asm/pcr.h
index a2f5c61f924e..843e4faf6a50 100644
--- a/trunk/arch/sparc/include/asm/pcr.h
+++ b/trunk/arch/sparc/include/asm/pcr.h
@@ -43,4 +43,6 @@ static inline u64 picl_value(unsigned int nmi_hz)
extern u64 pcr_enable;
+extern int pcr_arch_init(void);
+
#endif /* __PCR_H */
diff --git a/trunk/arch/sparc/include/asm/rwsem.h b/trunk/arch/sparc/include/asm/rwsem.h
index a2b4302869bc..069bf4d663a1 100644
--- a/trunk/arch/sparc/include/asm/rwsem.h
+++ b/trunk/arch/sparc/include/asm/rwsem.h
@@ -13,53 +13,12 @@
#ifdef __KERNEL__
-#include
-#include
-
-struct rwsem_waiter;
-
-struct rw_semaphore {
- signed long count;
#define RWSEM_UNLOCKED_VALUE 0x00000000L
#define RWSEM_ACTIVE_BIAS 0x00000001L
#define RWSEM_ACTIVE_MASK 0xffffffffL
#define RWSEM_WAITING_BIAS (-RWSEM_ACTIVE_MASK-1)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
- spinlock_t wait_lock;
- struct list_head wait_list;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
- struct lockdep_map dep_map;
-#endif
-};
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
-#else
-# define __RWSEM_DEP_MAP_INIT(lockname)
-#endif
-
-#define __RWSEM_INITIALIZER(name) \
-{ RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait_lock), \
- LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) }
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-extern struct rw_semaphore *rwsem_down_read_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_down_write_failed(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem);
-extern struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem);
-
-extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
-
-#define init_rwsem(sem) \
-do { \
- static struct lock_class_key __key; \
- \
- __init_rwsem((sem), #sem, &__key); \
-} while (0)
/*
* lock for reading
@@ -160,11 +119,6 @@ static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
return atomic64_add_return(delta, (atomic64_t *)(&sem->count));
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* __KERNEL__ */
#endif /* _SPARC64_RWSEM_H */
diff --git a/trunk/arch/sparc/kernel/iommu.c b/trunk/arch/sparc/kernel/iommu.c
index 47977a77f6c6..72509d0e34be 100644
--- a/trunk/arch/sparc/kernel/iommu.c
+++ b/trunk/arch/sparc/kernel/iommu.c
@@ -255,10 +255,9 @@ static inline iopte_t *alloc_npages(struct device *dev, struct iommu *iommu,
static int iommu_alloc_ctx(struct iommu *iommu)
{
int lowest = iommu->ctx_lowest_free;
- int sz = IOMMU_NUM_CTXS - lowest;
- int n = find_next_zero_bit(iommu->ctx_bitmap, sz, lowest);
+ int n = find_next_zero_bit(iommu->ctx_bitmap, IOMMU_NUM_CTXS, lowest);
- if (unlikely(n == sz)) {
+ if (unlikely(n == IOMMU_NUM_CTXS)) {
n = find_next_zero_bit(iommu->ctx_bitmap, lowest, 1);
if (unlikely(n == lowest)) {
printk(KERN_WARNING "IOMMU: Ran out of contexts.\n");
diff --git a/trunk/arch/sparc/kernel/pcic.c b/trunk/arch/sparc/kernel/pcic.c
index aeaa09a3c655..2cdc131b50ac 100644
--- a/trunk/arch/sparc/kernel/pcic.c
+++ b/trunk/arch/sparc/kernel/pcic.c
@@ -700,10 +700,8 @@ static void pcic_clear_clock_irq(void)
static irqreturn_t pcic_timer_handler (int irq, void *h)
{
- write_seqlock(&xtime_lock); /* Dummy, to show that we remember */
pcic_clear_clock_irq();
- do_timer(1);
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
#endif
diff --git a/trunk/arch/sparc/kernel/pcr.c b/trunk/arch/sparc/kernel/pcr.c
index ae96cf52a955..7c2ced612b8f 100644
--- a/trunk/arch/sparc/kernel/pcr.c
+++ b/trunk/arch/sparc/kernel/pcr.c
@@ -167,5 +167,3 @@ int __init pcr_arch_init(void)
unregister_perf_hsvc();
return err;
}
-
-early_initcall(pcr_arch_init);
diff --git a/trunk/arch/sparc/kernel/smp_64.c b/trunk/arch/sparc/kernel/smp_64.c
index b6a2b8f47040..555a76d1f4a1 100644
--- a/trunk/arch/sparc/kernel/smp_64.c
+++ b/trunk/arch/sparc/kernel/smp_64.c
@@ -49,6 +49,7 @@
#include
#include
#include
+#include
#include "cpumap.h"
@@ -1358,6 +1359,7 @@ void __cpu_die(unsigned int cpu)
void __init smp_cpus_done(unsigned int max_cpus)
{
+ pcr_arch_init();
}
void smp_send_reschedule(int cpu)
diff --git a/trunk/arch/sparc/kernel/time_32.c b/trunk/arch/sparc/kernel/time_32.c
index 9c743b1886ff..4211bfc9bcad 100644
--- a/trunk/arch/sparc/kernel/time_32.c
+++ b/trunk/arch/sparc/kernel/time_32.c
@@ -85,7 +85,7 @@ int update_persistent_clock(struct timespec now)
/*
* timer_interrupt() needs to keep up the real-time clock,
- * as well as call the "do_timer()" routine every clocktick
+ * as well as call the "xtime_update()" routine every clocktick
*/
#define TICK_SIZE (tick_nsec / 1000)
@@ -96,14 +96,9 @@ static irqreturn_t timer_interrupt(int dummy, void *dev_id)
profile_tick(CPU_PROFILING);
#endif
- /* Protect counter clear so that do_gettimeoffset works */
- write_seqlock(&xtime_lock);
-
clear_clock_irq();
- do_timer(1);
-
- write_sequnlock(&xtime_lock);
+ xtime_update(1);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
diff --git a/trunk/arch/sparc/kernel/una_asm_32.S b/trunk/arch/sparc/kernel/una_asm_32.S
index 8cc03458eb7e..8f096e84a937 100644
--- a/trunk/arch/sparc/kernel/una_asm_32.S
+++ b/trunk/arch/sparc/kernel/una_asm_32.S
@@ -24,9 +24,9 @@ retl_efault:
.globl __do_int_store
__do_int_store:
ld [%o2], %g1
- cmp %1, 2
+ cmp %o1, 2
be 2f
- cmp %1, 4
+ cmp %o1, 4
be 1f
srl %g1, 24, %g2
srl %g1, 16, %g7
diff --git a/trunk/arch/sparc/kernel/vmlinux.lds.S b/trunk/arch/sparc/kernel/vmlinux.lds.S
index 0c1e6783657f..92b557afe535 100644
--- a/trunk/arch/sparc/kernel/vmlinux.lds.S
+++ b/trunk/arch/sparc/kernel/vmlinux.lds.S
@@ -108,7 +108,7 @@ SECTIONS
__sun4v_2insn_patch_end = .;
}
- PERCPU(PAGE_SIZE)
+ PERCPU(SMP_CACHE_BYTES, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
__init_end = .;
diff --git a/trunk/arch/sparc/lib/atomic32.c b/trunk/arch/sparc/lib/atomic32.c
index cbddeb38ffda..d3c7a12ad879 100644
--- a/trunk/arch/sparc/lib/atomic32.c
+++ b/trunk/arch/sparc/lib/atomic32.c
@@ -16,7 +16,7 @@
#define ATOMIC_HASH(a) (&__atomic_hash[(((unsigned long)a)>>8) & (ATOMIC_HASH_SIZE-1)])
spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] = {
- [0 ... (ATOMIC_HASH_SIZE-1)] = SPIN_LOCK_UNLOCKED
+ [0 ... (ATOMIC_HASH_SIZE-1)] = __SPIN_LOCK_UNLOCKED(__atomic_hash)
};
#else /* SMP */
diff --git a/trunk/arch/sparc/lib/bitext.c b/trunk/arch/sparc/lib/bitext.c
index 764b3eb7b604..48d00e72ce15 100644
--- a/trunk/arch/sparc/lib/bitext.c
+++ b/trunk/arch/sparc/lib/bitext.c
@@ -10,7 +10,7 @@
*/
#include
-#include
+#include
#include
@@ -80,8 +80,7 @@ int bit_map_string_get(struct bit_map *t, int len, int align)
while (test_bit(offset + i, t->map) == 0) {
i++;
if (i == len) {
- for (i = 0; i < len; i++)
- __set_bit(offset + i, t->map);
+ bitmap_set(t->map, offset, len);
if (offset == t->first_free)
t->first_free = find_next_zero_bit
(t->map, t->size,
diff --git a/trunk/arch/tile/include/asm/futex.h b/trunk/arch/tile/include/asm/futex.h
index fe0d10dcae57..d03ec124a598 100644
--- a/trunk/arch/tile/include/asm/futex.h
+++ b/trunk/arch/tile/include/asm/futex.h
@@ -29,16 +29,16 @@
#include
#include
-extern struct __get_user futex_set(int __user *v, int i);
-extern struct __get_user futex_add(int __user *v, int n);
-extern struct __get_user futex_or(int __user *v, int n);
-extern struct __get_user futex_andn(int __user *v, int n);
-extern struct __get_user futex_cmpxchg(int __user *v, int o, int n);
+extern struct __get_user futex_set(u32 __user *v, int i);
+extern struct __get_user futex_add(u32 __user *v, int n);
+extern struct __get_user futex_or(u32 __user *v, int n);
+extern struct __get_user futex_andn(u32 __user *v, int n);
+extern struct __get_user futex_cmpxchg(u32 __user *v, int o, int n);
#ifndef __tilegx__
-extern struct __get_user futex_xor(int __user *v, int n);
+extern struct __get_user futex_xor(u32 __user *v, int n);
#else
-static inline struct __get_user futex_xor(int __user *uaddr, int n)
+static inline struct __get_user futex_xor(u32 __user *uaddr, int n)
{
struct __get_user asm_ret = __get_user_4(uaddr);
if (!asm_ret.err) {
@@ -53,7 +53,7 @@ static inline struct __get_user futex_xor(int __user *uaddr, int n)
}
#endif
-static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -65,7 +65,7 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
pagefault_disable();
@@ -119,16 +119,17 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
return ret;
}
-static inline int futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval,
- int newval)
+static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
struct __get_user asm_ret;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
asm_ret = futex_cmpxchg(uaddr, oldval, newval);
- return asm_ret.err ? asm_ret.err : asm_ret.val;
+ *uval = asm_ret.val;
+ return asm_ret.err;
}
#ifndef __tilegx__
diff --git a/trunk/arch/tile/kernel/vmlinux.lds.S b/trunk/arch/tile/kernel/vmlinux.lds.S
index 25fdc0c1839a..c6ce378e0678 100644
--- a/trunk/arch/tile/kernel/vmlinux.lds.S
+++ b/trunk/arch/tile/kernel/vmlinux.lds.S
@@ -63,7 +63,7 @@ SECTIONS
*(.init.page)
} :data =0
INIT_DATA_SECTION(16)
- PERCPU(PAGE_SIZE)
+ PERCPU(L2_CACHE_BYTES, PAGE_SIZE)
. = ALIGN(PAGE_SIZE);
VMLINUX_SYMBOL(_einitdata) = .;
diff --git a/trunk/arch/um/Kconfig.common b/trunk/arch/um/Kconfig.common
index e351e14b4339..1e78940218c0 100644
--- a/trunk/arch/um/Kconfig.common
+++ b/trunk/arch/um/Kconfig.common
@@ -7,6 +7,7 @@ config UML
bool
default y
select HAVE_GENERIC_HARDIRQS
+ select GENERIC_HARDIRQS_NO_DEPRECATED
config MMU
bool
diff --git a/trunk/arch/um/Kconfig.x86 b/trunk/arch/um/Kconfig.x86
index 5ee328099c63..02fb017fed47 100644
--- a/trunk/arch/um/Kconfig.x86
+++ b/trunk/arch/um/Kconfig.x86
@@ -10,6 +10,8 @@ endmenu
config UML_X86
def_bool y
+ select GENERIC_FIND_FIRST_BIT
+ select GENERIC_FIND_NEXT_BIT
config 64BIT
bool
@@ -19,6 +21,9 @@ config X86_32
def_bool !64BIT
select HAVE_AOUT
+config X86_64
+ def_bool 64BIT
+
config RWSEM_XCHGADD_ALGORITHM
def_bool X86_XADD
diff --git a/trunk/arch/um/drivers/mconsole_kern.c b/trunk/arch/um/drivers/mconsole_kern.c
index 975613b23dcf..c70e047eed72 100644
--- a/trunk/arch/um/drivers/mconsole_kern.c
+++ b/trunk/arch/um/drivers/mconsole_kern.c
@@ -124,35 +124,18 @@ void mconsole_log(struct mc_request *req)
#if 0
void mconsole_proc(struct mc_request *req)
{
- struct nameidata nd;
struct vfsmount *mnt = current->nsproxy->pid_ns->proc_mnt;
struct file *file;
- int n, err;
+ int n;
char *ptr = req->request.data, *buf;
mm_segment_t old_fs = get_fs();
ptr += strlen("proc");
ptr = skip_spaces(ptr);
- err = vfs_path_lookup(mnt->mnt_root, mnt, ptr, LOOKUP_FOLLOW, &nd);
- if (err) {
- mconsole_reply(req, "Failed to look up file", 1, 0);
- goto out;
- }
-
- err = may_open(&nd.path, MAY_READ, O_RDONLY);
- if (result) {
- mconsole_reply(req, "Failed to open file", 1, 0);
- path_put(&nd.path);
- goto out;
- }
-
- file = dentry_open(nd.path.dentry, nd.path.mnt, O_RDONLY,
- current_cred());
- err = PTR_ERR(file);
+ file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY);
if (IS_ERR(file)) {
mconsole_reply(req, "Failed to open file", 1, 0);
- path_put(&nd.path);
goto out;
}
diff --git a/trunk/arch/um/drivers/ubd_kern.c b/trunk/arch/um/drivers/ubd_kern.c
index ba4a98ba39c0..620f5b70957d 100644
--- a/trunk/arch/um/drivers/ubd_kern.c
+++ b/trunk/arch/um/drivers/ubd_kern.c
@@ -185,7 +185,7 @@ struct ubd {
.no_cow = 0, \
.shared = 0, \
.cow = DEFAULT_COW, \
- .lock = SPIN_LOCK_UNLOCKED, \
+ .lock = __SPIN_LOCK_UNLOCKED(ubd_devs.lock), \
.request = NULL, \
.start_sg = 0, \
.end_sg = 0, \
diff --git a/trunk/arch/um/include/asm/common.lds.S b/trunk/arch/um/include/asm/common.lds.S
index ac55b9efa1ce..34bede8aad4a 100644
--- a/trunk/arch/um/include/asm/common.lds.S
+++ b/trunk/arch/um/include/asm/common.lds.S
@@ -42,7 +42,7 @@
INIT_SETUP(0)
}
- PERCPU(32)
+ PERCPU(32, 32)
.initcall.init : {
INIT_CALLS
diff --git a/trunk/arch/um/kernel/irq.c b/trunk/arch/um/kernel/irq.c
index 3f0ac9e0c966..64cfea80cfe2 100644
--- a/trunk/arch/um/kernel/irq.c
+++ b/trunk/arch/um/kernel/irq.c
@@ -35,8 +35,10 @@ int show_interrupts(struct seq_file *p, void *v)
}
if (i < NR_IRQS) {
- raw_spin_lock_irqsave(&irq_desc[i].lock, flags);
- action = irq_desc[i].action;
+ struct irq_desc *desc = irq_to_desc(i);
+
+ raw_spin_lock_irqsave(&desc->lock, flags);
+ action = desc->action;
if (!action)
goto skip;
seq_printf(p, "%3d: ",i);
@@ -46,7 +48,7 @@ int show_interrupts(struct seq_file *p, void *v)
for_each_online_cpu(j)
seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
#endif
- seq_printf(p, " %14s", irq_desc[i].chip->name);
+ seq_printf(p, " %14s", get_irq_desc_chip(desc)->name);
seq_printf(p, " %s", action->name);
for (action=action->next; action; action = action->next)
@@ -54,7 +56,7 @@ int show_interrupts(struct seq_file *p, void *v)
seq_putc(p, '\n');
skip:
- raw_spin_unlock_irqrestore(&irq_desc[i].lock, flags);
+ raw_spin_unlock_irqrestore(&desc->lock, flags);
} else if (i == NR_IRQS)
seq_putc(p, '\n');
@@ -360,10 +362,10 @@ EXPORT_SYMBOL(um_request_irq);
EXPORT_SYMBOL(reactivate_fd);
/*
- * irq_chip must define (startup || enable) &&
- * (shutdown || disable) && end
+ * irq_chip must define at least enable/disable and ack when
+ * the edge handler is used.
*/
-static void dummy(unsigned int irq)
+static void dummy(struct irq_data *d)
{
}
@@ -371,20 +373,17 @@ static void dummy(unsigned int irq)
static struct irq_chip normal_irq_type = {
.name = "SIGIO",
.release = free_irq_by_irq_and_dev,
- .disable = dummy,
- .enable = dummy,
- .ack = dummy,
- .end = dummy
+ .irq_disable = dummy,
+ .irq_enable = dummy,
+ .irq_ack = dummy,
};
static struct irq_chip SIGVTALRM_irq_type = {
.name = "SIGVTALRM",
.release = free_irq_by_irq_and_dev,
- .shutdown = dummy, /* never called */
- .disable = dummy,
- .enable = dummy,
- .ack = dummy,
- .end = dummy
+ .irq_disable = dummy,
+ .irq_enable = dummy,
+ .irq_ack = dummy,
};
void __init init_IRQ(void)
diff --git a/trunk/arch/x86/Kconfig b/trunk/arch/x86/Kconfig
index 1359bc9f4fd3..e1f65c46bc93 100644
--- a/trunk/arch/x86/Kconfig
+++ b/trunk/arch/x86/Kconfig
@@ -64,8 +64,12 @@ config X86
select HAVE_TEXT_POKE_SMP
select HAVE_GENERIC_HARDIRQS
select HAVE_SPARSE_IRQ
+ select GENERIC_FIND_FIRST_BIT
+ select GENERIC_FIND_NEXT_BIT
select GENERIC_IRQ_PROBE
select GENERIC_PENDING_IRQ if SMP
+ select GENERIC_IRQ_SHOW
+ select IRQ_FORCED_THREADING
select USE_GENERIC_SMP_HELPERS if SMP
config INSTRUCTION_DECODER
@@ -378,6 +382,8 @@ config X86_INTEL_CE
depends on X86_32
depends on X86_EXTENDED_PLATFORM
select X86_REBOOTFIXUPS
+ select OF
+ select OF_EARLY_FLATTREE
---help---
Select for the Intel CE media processor (CE4100) SOC.
This option compiles in support for the CE4100 SOC for settop
@@ -807,7 +813,7 @@ config X86_LOCAL_APIC
config X86_IO_APIC
def_bool y
- depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC
+ depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC
config X86_VISWS_APIC
def_bool y
@@ -1701,7 +1707,7 @@ config HAVE_ARCH_EARLY_PFN_TO_NID
depends on NUMA
config USE_PERCPU_NUMA_NODE_ID
- def_bool X86_64
+ def_bool y
depends on NUMA
menu "Power management and ACPI options"
@@ -2062,9 +2068,10 @@ config SCx200HR_TIMER
config OLPC
bool "One Laptop Per Child support"
+ depends on !X86_PAE
select GPIOLIB
- select OLPC_OPENFIRMWARE
- depends on !X86_64 && !X86_PAE
+ select OF
+ select OF_PROMTREE if PROC_DEVICETREE
---help---
Add support for detecting the unique features of the OLPC
XO hardware.
@@ -2075,21 +2082,6 @@ config OLPC_XO1
---help---
Add support for non-essential features of the OLPC XO-1 laptop.
-config OLPC_OPENFIRMWARE
- bool "Support for OLPC's Open Firmware"
- depends on !X86_64 && !X86_PAE
- default n
- select OF
- help
- This option adds support for the implementation of Open Firmware
- that is used on the OLPC XO-1 Children's Machine.
- If unsure, say N here.
-
-config OLPC_OPENFIRMWARE_DT
- bool
- default y if OLPC_OPENFIRMWARE && PROC_DEVICETREE
- select OF_PROMTREE
-
endif # X86_32
config AMD_NB
@@ -2134,6 +2126,11 @@ config SYSVIPC_COMPAT
def_bool y
depends on COMPAT && SYSVIPC
+config KEYS_COMPAT
+ bool
+ depends on COMPAT && KEYS
+ default y
+
endmenu
diff --git a/trunk/arch/x86/Kconfig.cpu b/trunk/arch/x86/Kconfig.cpu
index 283c5a6a03a6..ed47e6e1747f 100644
--- a/trunk/arch/x86/Kconfig.cpu
+++ b/trunk/arch/x86/Kconfig.cpu
@@ -294,11 +294,6 @@ config X86_GENERIC
endif
-config X86_CPU
- def_bool y
- select GENERIC_FIND_FIRST_BIT
- select GENERIC_FIND_NEXT_BIT
-
#
# Define implied options from the CPU selection here
config X86_INTERNODE_CACHE_SHIFT
diff --git a/trunk/arch/x86/boot/compressed/mkpiggy.c b/trunk/arch/x86/boot/compressed/mkpiggy.c
index 646aa78ba5fd..46a823882437 100644
--- a/trunk/arch/x86/boot/compressed/mkpiggy.c
+++ b/trunk/arch/x86/boot/compressed/mkpiggy.c
@@ -62,7 +62,12 @@ int main(int argc, char *argv[])
if (fseek(f, -4L, SEEK_END)) {
perror(argv[1]);
}
- fread(&olen, sizeof olen, 1, f);
+
+ if (fread(&olen, sizeof(olen), 1, f) != 1) {
+ perror(argv[1]);
+ return 1;
+ }
+
ilen = ftell(f);
olen = getle32(&olen);
fclose(f);
diff --git a/trunk/arch/x86/crypto/aesni-intel_glue.c b/trunk/arch/x86/crypto/aesni-intel_glue.c
index e1e60c7d5813..e0e6340c8dad 100644
--- a/trunk/arch/x86/crypto/aesni-intel_glue.c
+++ b/trunk/arch/x86/crypto/aesni-intel_glue.c
@@ -873,22 +873,18 @@ rfc4106_set_hash_subkey(u8 *hash_subkey, const u8 *key, unsigned int key_len)
crypto_ablkcipher_clear_flags(ctr_tfm, ~0);
ret = crypto_ablkcipher_setkey(ctr_tfm, key, key_len);
- if (ret) {
- crypto_free_ablkcipher(ctr_tfm);
- return ret;
- }
+ if (ret)
+ goto out_free_ablkcipher;
+ ret = -ENOMEM;
req = ablkcipher_request_alloc(ctr_tfm, GFP_KERNEL);
- if (!req) {
- crypto_free_ablkcipher(ctr_tfm);
- return -EINVAL;
- }
+ if (!req)
+ goto out_free_ablkcipher;
req_data = kmalloc(sizeof(*req_data), GFP_KERNEL);
- if (!req_data) {
- crypto_free_ablkcipher(ctr_tfm);
- return -ENOMEM;
- }
+ if (!req_data)
+ goto out_free_request;
+
memset(req_data->iv, 0, sizeof(req_data->iv));
/* Clear the data in the hash sub key container to zero.*/
@@ -913,8 +909,10 @@ rfc4106_set_hash_subkey(u8 *hash_subkey, const u8 *key, unsigned int key_len)
if (!ret)
ret = req_data->result.err;
}
- ablkcipher_request_free(req);
kfree(req_data);
+out_free_request:
+ ablkcipher_request_free(req);
+out_free_ablkcipher:
crypto_free_ablkcipher(ctr_tfm);
return ret;
}
diff --git a/trunk/arch/x86/ia32/ia32entry.S b/trunk/arch/x86/ia32/ia32entry.S
index 518bb99c3394..430312ba6e3f 100644
--- a/trunk/arch/x86/ia32/ia32entry.S
+++ b/trunk/arch/x86/ia32/ia32entry.S
@@ -25,6 +25,8 @@
#define sysretl_audit ia32_ret_from_sys_call
#endif
+ .section .entry.text, "ax"
+
#define IA32_NR_syscalls ((ia32_syscall_end - ia32_sys_call_table)/8)
.macro IA32_ARG_FIXUP noebp=0
@@ -126,26 +128,20 @@ ENTRY(ia32_sysenter_target)
*/
ENABLE_INTERRUPTS(CLBR_NONE)
movl %ebp,%ebp /* zero extension */
- pushq $__USER32_DS
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi $__USER32_DS
/*CFI_REL_OFFSET ss,0*/
- pushq %rbp
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi %rbp
CFI_REL_OFFSET rsp,0
- pushfq
- CFI_ADJUST_CFA_OFFSET 8
+ pushfq_cfi
/*CFI_REL_OFFSET rflags,0*/
movl 8*3-THREAD_SIZE+TI_sysenter_return(%rsp), %r10d
CFI_REGISTER rip,r10
- pushq $__USER32_CS
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi $__USER32_CS
/*CFI_REL_OFFSET cs,0*/
movl %eax, %eax
- pushq %r10
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi %r10
CFI_REL_OFFSET rip,0
- pushq %rax
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi %rax
cld
SAVE_ARGS 0,0,1
/* no need to do an access_ok check here because rbp has been
@@ -182,11 +178,9 @@ sysexit_from_sys_call:
xorq %r9,%r9
xorq %r10,%r10
xorq %r11,%r11
- popfq
- CFI_ADJUST_CFA_OFFSET -8
+ popfq_cfi
/*CFI_RESTORE rflags*/
- popq %rcx /* User %esp */
- CFI_ADJUST_CFA_OFFSET -8
+ popq_cfi %rcx /* User %esp */
CFI_REGISTER rsp,rcx
TRACE_IRQS_ON
ENABLE_INTERRUPTS_SYSEXIT32
@@ -421,8 +415,7 @@ ENTRY(ia32_syscall)
*/
ENABLE_INTERRUPTS(CLBR_NONE)
movl %eax,%eax
- pushq %rax
- CFI_ADJUST_CFA_OFFSET 8
+ pushq_cfi %rax
cld
/* note the registers are not zero extended to the sf.
this could be a problem. */
@@ -851,4 +844,7 @@ ia32_sys_call_table:
.quad sys_fanotify_init
.quad sys32_fanotify_mark
.quad sys_prlimit64 /* 340 */
+ .quad sys_name_to_handle_at
+ .quad compat_sys_open_by_handle_at
+ .quad compat_sys_clock_adjtime
ia32_syscall_end:
diff --git a/trunk/arch/x86/include/asm/acpi.h b/trunk/arch/x86/include/asm/acpi.h
index 4784df504d28..448d73a371ba 100644
--- a/trunk/arch/x86/include/asm/acpi.h
+++ b/trunk/arch/x86/include/asm/acpi.h
@@ -89,6 +89,7 @@ extern int acpi_disabled;
extern int acpi_pci_disabled;
extern int acpi_skip_timer_override;
extern int acpi_use_timer_override;
+extern int acpi_fix_pin2_polarity;
extern u8 acpi_sci_flags;
extern int acpi_sci_override_gsi;
@@ -187,15 +188,7 @@ struct bootnode;
#ifdef CONFIG_ACPI_NUMA
extern int acpi_numa;
-extern void acpi_get_nodes(struct bootnode *physnodes, unsigned long start,
- unsigned long end);
-extern int acpi_scan_nodes(unsigned long start, unsigned long end);
-#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
-
-#ifdef CONFIG_NUMA_EMU
-extern void acpi_fake_nodes(const struct bootnode *fake_nodes,
- int num_nodes);
-#endif
+extern int x86_acpi_numa_init(void);
#endif /* CONFIG_ACPI_NUMA */
#define acpi_unlazy_tlb(x) leave_mm(x)
diff --git a/trunk/arch/x86/include/asm/amd_nb.h b/trunk/arch/x86/include/asm/amd_nb.h
index 64dc82ee19f0..e264ae5a1443 100644
--- a/trunk/arch/x86/include/asm/amd_nb.h
+++ b/trunk/arch/x86/include/asm/amd_nb.h
@@ -9,23 +9,20 @@ struct amd_nb_bus_dev_range {
u8 dev_limit;
};
-extern struct pci_device_id amd_nb_misc_ids[];
+extern const struct pci_device_id amd_nb_misc_ids[];
extern const struct amd_nb_bus_dev_range amd_nb_bus_dev_ranges[];
struct bootnode;
extern int early_is_amd_nb(u32 value);
extern int amd_cache_northbridges(void);
extern void amd_flush_garts(void);
-extern int amd_numa_init(unsigned long start_pfn, unsigned long end_pfn);
-extern int amd_scan_nodes(void);
-
-#ifdef CONFIG_NUMA_EMU
-extern void amd_fake_nodes(const struct bootnode *nodes, int nr_nodes);
-extern void amd_get_nodes(struct bootnode *nodes);
-#endif
+extern int amd_numa_init(void);
+extern int amd_get_subcaches(int);
+extern int amd_set_subcaches(int, int);
struct amd_northbridge {
struct pci_dev *misc;
+ struct pci_dev *link;
};
struct amd_northbridge_info {
@@ -37,6 +34,7 @@ extern struct amd_northbridge_info amd_northbridges;
#define AMD_NB_GART 0x1
#define AMD_NB_L3_INDEX_DISABLE 0x2
+#define AMD_NB_L3_PARTITIONING 0x4
#ifdef CONFIG_AMD_NB
diff --git a/trunk/arch/x86/include/asm/apic.h b/trunk/arch/x86/include/asm/apic.h
index 3c896946f4cc..a279d98ea95e 100644
--- a/trunk/arch/x86/include/asm/apic.h
+++ b/trunk/arch/x86/include/asm/apic.h
@@ -220,7 +220,6 @@ extern void enable_IR_x2apic(void);
extern int get_physical_broadcast(void);
-extern void apic_disable(void);
extern int lapic_get_maxlvt(void);
extern void clear_local_APIC(void);
extern void connect_bsp_APIC(void);
@@ -228,7 +227,6 @@ extern void disconnect_bsp_APIC(int virt_wire_setup);
extern void disable_local_APIC(void);
extern void lapic_shutdown(void);
extern int verify_local_APIC(void);
-extern void cache_APIC_registers(void);
extern void sync_Arb_IDs(void);
extern void init_bsp_APIC(void);
extern void setup_local_APIC(void);
@@ -239,8 +237,7 @@ void register_lapic_address(unsigned long address);
extern void setup_boot_APIC_clock(void);
extern void setup_secondary_APIC_clock(void);
extern int APIC_init_uniprocessor(void);
-extern void enable_NMI_through_LVT0(void);
-extern int apic_force_enable(void);
+extern int apic_force_enable(unsigned long addr);
/*
* On 32bit this is mach-xxx local
@@ -261,7 +258,6 @@ static inline void lapic_shutdown(void) { }
#define local_apic_timer_c2_ok 1
static inline void init_apic_mappings(void) { }
static inline void disable_local_APIC(void) { }
-static inline void apic_disable(void) { }
# define setup_boot_APIC_clock x86_init_noop
# define setup_secondary_APIC_clock x86_init_noop
#endif /* !CONFIG_X86_LOCAL_APIC */
@@ -307,8 +303,6 @@ struct apic {
void (*setup_apic_routing)(void);
int (*multi_timer_check)(int apic, int irq);
- int (*apicid_to_node)(int logical_apicid);
- int (*cpu_to_logical_apicid)(int cpu);
int (*cpu_present_to_apicid)(int mps_cpu);
void (*apicid_to_cpu_present)(int phys_apicid, physid_mask_t *retmap);
void (*setup_portio_remap)(void);
@@ -356,6 +350,23 @@ struct apic {
void (*icr_write)(u32 low, u32 high);
void (*wait_icr_idle)(void);
u32 (*safe_wait_icr_idle)(void);
+
+#ifdef CONFIG_X86_32
+ /*
+ * Called very early during boot from get_smp_config(). It should
+ * return the logical apicid. x86_[bios]_cpu_to_apicid is
+ * initialized before this function is called.
+ *
+ * If logical apicid can't be determined that early, the function
+ * may return BAD_APICID. Logical apicid will be configured after
+ * init_apic_ldr() while bringing up CPUs. Note that NUMA affinity
+ * won't be applied properly during early boot in this case.
+ */
+ int (*x86_32_early_logical_apicid)(int cpu);
+
+ /* determine CPU -> NUMA node mapping */
+ int (*x86_32_numa_cpu_node)(int cpu);
+#endif
};
/*
@@ -503,6 +514,11 @@ extern struct apic apic_noop;
extern struct apic apic_default;
+static inline int noop_x86_32_early_logical_apicid(int cpu)
+{
+ return BAD_APICID;
+}
+
/*
* Set up the logical destination ID.
*
@@ -522,7 +538,7 @@ static inline int default_phys_pkg_id(int cpuid_apic, int index_msb)
return cpuid_apic >> index_msb;
}
-extern int default_apicid_to_node(int logical_apicid);
+extern int default_x86_32_numa_cpu_node(int cpu);
#endif
@@ -558,12 +574,6 @@ static inline void default_ioapic_phys_id_map(physid_mask_t *phys_map, physid_ma
*retmap = *phys_map;
}
-/* Mapping from cpu number to logical apicid */
-static inline int default_cpu_to_logical_apicid(int cpu)
-{
- return 1 << cpu;
-}
-
static inline int __default_cpu_present_to_apicid(int mps_cpu)
{
if (mps_cpu < nr_cpu_ids && cpu_present(mps_cpu))
@@ -596,8 +606,4 @@ extern int default_check_phys_apicid_present(int phys_apicid);
#endif /* CONFIG_X86_LOCAL_APIC */
-#ifdef CONFIG_X86_32
-extern u8 cpu_2_logical_apicid[NR_CPUS];
-#endif
-
#endif /* _ASM_X86_APIC_H */
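/*
 * Editor's illustrative sketch, not part of the patch: a 32-bit APIC
 * driver fills in the two new callbacks in its struct apic. Flat
 * logical mode traditionally uses "1 << cpu" (the convention of the
 * removed default_cpu_to_logical_apicid()); the helper names below are
 * hypothetical.
 */
static int example_x86_32_early_logical_apicid(int cpu)
{
	return 1 << cpu;
}

static int example_x86_32_numa_cpu_node(int cpu)
{
	return default_x86_32_numa_cpu_node(cpu);
}

/* wired up in the (hypothetical) driver's struct apic initializer:
 *	.x86_32_early_logical_apicid	= example_x86_32_early_logical_apicid,
 *	.x86_32_numa_cpu_node		= example_x86_32_numa_cpu_node,
 */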
diff --git a/trunk/arch/x86/include/asm/apicdef.h b/trunk/arch/x86/include/asm/apicdef.h
index 47a30ff8e517..d87988bacf3e 100644
--- a/trunk/arch/x86/include/asm/apicdef.h
+++ b/trunk/arch/x86/include/asm/apicdef.h
@@ -426,4 +426,16 @@ struct local_apic {
#else
#define BAD_APICID 0xFFFFu
#endif
+
+enum ioapic_irq_destination_types {
+ dest_Fixed = 0,
+ dest_LowestPrio = 1,
+ dest_SMI = 2,
+ dest__reserved_1 = 3,
+ dest_NMI = 4,
+ dest_INIT = 5,
+ dest__reserved_2 = 6,
+ dest_ExtINT = 7
+};
+
#endif /* _ASM_X86_APICDEF_H */
diff --git a/trunk/arch/x86/include/asm/bootparam.h b/trunk/arch/x86/include/asm/bootparam.h
index c8bfe63a06de..e020d88ec02d 100644
--- a/trunk/arch/x86/include/asm/bootparam.h
+++ b/trunk/arch/x86/include/asm/bootparam.h
@@ -12,6 +12,7 @@
/* setup data types */
#define SETUP_NONE 0
#define SETUP_E820_EXT 1
+#define SETUP_DTB 2
/* extensible setup data list node */
struct setup_data {
diff --git a/trunk/arch/x86/include/asm/ce4100.h b/trunk/arch/x86/include/asm/ce4100.h
new file mode 100644
index 000000000000..e656ad8c0a2e
--- /dev/null
+++ b/trunk/arch/x86/include/asm/ce4100.h
@@ -0,0 +1,6 @@
+#ifndef _ASM_CE4100_H_
+#define _ASM_CE4100_H_
+
+int ce4100_pci_init(void);
+
+#endif
diff --git a/trunk/arch/x86/include/asm/cpufeature.h b/trunk/arch/x86/include/asm/cpufeature.h
index 220e2ea08e80..91f3e087cf21 100644
--- a/trunk/arch/x86/include/asm/cpufeature.h
+++ b/trunk/arch/x86/include/asm/cpufeature.h
@@ -160,6 +160,7 @@
#define X86_FEATURE_NODEID_MSR (6*32+19) /* NodeId MSR */
#define X86_FEATURE_TBM (6*32+21) /* trailing bit manipulations */
#define X86_FEATURE_TOPOEXT (6*32+22) /* topology extensions CPUID leafs */
+#define X86_FEATURE_PERFCTR_CORE (6*32+23) /* core performance counter extensions */
/*
* Auxiliary flags: Linux defined - For features scattered in various
@@ -279,6 +280,7 @@ extern const char * const x86_power_flags[32];
#define cpu_has_xsave boot_cpu_has(X86_FEATURE_XSAVE)
#define cpu_has_hypervisor boot_cpu_has(X86_FEATURE_HYPERVISOR)
#define cpu_has_pclmulqdq boot_cpu_has(X86_FEATURE_PCLMULQDQ)
+#define cpu_has_perfctr_core boot_cpu_has(X86_FEATURE_PERFCTR_CORE)
#if defined(CONFIG_X86_INVLPG) || defined(CONFIG_X86_64)
# define cpu_has_invlpg 1
diff --git a/trunk/arch/x86/include/asm/e820.h b/trunk/arch/x86/include/asm/e820.h
index e99d55d74df5..908b96957d88 100644
--- a/trunk/arch/x86/include/asm/e820.h
+++ b/trunk/arch/x86/include/asm/e820.h
@@ -96,7 +96,7 @@ extern void e820_setup_gap(void);
extern int e820_search_gap(unsigned long *gapstart, unsigned long *gapsize,
unsigned long start_addr, unsigned long long end_addr);
struct setup_data;
-extern void parse_e820_ext(struct setup_data *data, unsigned long pa_data);
+extern void parse_e820_ext(struct setup_data *data);
#if defined(CONFIG_X86_64) || \
(defined(CONFIG_X86_32) && defined(CONFIG_HIBERNATION))
diff --git a/trunk/arch/x86/include/asm/entry_arch.h b/trunk/arch/x86/include/asm/entry_arch.h
index 57650ab4a5f5..1cd6d26a0a8d 100644
--- a/trunk/arch/x86/include/asm/entry_arch.h
+++ b/trunk/arch/x86/include/asm/entry_arch.h
@@ -16,10 +16,13 @@ BUILD_INTERRUPT(call_function_single_interrupt,CALL_FUNCTION_SINGLE_VECTOR)
BUILD_INTERRUPT(irq_move_cleanup_interrupt,IRQ_MOVE_CLEANUP_VECTOR)
BUILD_INTERRUPT(reboot_interrupt,REBOOT_VECTOR)
-.irpc idx, "01234567"
+.irp idx,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15, \
+ 16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
+.if NUM_INVALIDATE_TLB_VECTORS > \idx
BUILD_INTERRUPT3(invalidate_interrupt\idx,
(INVALIDATE_TLB_VECTOR_START)+\idx,
smp_invalidate_interrupt)
+.endif
.endr
#endif
diff --git a/trunk/arch/x86/include/asm/frame.h b/trunk/arch/x86/include/asm/frame.h
index 06850a7194e1..2c6fc9e62812 100644
--- a/trunk/arch/x86/include/asm/frame.h
+++ b/trunk/arch/x86/include/asm/frame.h
@@ -7,14 +7,12 @@
frame pointer later */
#ifdef CONFIG_FRAME_POINTER
.macro FRAME
- pushl %ebp
- CFI_ADJUST_CFA_OFFSET 4
+ pushl_cfi %ebp
CFI_REL_OFFSET ebp,0
movl %esp,%ebp
.endm
.macro ENDFRAME
- popl %ebp
- CFI_ADJUST_CFA_OFFSET -4
+ popl_cfi %ebp
CFI_RESTORE ebp
.endm
#else
diff --git a/trunk/arch/x86/include/asm/futex.h b/trunk/arch/x86/include/asm/futex.h
index 1f11ce44e956..d09bb03653f0 100644
--- a/trunk/arch/x86/include/asm/futex.h
+++ b/trunk/arch/x86/include/asm/futex.h
@@ -37,7 +37,7 @@
"+m" (*uaddr), "=&r" (tem) \
: "r" (oparg), "i" (-EFAULT), "1" (0))
-static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
+static inline int futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
@@ -48,7 +48,7 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_BSWAP)
@@ -109,9 +109,10 @@ static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
return ret;
}
-static inline int futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval,
- int newval)
+static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
{
+ int ret = 0;
#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_BSWAP)
/* Real i386 machines have no cmpxchg instruction */
@@ -119,21 +120,22 @@ static inline int futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval,
return -ENOSYS;
#endif
- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
return -EFAULT;
- asm volatile("1:\t" LOCK_PREFIX "cmpxchgl %3, %1\n"
+ asm volatile("1:\t" LOCK_PREFIX "cmpxchgl %4, %2\n"
"2:\t.section .fixup, \"ax\"\n"
- "3:\tmov %2, %0\n"
+ "3:\tmov %3, %0\n"
"\tjmp 2b\n"
"\t.previous\n"
_ASM_EXTABLE(1b, 3b)
- : "=a" (oldval), "+m" (*uaddr)
- : "i" (-EFAULT), "r" (newval), "0" (oldval)
+ : "+r" (ret), "=a" (oldval), "+m" (*uaddr)
+ : "i" (-EFAULT), "r" (newval), "1" (oldval)
: "memory"
);
- return oldval;
+ *uval = oldval;
+ return ret;
}
#endif
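/*
 * Editor's illustrative sketch, not part of the patch: with the new
 * signature the fault status and the old value are reported separately,
 * so a caller no longer has to guess whether the return value is data
 * or -EFAULT. The helper below is hypothetical.
 */
static inline int example_futex_cmpxchg(u32 __user *uaddr, u32 expected, u32 desired)
{
	u32 curval;
	int err;

	err = futex_atomic_cmpxchg_inatomic(&curval, uaddr, expected, desired);
	if (err)
		return err;		/* -EFAULT (or -ENOSYS on a real i386) */

	return curval == expected ? 0 : -EAGAIN;	/* value changed under us */
}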
diff --git a/trunk/arch/x86/include/asm/hw_irq.h b/trunk/arch/x86/include/asm/hw_irq.h
index 0274ec5a7e62..bb9efe8706e2 100644
--- a/trunk/arch/x86/include/asm/hw_irq.h
+++ b/trunk/arch/x86/include/asm/hw_irq.h
@@ -45,6 +45,30 @@ extern void invalidate_interrupt4(void);
extern void invalidate_interrupt5(void);
extern void invalidate_interrupt6(void);
extern void invalidate_interrupt7(void);
+extern void invalidate_interrupt8(void);
+extern void invalidate_interrupt9(void);
+extern void invalidate_interrupt10(void);
+extern void invalidate_interrupt11(void);
+extern void invalidate_interrupt12(void);
+extern void invalidate_interrupt13(void);
+extern void invalidate_interrupt14(void);
+extern void invalidate_interrupt15(void);
+extern void invalidate_interrupt16(void);
+extern void invalidate_interrupt17(void);
+extern void invalidate_interrupt18(void);
+extern void invalidate_interrupt19(void);
+extern void invalidate_interrupt20(void);
+extern void invalidate_interrupt21(void);
+extern void invalidate_interrupt22(void);
+extern void invalidate_interrupt23(void);
+extern void invalidate_interrupt24(void);
+extern void invalidate_interrupt25(void);
+extern void invalidate_interrupt26(void);
+extern void invalidate_interrupt27(void);
+extern void invalidate_interrupt28(void);
+extern void invalidate_interrupt29(void);
+extern void invalidate_interrupt30(void);
+extern void invalidate_interrupt31(void);
extern void irq_move_cleanup_interrupt(void);
extern void reboot_interrupt(void);
diff --git a/trunk/arch/x86/include/asm/init.h b/trunk/arch/x86/include/asm/init.h
index 36fb1a6a5109..8dbe353e41e1 100644
--- a/trunk/arch/x86/include/asm/init.h
+++ b/trunk/arch/x86/include/asm/init.h
@@ -11,8 +11,8 @@ kernel_physical_mapping_init(unsigned long start,
unsigned long page_size_mask);
-extern unsigned long __initdata e820_table_start;
-extern unsigned long __meminitdata e820_table_end;
-extern unsigned long __meminitdata e820_table_top;
+extern unsigned long __initdata pgt_buf_start;
+extern unsigned long __meminitdata pgt_buf_end;
+extern unsigned long __meminitdata pgt_buf_top;
#endif /* _ASM_X86_INIT_32_H */
diff --git a/trunk/arch/x86/include/asm/io_apic.h b/trunk/arch/x86/include/asm/io_apic.h
index f327d386d6cc..c4bd267dfc50 100644
--- a/trunk/arch/x86/include/asm/io_apic.h
+++ b/trunk/arch/x86/include/asm/io_apic.h
@@ -63,17 +63,6 @@ union IO_APIC_reg_03 {
} __attribute__ ((packed)) bits;
};
-enum ioapic_irq_destination_types {
- dest_Fixed = 0,
- dest_LowestPrio = 1,
- dest_SMI = 2,
- dest__reserved_1 = 3,
- dest_NMI = 4,
- dest_INIT = 5,
- dest__reserved_2 = 6,
- dest_ExtINT = 7
-};
-
struct IO_APIC_route_entry {
__u32 vector : 8,
delivery_mode : 3, /* 000: FIXED
@@ -106,6 +95,10 @@ struct IR_IO_APIC_route_entry {
index : 15;
} __attribute__ ((packed));
+#define IOAPIC_AUTO -1
+#define IOAPIC_EDGE 0
+#define IOAPIC_LEVEL 1
+
#ifdef CONFIG_X86_IO_APIC
/*
@@ -150,11 +143,6 @@ extern int timer_through_8259;
#define io_apic_assign_pci_irqs \
(mp_irq_entries && !skip_ioapic_setup && io_apic_irqs)
-extern u8 io_apic_unique_id(u8 id);
-extern int io_apic_get_unique_id(int ioapic, int apic_id);
-extern int io_apic_get_version(int ioapic);
-extern int io_apic_get_redir_entries(int ioapic);
-
struct io_apic_irq_attr;
extern int io_apic_set_pci_routing(struct device *dev, int irq,
struct io_apic_irq_attr *irq_attr);
@@ -162,6 +150,8 @@ void setup_IO_APIC_irq_extra(u32 gsi);
extern void ioapic_and_gsi_init(void);
extern void ioapic_insert_resources(void);
+int io_apic_setup_irq_pin(unsigned int irq, int node, struct io_apic_irq_attr *attr);
+
extern struct IO_APIC_route_entry **alloc_ioapic_entries(void);
extern void free_ioapic_entries(struct IO_APIC_route_entry **ioapic_entries);
extern int save_IO_APIC_setup(struct IO_APIC_route_entry **ioapic_entries);
@@ -186,6 +176,8 @@ extern void __init pre_init_apic_IRQ0(void);
extern void mp_save_irq(struct mpc_intsrc *m);
+extern void disable_ioapic_support(void);
+
#else /* !CONFIG_X86_IO_APIC */
#define io_apic_assign_pci_irqs 0
@@ -199,6 +191,26 @@ static inline int mp_find_ioapic(u32 gsi) { return 0; }
struct io_apic_irq_attr;
static inline int io_apic_set_pci_routing(struct device *dev, int irq,
struct io_apic_irq_attr *irq_attr) { return 0; }
+
+static inline struct IO_APIC_route_entry **alloc_ioapic_entries(void)
+{
+ return NULL;
+}
+
+static inline void free_ioapic_entries(struct IO_APIC_route_entry **ent) { }
+static inline int save_IO_APIC_setup(struct IO_APIC_route_entry **ent)
+{
+ return -ENOMEM;
+}
+
+static inline void mask_IO_APIC_setup(struct IO_APIC_route_entry **ent) { }
+static inline int restore_IO_APIC_setup(struct IO_APIC_route_entry **ent)
+{
+ return -ENOMEM;
+}
+
+static inline void mp_save_irq(struct mpc_intsrc *m) { };
+static inline void disable_ioapic_support(void) { }
#endif
#endif /* _ASM_X86_IO_APIC_H */
diff --git a/trunk/arch/x86/include/asm/ipi.h b/trunk/arch/x86/include/asm/ipi.h
index 0b7228268a63..615fa9061b57 100644
--- a/trunk/arch/x86/include/asm/ipi.h
+++ b/trunk/arch/x86/include/asm/ipi.h
@@ -123,10 +123,6 @@ extern void default_send_IPI_mask_sequence_phys(const struct cpumask *mask,
int vector);
extern void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
int vector);
-extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
- int vector);
-extern void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
- int vector);
/* Avoid include hell */
#define NMI_VECTOR 0x02
@@ -150,6 +146,10 @@ static inline void __default_local_send_IPI_all(int vector)
}
#ifdef CONFIG_X86_32
+extern void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
+ int vector);
+extern void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
+ int vector);
extern void default_send_IPI_mask_logical(const struct cpumask *mask,
int vector);
extern void default_send_IPI_allbutself(int vector);
diff --git a/trunk/arch/x86/include/asm/irq.h b/trunk/arch/x86/include/asm/irq.h
index c704b38c57a2..ba870bb6dd8e 100644
--- a/trunk/arch/x86/include/asm/irq.h
+++ b/trunk/arch/x86/include/asm/irq.h
@@ -10,9 +10,6 @@
#include
#include
-/* Even though we don't support this, supply it to appease OF */
-static inline void irq_dispose_mapping(unsigned int virq) { }
-
static inline int irq_canonicalize(int irq)
{
return ((irq == 2) ? 9 : irq);
diff --git a/trunk/arch/x86/include/asm/irq_controller.h b/trunk/arch/x86/include/asm/irq_controller.h
new file mode 100644
index 000000000000..423bbbddf36d
--- /dev/null
+++ b/trunk/arch/x86/include/asm/irq_controller.h
@@ -0,0 +1,12 @@
+#ifndef __IRQ_CONTROLLER__
+#define __IRQ_CONTROLLER__
+
+struct irq_domain {
+ int (*xlate)(struct irq_domain *h, const u32 *intspec, u32 intsize,
+ u32 *out_hwirq, u32 *out_type);
+ void *priv;
+ struct device_node *controller;
+ struct list_head l;
+};
+
+#endif
diff --git a/trunk/arch/x86/include/asm/irq_vectors.h b/trunk/arch/x86/include/asm/irq_vectors.h
index 6af0894dafb4..6e976ee3b3ef 100644
--- a/trunk/arch/x86/include/asm/irq_vectors.h
+++ b/trunk/arch/x86/include/asm/irq_vectors.h
@@ -1,6 +1,7 @@
#ifndef _ASM_X86_IRQ_VECTORS_H
#define _ASM_X86_IRQ_VECTORS_H
+#include
/*
* Linux IRQ vector layout.
*
@@ -16,8 +17,8 @@
* Vectors 0 ... 31 : system traps and exceptions - hardcoded events
* Vectors 32 ... 127 : device interrupts
* Vector 128 : legacy int80 syscall interface
- * Vectors 129 ... 237 : device interrupts
- * Vectors 238 ... 255 : special interrupts
+ * Vectors 129 ... INVALIDATE_TLB_VECTOR_START-1 : device interrupts
+ * Vectors INVALIDATE_TLB_VECTOR_START ... 255 : special interrupts
*
* 64-bit x86 has per CPU IDT tables, 32-bit has one shared IDT table.
*
@@ -96,37 +97,43 @@
#define THRESHOLD_APIC_VECTOR 0xf9
#define REBOOT_VECTOR 0xf8
-/* f0-f7 used for spreading out TLB flushes: */
-#define INVALIDATE_TLB_VECTOR_END 0xf7
-#define INVALIDATE_TLB_VECTOR_START 0xf0
-#define NUM_INVALIDATE_TLB_VECTORS 8
-
-/*
- * Local APIC timer IRQ vector is on a different priority level,
- * to work around the 'lost local interrupt if more than 2 IRQ
- * sources per level' errata.
- */
-#define LOCAL_TIMER_VECTOR 0xef
-
/*
* Generic system vector for platform specific use
*/
-#define X86_PLATFORM_IPI_VECTOR 0xed
+#define X86_PLATFORM_IPI_VECTOR 0xf7
/*
* IRQ work vector:
*/
-#define IRQ_WORK_VECTOR 0xec
+#define IRQ_WORK_VECTOR 0xf6
-#define UV_BAU_MESSAGE 0xea
+#define UV_BAU_MESSAGE 0xf5
/*
* Self IPI vector for machine checks
*/
-#define MCE_SELF_VECTOR 0xeb
+#define MCE_SELF_VECTOR 0xf4
/* Xen vector callback to receive events in a HVM domain */
-#define XEN_HVM_EVTCHN_CALLBACK 0xe9
+#define XEN_HVM_EVTCHN_CALLBACK 0xf3
+
+/*
+ * Local APIC timer IRQ vector is on a different priority level,
+ * to work around the 'lost local interrupt if more than 2 IRQ
+ * sources per level' errata.
+ */
+#define LOCAL_TIMER_VECTOR 0xef
+
+/* up to 32 vectors used for spreading out TLB flushes: */
+#if NR_CPUS <= 32
+# define NUM_INVALIDATE_TLB_VECTORS (NR_CPUS)
+#else
+# define NUM_INVALIDATE_TLB_VECTORS (32)
+#endif
+
+#define INVALIDATE_TLB_VECTOR_END (0xee)
+#define INVALIDATE_TLB_VECTOR_START \
+ (INVALIDATE_TLB_VECTOR_END-NUM_INVALIDATE_TLB_VECTORS+1)
#define NR_VECTORS 256
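/*
 * Editor's worked example (illustrative): with NR_CPUS = 8,
 * NUM_INVALIDATE_TLB_VECTORS = 8, so
 *	INVALIDATE_TLB_VECTOR_START = 0xee - 8 + 1 = 0xe7
 * and vectors 0xe7..0xee carry the TLB-flush IPIs, 0xef stays the
 * local APIC timer, and the special vectors defined above keep
 * 0xf3..0xf9. With NR_CPUS >= 32 the block grows to its 32-vector
 * maximum, 0xcf..0xee.
 */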
diff --git a/trunk/arch/x86/include/asm/kdebug.h b/trunk/arch/x86/include/asm/kdebug.h
index ca242d35e873..518bbbb9ee59 100644
--- a/trunk/arch/x86/include/asm/kdebug.h
+++ b/trunk/arch/x86/include/asm/kdebug.h
@@ -13,7 +13,6 @@ enum die_val {
DIE_PANIC,
DIE_NMI,
DIE_DIE,
- DIE_NMIWATCHDOG,
DIE_KERNELDEBUG,
DIE_TRAP,
DIE_GPF,
diff --git a/trunk/arch/x86/include/asm/mpspec.h b/trunk/arch/x86/include/asm/mpspec.h
index 0c90dd9f0505..9c7d95f6174b 100644
--- a/trunk/arch/x86/include/asm/mpspec.h
+++ b/trunk/arch/x86/include/asm/mpspec.h
@@ -25,7 +25,6 @@ extern int pic_mode;
#define MAX_IRQ_SOURCES 256
extern unsigned int def_to_bigsmp;
-extern u8 apicid_2_node[];
#ifdef CONFIG_X86_NUMAQ
extern int mp_bus_id_to_node[MAX_MP_BUSSES];
@@ -33,8 +32,6 @@ extern int mp_bus_id_to_local[MAX_MP_BUSSES];
extern int quad_local_to_mp_bus_id [NR_CPUS/4][4];
#endif
-#define MAX_APICID 256
-
#else /* CONFIG_X86_64: */
#define MAX_MP_BUSSES 256
diff --git a/trunk/arch/x86/include/asm/msr-index.h b/trunk/arch/x86/include/asm/msr-index.h
index 4d0dfa0d998e..823d48223400 100644
--- a/trunk/arch/x86/include/asm/msr-index.h
+++ b/trunk/arch/x86/include/asm/msr-index.h
@@ -36,6 +36,11 @@
#define MSR_IA32_PERFCTR1 0x000000c2
#define MSR_FSB_FREQ 0x000000cd
+#define MSR_NHM_SNB_PKG_CST_CFG_CTL 0x000000e2
+#define NHM_C3_AUTO_DEMOTE (1UL << 25)
+#define NHM_C1_AUTO_DEMOTE (1UL << 26)
+#define ATM_LNC_C6_AUTO_DEMOTE (1UL << 25)
+
#define MSR_MTRRcap 0x000000fe
#define MSR_IA32_BBL_CR_CTL 0x00000119
@@ -47,6 +52,9 @@
#define MSR_IA32_MCG_STATUS 0x0000017a
#define MSR_IA32_MCG_CTL 0x0000017b
+#define MSR_OFFCORE_RSP_0 0x000001a6
+#define MSR_OFFCORE_RSP_1 0x000001a7
+
#define MSR_IA32_PEBS_ENABLE 0x000003f1
#define MSR_IA32_DS_AREA 0x00000600
#define MSR_IA32_PERF_CAPABILITIES 0x00000345
diff --git a/trunk/arch/x86/include/asm/nmi.h b/trunk/arch/x86/include/asm/nmi.h
index c76f5b92b840..07f46016d3ff 100644
--- a/trunk/arch/x86/include/asm/nmi.h
+++ b/trunk/arch/x86/include/asm/nmi.h
@@ -7,7 +7,6 @@
#ifdef CONFIG_X86_LOCAL_APIC
-extern void die_nmi(char *str, struct pt_regs *regs, int do_panic);
extern int avail_to_resrv_perfctr_nmi_bit(unsigned int);
extern int reserve_perfctr_nmi(unsigned int);
extern void release_perfctr_nmi(unsigned int);
diff --git a/trunk/arch/x86/include/asm/numa.h b/trunk/arch/x86/include/asm/numa.h
index 27da400d3138..3d4dab43c994 100644
--- a/trunk/arch/x86/include/asm/numa.h
+++ b/trunk/arch/x86/include/asm/numa.h
@@ -1,5 +1,57 @@
+#ifndef _ASM_X86_NUMA_H
+#define _ASM_X86_NUMA_H
+
+#include
+#include
+
+#ifdef CONFIG_NUMA
+
+#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
+
+/*
+ * __apicid_to_node[] stores the raw mapping between physical apicid and
+ * node and is used to initialize cpu_to_node mapping.
+ *
+ * The mapping may be overridden by apic->numa_cpu_node() on 32bit and thus
+ * should be accessed by the accessors - set_apicid_to_node() and
+ * numa_cpu_node().
+ */
+extern s16 __apicid_to_node[MAX_LOCAL_APIC];
+
+static inline void set_apicid_to_node(int apicid, s16 node)
+{
+ __apicid_to_node[apicid] = node;
+}
+#else /* CONFIG_NUMA */
+static inline void set_apicid_to_node(int apicid, s16 node)
+{
+}
+#endif /* CONFIG_NUMA */
+
#ifdef CONFIG_X86_32
# include "numa_32.h"
#else
# include "numa_64.h"
#endif
+
+#ifdef CONFIG_NUMA
+extern void __cpuinit numa_set_node(int cpu, int node);
+extern void __cpuinit numa_clear_node(int cpu);
+extern void __init numa_init_array(void);
+extern void __init init_cpu_to_node(void);
+extern void __cpuinit numa_add_cpu(int cpu);
+extern void __cpuinit numa_remove_cpu(int cpu);
+#else /* CONFIG_NUMA */
+static inline void numa_set_node(int cpu, int node) { }
+static inline void numa_clear_node(int cpu) { }
+static inline void numa_init_array(void) { }
+static inline void init_cpu_to_node(void) { }
+static inline void numa_add_cpu(int cpu) { }
+static inline void numa_remove_cpu(int cpu) { }
+#endif /* CONFIG_NUMA */
+
+#ifdef CONFIG_DEBUG_PER_CPU_MAPS
+struct cpumask __cpuinit *debug_cpumask_set_cpu(int cpu, int enable);
+#endif
+
+#endif /* _ASM_X86_NUMA_H */
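/*
 * Editor's illustrative sketch, not part of the patch: affinity parsers
 * (SRAT, AMD northbridge, ...) are expected to record the apicid -> node
 * mapping through the accessor instead of poking a per-arch array; the
 * function below is hypothetical.
 */
static void __init example_record_node(int apicid, int node)
{
	if (apicid >= 0 && apicid < MAX_LOCAL_APIC)
		set_apicid_to_node(apicid, node);
}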
diff --git a/trunk/arch/x86/include/asm/numa_32.h b/trunk/arch/x86/include/asm/numa_32.h
index b0ef2b449a9d..c6beed1ef103 100644
--- a/trunk/arch/x86/include/asm/numa_32.h
+++ b/trunk/arch/x86/include/asm/numa_32.h
@@ -4,7 +4,12 @@
extern int numa_off;
extern int pxm_to_nid(int pxm);
-extern void numa_remove_cpu(int cpu);
+
+#ifdef CONFIG_NUMA
+extern int __cpuinit numa_cpu_node(int cpu);
+#else /* CONFIG_NUMA */
+static inline int numa_cpu_node(int cpu) { return NUMA_NO_NODE; }
+#endif /* CONFIG_NUMA */
#ifdef CONFIG_HIGHMEM
extern void set_highmem_pages_init(void);
diff --git a/trunk/arch/x86/include/asm/numa_64.h b/trunk/arch/x86/include/asm/numa_64.h
index 0493be39607c..344eb1790b46 100644
--- a/trunk/arch/x86/include/asm/numa_64.h
+++ b/trunk/arch/x86/include/asm/numa_64.h
@@ -2,23 +2,16 @@
#define _ASM_X86_NUMA_64_H
#include
-#include
struct bootnode {
u64 start;
u64 end;
};
-extern int compute_hash_shift(struct bootnode *nodes, int numblks,
- int *nodeids);
-
#define ZONE_ALIGN (1UL << (MAX_ORDER+PAGE_SHIFT))
-extern void numa_init_array(void);
extern int numa_off;
-extern s16 apicid_to_node[MAX_LOCAL_APIC];
-
extern unsigned long numa_free_all_bootmem(void);
extern void setup_node_bootmem(int nodeid, unsigned long start,
unsigned long end);
@@ -31,11 +24,11 @@ extern void setup_node_bootmem(int nodeid, unsigned long start,
*/
#define NODE_MIN_SIZE (4*1024*1024)
-extern void __init init_cpu_to_node(void);
-extern void __cpuinit numa_set_node(int cpu, int node);
-extern void __cpuinit numa_clear_node(int cpu);
-extern void __cpuinit numa_add_cpu(int cpu);
-extern void __cpuinit numa_remove_cpu(int cpu);
+extern nodemask_t numa_nodes_parsed __initdata;
+
+extern int __cpuinit numa_cpu_node(int cpu);
+extern int __init numa_add_memblk(int nodeid, u64 start, u64 end);
+extern void __init numa_set_distance(int from, int to, int distance);
#ifdef CONFIG_NUMA_EMU
#define FAKE_NODE_MIN_SIZE ((u64)32 << 20)
@@ -43,11 +36,7 @@ extern void __cpuinit numa_remove_cpu(int cpu);
void numa_emu_cmdline(char *);
#endif /* CONFIG_NUMA_EMU */
#else
-static inline void init_cpu_to_node(void) { }
-static inline void numa_set_node(int cpu, int node) { }
-static inline void numa_clear_node(int cpu) { }
-static inline void numa_add_cpu(int cpu, int node) { }
-static inline void numa_remove_cpu(int cpu) { }
+static inline int numa_cpu_node(int cpu) { return NUMA_NO_NODE; }
#endif
#endif /* _ASM_X86_NUMA_64_H */
diff --git a/trunk/arch/x86/include/asm/olpc_ofw.h b/trunk/arch/x86/include/asm/olpc_ofw.h
index 641988efe063..c5d3a5abbb9f 100644
--- a/trunk/arch/x86/include/asm/olpc_ofw.h
+++ b/trunk/arch/x86/include/asm/olpc_ofw.h
@@ -6,7 +6,7 @@
#define OLPC_OFW_SIG 0x2057464F /* aka "OFW " */
-#ifdef CONFIG_OLPC_OPENFIRMWARE
+#ifdef CONFIG_OLPC
extern bool olpc_ofw_is_installed(void);
@@ -26,19 +26,15 @@ extern void setup_olpc_ofw_pgd(void);
/* check if OFW was detected during boot */
extern bool olpc_ofw_present(void);
-#else /* !CONFIG_OLPC_OPENFIRMWARE */
-
-static inline bool olpc_ofw_is_installed(void) { return false; }
+#else /* !CONFIG_OLPC */
static inline void olpc_ofw_detect(void) { }
static inline void setup_olpc_ofw_pgd(void) { }
-static inline bool olpc_ofw_present(void) { return false; }
-
-#endif /* !CONFIG_OLPC_OPENFIRMWARE */
+#endif /* !CONFIG_OLPC */
-#ifdef CONFIG_OLPC_OPENFIRMWARE_DT
+#ifdef CONFIG_OF_PROMTREE
extern void olpc_dt_build_devicetree(void);
#else
static inline void olpc_dt_build_devicetree(void) { }
-#endif /* CONFIG_OLPC_OPENFIRMWARE_DT */
+#endif
#endif /* _ASM_X86_OLPC_OFW_H */
diff --git a/trunk/arch/x86/include/asm/page_types.h b/trunk/arch/x86/include/asm/page_types.h
index 1df66211fd1b..bce688d54c12 100644
--- a/trunk/arch/x86/include/asm/page_types.h
+++ b/trunk/arch/x86/include/asm/page_types.h
@@ -2,6 +2,7 @@
#define _ASM_X86_PAGE_DEFS_H
#include
+#include
/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT 12
@@ -45,11 +46,15 @@ extern int devmem_is_allowed(unsigned long pagenr);
extern unsigned long max_low_pfn_mapped;
extern unsigned long max_pfn_mapped;
+static inline phys_addr_t get_max_mapped(void)
+{
+ return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
+}
+
extern unsigned long init_memory_mapping(unsigned long start,
unsigned long end);
-extern void initmem_init(unsigned long start_pfn, unsigned long end_pfn,
- int acpi, int k8);
+extern void initmem_init(void);
extern void free_initmem(void);
#endif /* !__ASSEMBLY__ */
diff --git a/trunk/arch/x86/include/asm/percpu.h b/trunk/arch/x86/include/asm/percpu.h
index 7e172955ee57..a09e1f052d84 100644
--- a/trunk/arch/x86/include/asm/percpu.h
+++ b/trunk/arch/x86/include/asm/percpu.h
@@ -451,6 +451,26 @@ do { \
#define irqsafe_cpu_cmpxchg_4(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
#endif /* !CONFIG_M386 */
+#ifdef CONFIG_X86_CMPXCHG64
+#define percpu_cmpxchg8b_double(pcp1, o1, o2, n1, n2) \
+({ \
+ char __ret; \
+ typeof(o1) __o1 = o1; \
+ typeof(o1) __n1 = n1; \
+ typeof(o2) __o2 = o2; \
+ typeof(o2) __n2 = n2; \
+ typeof(o2) __dummy = n2; \
+ asm volatile("cmpxchg8b "__percpu_arg(1)"\n\tsetz %0\n\t" \
+ : "=a"(__ret), "=m" (pcp1), "=d"(__dummy) \
+ : "b"(__n1), "c"(__n2), "a"(__o1), "d"(__o2)); \
+ __ret; \
+})
+
+#define __this_cpu_cmpxchg_double_4(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg8b_double(pcp1, o1, o2, n1, n2)
+#define this_cpu_cmpxchg_double_4(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg8b_double(pcp1, o1, o2, n1, n2)
+#define irqsafe_cpu_cmpxchg_double_4(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg8b_double(pcp1, o1, o2, n1, n2)
+#endif /* CONFIG_X86_CMPXCHG64 */
+
/*
* Per cpu atomic 64 bit operations are only available under 64 bit.
* 32 bit must fall back to generic operations.
@@ -480,6 +500,34 @@ do { \
#define irqsafe_cpu_xor_8(pcp, val) percpu_to_op("xor", (pcp), val)
#define irqsafe_cpu_xchg_8(pcp, nval) percpu_xchg_op(pcp, nval)
#define irqsafe_cpu_cmpxchg_8(pcp, oval, nval) percpu_cmpxchg_op(pcp, oval, nval)
+
+/*
+ * Pretty complex macro to generate the cmpxchg16b instruction. The instruction
+ * is not supported on early AMD64 processors so we must be able to emulate
+ * it in software. The address used in the cmpxchg16b instruction must be
+ * aligned to a 16 byte boundary.
+ */
+#define percpu_cmpxchg16b_double(pcp1, o1, o2, n1, n2) \
+({ \
+ char __ret; \
+ typeof(o1) __o1 = o1; \
+ typeof(o1) __n1 = n1; \
+ typeof(o2) __o2 = o2; \
+ typeof(o2) __n2 = n2; \
+ typeof(o2) __dummy; \
+ alternative_io("call this_cpu_cmpxchg16b_emu\n\t" P6_NOP4, \
+ "cmpxchg16b %%gs:(%%rsi)\n\tsetz %0\n\t", \
+ X86_FEATURE_CX16, \
+ ASM_OUTPUT2("=a"(__ret), "=d"(__dummy)), \
+ "S" (&pcp1), "b"(__n1), "c"(__n2), \
+ "a"(__o1), "d"(__o2)); \
+ __ret; \
+})
+
+#define __this_cpu_cmpxchg_double_8(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg16b_double(pcp1, o1, o2, n1, n2)
+#define this_cpu_cmpxchg_double_8(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg16b_double(pcp1, o1, o2, n1, n2)
+#define irqsafe_cpu_cmpxchg_double_8(pcp1, pcp2, o1, o2, n1, n2) percpu_cmpxchg16b_double(pcp1, o1, o2, n1, n2)
+
#endif
/* This is not atomic against other CPUs -- CPU preemption needs to be off */
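/*
 * Editor's illustrative sketch, not part of the patch: both words must be
 * adjacent and naturally aligned as a pair, which is why the usual pattern
 * keeps them in one per-cpu struct. Names below are hypothetical; the
 * generic this_cpu_cmpxchg_double() wrapper expands to the _4/_8
 * operations defined above and returns nonzero on success.
 */
struct example_pair {
	unsigned long ptr;
	unsigned long seq;
} __aligned(2 * sizeof(unsigned long));

static DEFINE_PER_CPU(struct example_pair, example_pair);

static bool example_advance(unsigned long old_ptr, unsigned long old_seq,
			    unsigned long new_ptr, unsigned long new_seq)
{
	return this_cpu_cmpxchg_double(example_pair.ptr, example_pair.seq,
				       old_ptr, old_seq, new_ptr, new_seq);
}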
diff --git a/trunk/arch/x86/include/asm/perf_event_p4.h b/trunk/arch/x86/include/asm/perf_event_p4.h
index e2f6a99f14ab..cc29086e30cd 100644
--- a/trunk/arch/x86/include/asm/perf_event_p4.h
+++ b/trunk/arch/x86/include/asm/perf_event_p4.h
@@ -22,6 +22,7 @@
#define ARCH_P4_CNTRVAL_BITS (40)
#define ARCH_P4_CNTRVAL_MASK ((1ULL << ARCH_P4_CNTRVAL_BITS) - 1)
+#define ARCH_P4_UNFLAGGED_BIT ((1ULL) << (ARCH_P4_CNTRVAL_BITS - 1))
#define P4_ESCR_EVENT_MASK 0x7e000000U
#define P4_ESCR_EVENT_SHIFT 25
diff --git a/trunk/arch/x86/include/asm/processor.h b/trunk/arch/x86/include/asm/processor.h
index 45636cefa186..4c25ab48257b 100644
--- a/trunk/arch/x86/include/asm/processor.h
+++ b/trunk/arch/x86/include/asm/processor.h
@@ -94,10 +94,6 @@ struct cpuinfo_x86 {
int x86_cache_alignment; /* In bytes */
int x86_power;
unsigned long loops_per_jiffy;
-#ifdef CONFIG_SMP
- /* cpus sharing the last level cache: */
- cpumask_var_t llc_shared_map;
-#endif
/* cpuid returned max cores value: */
u16 x86_max_cores;
u16 apicid;
diff --git a/trunk/arch/x86/include/asm/prom.h b/trunk/arch/x86/include/asm/prom.h
index b4ec95f07518..971e0b46446e 100644
--- a/trunk/arch/x86/include/asm/prom.h
+++ b/trunk/arch/x86/include/asm/prom.h
@@ -1 +1,69 @@
-/* dummy prom.h; here to make linux/of.h's #includes happy */
+/*
+ * Definitions for Device tree / OpenFirmware handling on X86
+ *
+ * based on arch/powerpc/include/asm/prom.h which is
+ * Copyright (C) 1996-2005 Paul Mackerras.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _ASM_X86_PROM_H
+#define _ASM_X86_PROM_H
+#ifndef __ASSEMBLY__
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_OF
+extern int of_ioapic;
+extern u64 initial_dtb;
+extern void add_dtb(u64 data);
+extern void x86_add_irq_domains(void);
+void __cpuinit x86_of_pci_init(void);
+void x86_dtb_init(void);
+
+static inline struct device_node *pci_device_to_OF_node(struct pci_dev *pdev)
+{
+ return pdev ? pdev->dev.of_node : NULL;
+}
+
+static inline struct device_node *pci_bus_to_OF_node(struct pci_bus *bus)
+{
+ return pci_device_to_OF_node(bus->self);
+}
+
+#else
+static inline void add_dtb(u64 data) { }
+static inline void x86_add_irq_domains(void) { }
+static inline void x86_of_pci_init(void) { }
+static inline void x86_dtb_init(void) { }
+#define of_ioapic 0
+#endif
+
+extern char cmd_line[COMMAND_LINE_SIZE];
+
+#define pci_address_to_pio pci_address_to_pio
+unsigned long pci_address_to_pio(phys_addr_t addr);
+
+/**
+ * irq_dispose_mapping - Unmap an interrupt
+ * @virq: linux virq number of the interrupt to unmap
+ *
+ * FIXME: We really should implement proper virq handling like power,
+ * but that's going to be major surgery.
+ */
+static inline void irq_dispose_mapping(unsigned int virq) { }
+
+#define HAVE_ARCH_DEVTREE_FIXUPS
+
+#endif /* __ASSEMBLY__ */
+#endif
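
The helpers above let x86 PCI drivers reach the device-tree node that the CE4100 PCI
setup code attaches to a pci_dev. A hedged sketch of a driver probe routine using
pci_device_to_OF_node() follows; the function name and the "vendor,led-count" property
are invented for illustration.

	static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		struct device_node *np = pci_device_to_OF_node(pdev);
		const __be32 *val;

		if (np) {
			/* read an (invented) property from the attached node */
			val = of_get_property(np, "vendor,led-count", NULL);
			if (val)
				dev_info(&pdev->dev, "led-count: %u\n",
					 be32_to_cpup(val));
		}
		return pci_enable_device(pdev);
	}
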
diff --git a/trunk/arch/x86/include/asm/rwsem.h b/trunk/arch/x86/include/asm/rwsem.h
index d1e41b0f9b60..df4cd32b4cc6 100644
--- a/trunk/arch/x86/include/asm/rwsem.h
+++ b/trunk/arch/x86/include/asm/rwsem.h
@@ -37,26 +37,9 @@
#endif
#ifdef __KERNEL__
-
-#include
-#include
-#include
#include
-struct rwsem_waiter;
-
-extern asmregparm struct rw_semaphore *
- rwsem_down_read_failed(struct rw_semaphore *sem);
-extern asmregparm struct rw_semaphore *
- rwsem_down_write_failed(struct rw_semaphore *sem);
-extern asmregparm struct rw_semaphore *
- rwsem_wake(struct rw_semaphore *);
-extern asmregparm struct rw_semaphore *
- rwsem_downgrade_wake(struct rw_semaphore *sem);
-
/*
- * the semaphore definition
- *
* The bias values and the counter type limits the number of
* potential readers/writers to 32767 for 32 bits and 2147483647
* for 64 bits.
@@ -74,43 +57,6 @@ extern asmregparm struct rw_semaphore *
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-typedef signed long rwsem_count_t;
-
-struct rw_semaphore {
- rwsem_count_t count;
- spinlock_t wait_lock;
- struct list_head wait_list;
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
- struct lockdep_map dep_map;
-#endif
-};
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
-#else
-# define __RWSEM_DEP_MAP_INIT(lockname)
-#endif
-
-
-#define __RWSEM_INITIALIZER(name) \
-{ \
- RWSEM_UNLOCKED_VALUE, __SPIN_LOCK_UNLOCKED((name).wait_lock), \
- LIST_HEAD_INIT((name).wait_list) __RWSEM_DEP_MAP_INIT(name) \
-}
-
-#define DECLARE_RWSEM(name) \
- struct rw_semaphore name = __RWSEM_INITIALIZER(name)
-
-extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
- struct lock_class_key *key);
-
-#define init_rwsem(sem) \
-do { \
- static struct lock_class_key __key; \
- \
- __init_rwsem((sem), #sem, &__key); \
-} while (0)
-
/*
* lock for reading
*/
@@ -133,7 +79,7 @@ static inline void __down_read(struct rw_semaphore *sem)
*/
static inline int __down_read_trylock(struct rw_semaphore *sem)
{
- rwsem_count_t result, tmp;
+ long result, tmp;
asm volatile("# beginning __down_read_trylock\n\t"
" mov %0,%1\n\t"
"1:\n\t"
@@ -155,7 +101,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
*/
static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
{
- rwsem_count_t tmp;
+ long tmp;
asm volatile("# beginning down_write\n\t"
LOCK_PREFIX " xadd %1,(%2)\n\t"
/* adds 0xffff0001, returns the old value */
@@ -180,9 +126,8 @@ static inline void __down_write(struct rw_semaphore *sem)
*/
static inline int __down_write_trylock(struct rw_semaphore *sem)
{
- rwsem_count_t ret = cmpxchg(&sem->count,
- RWSEM_UNLOCKED_VALUE,
- RWSEM_ACTIVE_WRITE_BIAS);
+ long ret = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
+ RWSEM_ACTIVE_WRITE_BIAS);
if (ret == RWSEM_UNLOCKED_VALUE)
return 1;
return 0;
@@ -193,7 +138,7 @@ static inline int __down_write_trylock(struct rw_semaphore *sem)
*/
static inline void __up_read(struct rw_semaphore *sem)
{
- rwsem_count_t tmp;
+ long tmp;
asm volatile("# beginning __up_read\n\t"
LOCK_PREFIX " xadd %1,(%2)\n\t"
/* subtracts 1, returns the old value */
@@ -211,7 +156,7 @@ static inline void __up_read(struct rw_semaphore *sem)
*/
static inline void __up_write(struct rw_semaphore *sem)
{
- rwsem_count_t tmp;
+ long tmp;
asm volatile("# beginning __up_write\n\t"
LOCK_PREFIX " xadd %1,(%2)\n\t"
/* subtracts 0xffff0001, returns the old value */
@@ -247,8 +192,7 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
/*
* implement atomic add functionality
*/
-static inline void rwsem_atomic_add(rwsem_count_t delta,
- struct rw_semaphore *sem)
+static inline void rwsem_atomic_add(long delta, struct rw_semaphore *sem)
{
asm volatile(LOCK_PREFIX _ASM_ADD "%1,%0"
: "+m" (sem->count)
@@ -258,10 +202,9 @@ static inline void rwsem_atomic_add(rwsem_count_t delta,
/*
* implement exchange and add functionality
*/
-static inline rwsem_count_t rwsem_atomic_update(rwsem_count_t delta,
- struct rw_semaphore *sem)
+static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
{
- rwsem_count_t tmp = delta;
+ long tmp = delta;
asm volatile(LOCK_PREFIX "xadd %0,%1"
: "+r" (tmp), "+m" (sem->count)
@@ -270,10 +213,5 @@ static inline rwsem_count_t rwsem_atomic_update(rwsem_count_t delta,
return tmp + delta;
}
-static inline int rwsem_is_locked(struct rw_semaphore *sem)
-{
- return (sem->count != 0);
-}
-
#endif /* __KERNEL__ */
#endif /* _ASM_X86_RWSEM_H */
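
With the x86-private rwsem_count_t and struct rw_semaphore definitions gone (the generic
versions in linux/rwsem.h are used instead), the count is a plain long. As a reading aid,
this is a C-level sketch of what the __down_read_trylock() inline assembly above does;
it is an illustration, not code added by this patch.

	static inline int down_read_trylock_sketch(struct rw_semaphore *sem)
	{
		long old = sem->count;

		for (;;) {
			long new = old + RWSEM_ACTIVE_READ_BIAS;
			long seen;

			if (new <= 0)		/* a writer is active or waiting: give up */
				return 0;
			seen = cmpxchg(&sem->count, old, new);
			if (seen == old)
				return 1;	/* read lock acquired */
			old = seen;		/* lost a race, retry with the fresh value */
		}
	}
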
diff --git a/trunk/arch/x86/include/asm/smp.h b/trunk/arch/x86/include/asm/smp.h
index 1f4695136776..73b11bc0ae6f 100644
--- a/trunk/arch/x86/include/asm/smp.h
+++ b/trunk/arch/x86/include/asm/smp.h
@@ -17,12 +17,24 @@
#endif
#include
#include
+#include
extern int smp_num_siblings;
extern unsigned int num_processors;
+static inline bool cpu_has_ht_siblings(void)
+{
+ bool has_siblings = false;
+#ifdef CONFIG_SMP
+ has_siblings = cpu_has_ht && smp_num_siblings > 1;
+#endif
+ return has_siblings;
+}
+
DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_map);
DECLARE_PER_CPU(cpumask_var_t, cpu_core_map);
+/* cpus sharing the last level cache: */
+DECLARE_PER_CPU(cpumask_var_t, cpu_llc_shared_map);
DECLARE_PER_CPU(u16, cpu_llc_id);
DECLARE_PER_CPU(int, cpu_number);
@@ -36,8 +48,16 @@ static inline struct cpumask *cpu_core_mask(int cpu)
return per_cpu(cpu_core_map, cpu);
}
+static inline struct cpumask *cpu_llc_shared_mask(int cpu)
+{
+ return per_cpu(cpu_llc_shared_map, cpu);
+}
+
DECLARE_EARLY_PER_CPU(u16, x86_cpu_to_apicid);
DECLARE_EARLY_PER_CPU(u16, x86_bios_cpu_apicid);
+#if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_X86_32)
+DECLARE_EARLY_PER_CPU(int, x86_cpu_to_logical_apicid);
+#endif
/* Static state in head.S used to set up a CPU */
extern unsigned long stack_start; /* Initial stack pointer address */
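
cpu_llc_shared_map is now a straight per-cpu cpumask (it was removed from struct
cpuinfo_x86 earlier in this series) and is reached through cpu_llc_shared_mask().
A small sketch of a consumer, assuming nothing beyond the accessors declared above;
the helper name is made up.

	/* Count online CPUs that share the last-level cache with @cpu. */
	static int llc_sibling_count(int cpu)
	{
		int sibling, count = 0;

		for_each_cpu(sibling, cpu_llc_shared_mask(cpu))
			if (cpu_online(sibling))
				count++;
		return count;
	}
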
diff --git a/trunk/arch/x86/include/asm/smpboot_hooks.h b/trunk/arch/x86/include/asm/smpboot_hooks.h
index 6c22bf353f26..725b77831993 100644
--- a/trunk/arch/x86/include/asm/smpboot_hooks.h
+++ b/trunk/arch/x86/include/asm/smpboot_hooks.h
@@ -34,7 +34,7 @@ static inline void smpboot_restore_warm_reset_vector(void)
*/
CMOS_WRITE(0, 0xf);
- *((volatile long *)phys_to_virt(apic->trampoline_phys_low)) = 0;
+ *((volatile u32 *)phys_to_virt(apic->trampoline_phys_low)) = 0;
}
static inline void __init smpboot_setup_io_apic(void)
diff --git a/trunk/arch/x86/include/asm/system.h b/trunk/arch/x86/include/asm/system.h
index 33ecc3ea8782..12569e691ce3 100644
--- a/trunk/arch/x86/include/asm/system.h
+++ b/trunk/arch/x86/include/asm/system.h
@@ -98,8 +98,6 @@ do { \
*/
#define HAVE_DISABLE_HLT
#else
-#define __SAVE(reg, offset) "movq %%" #reg ",(14-" #offset ")*8(%%rsp)\n\t"
-#define __RESTORE(reg, offset) "movq (14-" #offset ")*8(%%rsp),%%" #reg "\n\t"
/* frame pointer must be last for get_wchan */
#define SAVE_CONTEXT "pushf ; pushq %%rbp ; movq %%rsi,%%rbp\n\t"
diff --git a/trunk/arch/x86/include/asm/topology.h b/trunk/arch/x86/include/asm/topology.h
index 21899cc31e52..910a7084f7f2 100644
--- a/trunk/arch/x86/include/asm/topology.h
+++ b/trunk/arch/x86/include/asm/topology.h
@@ -47,21 +47,6 @@
#include
-#ifdef CONFIG_X86_32
-
-/* Mappings between logical cpu number and node number */
-extern int cpu_to_node_map[];
-
-/* Returns the number of the node containing CPU 'cpu' */
-static inline int __cpu_to_node(int cpu)
-{
- return cpu_to_node_map[cpu];
-}
-#define early_cpu_to_node __cpu_to_node
-#define cpu_to_node __cpu_to_node
-
-#else /* CONFIG_X86_64 */
-
/* Mappings between logical cpu number and node number */
DECLARE_EARLY_PER_CPU(int, x86_cpu_to_node_map);
@@ -84,8 +69,6 @@ static inline int early_cpu_to_node(int cpu)
#endif /* !CONFIG_DEBUG_PER_CPU_MAPS */
-#endif /* CONFIG_X86_64 */
-
/* Mappings between node number and cpus on that node. */
extern cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
@@ -155,7 +138,7 @@ extern unsigned long node_remap_size[];
.balance_interval = 1, \
}
-#ifdef CONFIG_X86_64_ACPI_NUMA
+#ifdef CONFIG_X86_64
extern int __node_distance(int, int);
#define node_distance(a, b) __node_distance(a, b)
#endif
diff --git a/trunk/arch/x86/include/asm/unistd_32.h b/trunk/arch/x86/include/asm/unistd_32.h
index b766a5e8ba0e..ffaf183c619a 100644
--- a/trunk/arch/x86/include/asm/unistd_32.h
+++ b/trunk/arch/x86/include/asm/unistd_32.h
@@ -346,10 +346,13 @@
#define __NR_fanotify_init 338
#define __NR_fanotify_mark 339
#define __NR_prlimit64 340
+#define __NR_name_to_handle_at 341
+#define __NR_open_by_handle_at 342
+#define __NR_clock_adjtime 343
#ifdef __KERNEL__
-#define NR_syscalls 341
+#define NR_syscalls 344
#define __ARCH_WANT_IPC_PARSE_VERSION
#define __ARCH_WANT_OLD_READDIR
diff --git a/trunk/arch/x86/include/asm/unistd_64.h b/trunk/arch/x86/include/asm/unistd_64.h
index 363e9b8a715b..5466bea670e7 100644
--- a/trunk/arch/x86/include/asm/unistd_64.h
+++ b/trunk/arch/x86/include/asm/unistd_64.h
@@ -669,6 +669,12 @@ __SYSCALL(__NR_fanotify_init, sys_fanotify_init)
__SYSCALL(__NR_fanotify_mark, sys_fanotify_mark)
#define __NR_prlimit64 302
__SYSCALL(__NR_prlimit64, sys_prlimit64)
+#define __NR_name_to_handle_at 303
+__SYSCALL(__NR_name_to_handle_at, sys_name_to_handle_at)
+#define __NR_open_by_handle_at 304
+__SYSCALL(__NR_open_by_handle_at, sys_open_by_handle_at)
+#define __NR_clock_adjtime 305
+__SYSCALL(__NR_clock_adjtime, sys_clock_adjtime)
#ifndef __NO_STUBS
#define __ARCH_WANT_OLD_READDIR
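
Both syscall tables now carry name_to_handle_at, open_by_handle_at and clock_adjtime.
The snippet below is a userspace sketch that calls name_to_handle_at() by number on
x86-64; the local struct mirrors struct file_handle, and it assumes the C library does
not yet export a wrapper, which was the common case when these numbers were assigned.

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_name_to_handle_at
	#define __NR_name_to_handle_at 303	/* x86-64 number from the hunk above */
	#endif

	struct my_file_handle {			/* mirrors struct file_handle */
		unsigned int handle_bytes;
		int handle_type;
		unsigned char f_handle[128];
	};

	int main(int argc, char **argv)
	{
		struct my_file_handle fh = { .handle_bytes = sizeof(fh.f_handle) };
		int mount_id;

		if (argc < 2)
			return 1;
		if (syscall(__NR_name_to_handle_at, AT_FDCWD, argv[1],
			    &fh, &mount_id, 0) < 0) {
			perror("name_to_handle_at");
			return 1;
		}
		printf("%u handle bytes, type %d, mount id %d\n",
		       fh.handle_bytes, fh.handle_type, mount_id);
		return 0;
	}
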
diff --git a/trunk/arch/x86/include/asm/uv/uv_bau.h b/trunk/arch/x86/include/asm/uv/uv_bau.h
index ce1d54c8a433..3e094af443c3 100644
--- a/trunk/arch/x86/include/asm/uv/uv_bau.h
+++ b/trunk/arch/x86/include/asm/uv/uv_bau.h
@@ -176,7 +176,7 @@ struct bau_msg_payload {
struct bau_msg_header {
unsigned int dest_subnodeid:6; /* must be 0x10, for the LB */
/* bits 5:0 */
- unsigned int base_dest_nodeid:15; /* nasid (pnode<<1) of */
+ unsigned int base_dest_nodeid:15; /* nasid of the */
/* bits 20:6 */ /* first bit in uvhub map */
unsigned int command:8; /* message type */
/* bits 28:21 */
diff --git a/trunk/arch/x86/include/asm/x86_init.h b/trunk/arch/x86/include/asm/x86_init.h
index 64642ad019fb..643ebf2e2ad8 100644
--- a/trunk/arch/x86/include/asm/x86_init.h
+++ b/trunk/arch/x86/include/asm/x86_init.h
@@ -83,11 +83,13 @@ struct x86_init_paging {
* boot cpu
* @tsc_pre_init: platform function called before TSC init
* @timer_init: initialize the platform timer (default PIT/HPET)
+ * @wallclock_init: init the wallclock device
*/
struct x86_init_timers {
void (*setup_percpu_clockev)(void);
void (*tsc_pre_init)(void);
void (*timer_init)(void);
+ void (*wallclock_init)(void);
};
/**
diff --git a/trunk/arch/x86/include/asm/xen/hypercall.h b/trunk/arch/x86/include/asm/xen/hypercall.h
index a3c28ae4025b..8508bfe52296 100644
--- a/trunk/arch/x86/include/asm/xen/hypercall.h
+++ b/trunk/arch/x86/include/asm/xen/hypercall.h
@@ -287,7 +287,7 @@ HYPERVISOR_fpu_taskswitch(int set)
static inline int
HYPERVISOR_sched_op(int cmd, void *arg)
{
- return _hypercall2(int, sched_op_new, cmd, arg);
+ return _hypercall2(int, sched_op, cmd, arg);
}
static inline long
@@ -422,10 +422,17 @@ HYPERVISOR_set_segment_base(int reg, unsigned long value)
#endif
static inline int
-HYPERVISOR_suspend(unsigned long srec)
+HYPERVISOR_suspend(unsigned long start_info_mfn)
{
- return _hypercall3(int, sched_op, SCHEDOP_shutdown,
- SHUTDOWN_suspend, srec);
+ struct sched_shutdown r = { .reason = SHUTDOWN_suspend };
+
+ /*
+ * For a PV guest the tools require that the start_info mfn be
+ * present in rdx/edx when the hypercall is made. Per the
+ * hypercall calling convention this is the third hypercall
+ * argument, which is start_info_mfn here.
+ */
+ return _hypercall3(int, sched_op, SCHEDOP_shutdown, &r, start_info_mfn);
}
static inline int
diff --git a/trunk/arch/x86/include/asm/xen/page.h b/trunk/arch/x86/include/asm/xen/page.h
index f25bdf238a33..c61934fbf22a 100644
--- a/trunk/arch/x86/include/asm/xen/page.h
+++ b/trunk/arch/x86/include/asm/xen/page.h
@@ -29,8 +29,10 @@ typedef struct xpaddr {
/**** MACHINE <-> PHYSICAL CONVERSION MACROS ****/
#define INVALID_P2M_ENTRY (~0UL)
-#define FOREIGN_FRAME_BIT (1UL<<31)
+#define FOREIGN_FRAME_BIT (1UL<<(BITS_PER_LONG-1))
+#define IDENTITY_FRAME_BIT (1UL<<(BITS_PER_LONG-2))
#define FOREIGN_FRAME(m) ((m) | FOREIGN_FRAME_BIT)
+#define IDENTITY_FRAME(m) ((m) | IDENTITY_FRAME_BIT)
/* Maximum amount of memory we can handle in a domain in pages */
#define MAX_DOMAIN_PAGES \
@@ -41,12 +43,18 @@ extern unsigned int machine_to_phys_order;
extern unsigned long get_phys_to_machine(unsigned long pfn);
extern bool set_phys_to_machine(unsigned long pfn, unsigned long mfn);
+extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
+extern unsigned long set_phys_range_identity(unsigned long pfn_s,
+ unsigned long pfn_e);
extern int m2p_add_override(unsigned long mfn, struct page *page);
extern int m2p_remove_override(struct page *page);
extern struct page *m2p_find_override(unsigned long mfn);
extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
+#ifdef CONFIG_XEN_DEBUG_FS
+extern int p2m_dump_show(struct seq_file *m, void *v);
+#endif
static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
unsigned long mfn;
@@ -57,7 +65,7 @@ static inline unsigned long pfn_to_mfn(unsigned long pfn)
mfn = get_phys_to_machine(pfn);
if (mfn != INVALID_P2M_ENTRY)
- mfn &= ~FOREIGN_FRAME_BIT;
+ mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
return mfn;
}
@@ -73,25 +81,44 @@ static inline int phys_to_machine_mapping_valid(unsigned long pfn)
static inline unsigned long mfn_to_pfn(unsigned long mfn)
{
unsigned long pfn;
+ int ret = 0;
if (xen_feature(XENFEAT_auto_translated_physmap))
return mfn;
+ if (unlikely((mfn >> machine_to_phys_order) != 0)) {
+ pfn = ~0;
+ goto try_override;
+ }
pfn = 0;
/*
* The array access can fail (e.g., device space beyond end of RAM).
* In such cases it doesn't matter what we return (we return garbage),
* but we must handle the fault without crashing!
*/
- __get_user(pfn, &machine_to_phys_mapping[mfn]);
-
- /*
- * If this appears to be a foreign mfn (because the pfn
- * doesn't map back to the mfn), then check the local override
- * table to see if there's a better pfn to use.
+ ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
+try_override:
+ /* ret might be < 0 if there are no entries in the m2p for mfn */
+ if (ret < 0)
+ pfn = ~0;
+ else if (get_phys_to_machine(pfn) != mfn)
+ /*
+ * If this appears to be a foreign mfn (because the pfn
+ * doesn't map back to the mfn), then check the local override
+ * table to see if there's a better pfn to use.
+ *
+ * m2p_find_override_pfn returns ~0 if it doesn't find anything.
+ */
+ pfn = m2p_find_override_pfn(mfn, ~0);
+
+ /*
+ * pfn is ~0 if there are no entries in the m2p for mfn or if the
+ * entry doesn't map back to the mfn and m2p_override doesn't have a
+ * valid entry for it.
*/
- if (get_phys_to_machine(pfn) != mfn)
- pfn = m2p_find_override_pfn(mfn, pfn);
+ if (pfn == ~0 &&
+ get_phys_to_machine(mfn) == IDENTITY_FRAME(mfn))
+ pfn = mfn;
return pfn;
}
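
The p2m can now mark non-RAM frames as identity-mapped (IDENTITY_FRAME), and
mfn_to_pfn() above falls back to the identity when neither the m2p table nor the
override table has an answer. A hedged sketch of how a caller might mark such a region,
using only set_phys_range_identity() as declared earlier in this hunk; the function name
and log text are illustrative, and the return value is taken here to be the number of
frames marked.

	static void __init mark_mmio_identity(unsigned long start_pfn,
					      unsigned long end_pfn)
	{
		unsigned long set = set_phys_range_identity(start_pfn, end_pfn);

		pr_info("marked %lu identity frames in [%lx, %lx)\n",
			set, start_pfn, end_pfn);
		/* for frames in this range, pfn_to_mfn(p) == p and mfn_to_pfn(p) == p */
	}
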
diff --git a/trunk/arch/x86/include/asm/xen/pci.h b/trunk/arch/x86/include/asm/xen/pci.h
index 2329b3eaf8d3..aa8620989162 100644
--- a/trunk/arch/x86/include/asm/xen/pci.h
+++ b/trunk/arch/x86/include/asm/xen/pci.h
@@ -27,16 +27,16 @@ static inline void __init xen_setup_pirqs(void)
* its own functions.
*/
struct xen_pci_frontend_ops {
- int (*enable_msi)(struct pci_dev *dev, int **vectors);
+ int (*enable_msi)(struct pci_dev *dev, int vectors[]);
void (*disable_msi)(struct pci_dev *dev);
- int (*enable_msix)(struct pci_dev *dev, int **vectors, int nvec);
+ int (*enable_msix)(struct pci_dev *dev, int vectors[], int nvec);
void (*disable_msix)(struct pci_dev *dev);
};
extern struct xen_pci_frontend_ops *xen_pci_frontend;
static inline int xen_pci_frontend_enable_msi(struct pci_dev *dev,
- int **vectors)
+ int vectors[])
{
if (xen_pci_frontend && xen_pci_frontend->enable_msi)
return xen_pci_frontend->enable_msi(dev, vectors);
@@ -48,7 +48,7 @@ static inline void xen_pci_frontend_disable_msi(struct pci_dev *dev)
xen_pci_frontend->disable_msi(dev);
}
static inline int xen_pci_frontend_enable_msix(struct pci_dev *dev,
- int **vectors, int nvec)
+ int vectors[], int nvec)
{
if (xen_pci_frontend && xen_pci_frontend->enable_msix)
return xen_pci_frontend->enable_msix(dev, vectors, nvec);
diff --git a/trunk/arch/x86/kernel/Makefile b/trunk/arch/x86/kernel/Makefile
index 778c5b93676d..743642f1a36c 100644
--- a/trunk/arch/x86/kernel/Makefile
+++ b/trunk/arch/x86/kernel/Makefile
@@ -67,9 +67,9 @@ obj-$(CONFIG_PCI) += early-quirks.o
apm-y := apm_32.o
obj-$(CONFIG_APM) += apm.o
obj-$(CONFIG_SMP) += smp.o
-obj-$(CONFIG_SMP) += smpboot.o tsc_sync.o
+obj-$(CONFIG_SMP) += smpboot.o
+obj-$(CONFIG_SMP) += tsc_sync.o
obj-$(CONFIG_SMP) += setup_percpu.o
-obj-$(CONFIG_X86_64_SMP) += tsc_sync.o
obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-y += apic/
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
@@ -109,6 +109,7 @@ obj-$(CONFIG_MICROCODE) += microcode.o
obj-$(CONFIG_X86_CHECK_BIOS_CORRUPTION) += check.o
obj-$(CONFIG_SWIOTLB) += pci-swiotlb.o
+obj-$(CONFIG_OF) += devicetree.o
###
# 64 bit specific files
diff --git a/trunk/arch/x86/kernel/acpi/boot.c b/trunk/arch/x86/kernel/acpi/boot.c
index b3a71137983a..9a966c579af5 100644
--- a/trunk/arch/x86/kernel/acpi/boot.c
+++ b/trunk/arch/x86/kernel/acpi/boot.c
@@ -72,6 +72,7 @@ u8 acpi_sci_flags __initdata;
int acpi_sci_override_gsi __initdata;
int acpi_skip_timer_override __initdata;
int acpi_use_timer_override __initdata;
+int acpi_fix_pin2_polarity __initdata;
#ifdef CONFIG_X86_LOCAL_APIC
static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
@@ -415,10 +416,15 @@ acpi_parse_int_src_ovr(struct acpi_subtable_header * header,
return 0;
}
- if (acpi_skip_timer_override &&
- intsrc->source_irq == 0 && intsrc->global_irq == 2) {
- printk(PREFIX "BIOS IRQ0 pin2 override ignored.\n");
- return 0;
+ if (intsrc->source_irq == 0 && intsrc->global_irq == 2) {
+ if (acpi_skip_timer_override) {
+ printk(PREFIX "BIOS IRQ0 pin2 override ignored.\n");
+ return 0;
+ }
+ if (acpi_fix_pin2_polarity && (intsrc->inti_flags & ACPI_MADT_POLARITY_MASK)) {
+ intsrc->inti_flags &= ~ACPI_MADT_POLARITY_MASK;
+ printk(PREFIX "BIOS IRQ0 pin2 override: forcing polarity to high active.\n");
+ }
}
mp_override_legacy_irq(intsrc->source_irq,
@@ -589,14 +595,8 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
nid = acpi_get_node(handle);
if (nid == -1 || !node_online(nid))
return;
-#ifdef CONFIG_X86_64
- apicid_to_node[physid] = nid;
+ set_apicid_to_node(physid, nid);
numa_set_node(cpu, nid);
-#else /* CONFIG_X86_32 */
- apicid_2_node[physid] = nid;
- cpu_to_node_map[cpu] = nid;
-#endif
-
#endif
}
diff --git a/trunk/arch/x86/kernel/amd_nb.c b/trunk/arch/x86/kernel/amd_nb.c
index 0a99f7198bc3..ed3c2e5b714a 100644
--- a/trunk/arch/x86/kernel/amd_nb.c
+++ b/trunk/arch/x86/kernel/amd_nb.c
@@ -12,7 +12,7 @@
static u32 *flush_words;
-struct pci_device_id amd_nb_misc_ids[] = {
+const struct pci_device_id amd_nb_misc_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB_MISC) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_10H_NB_MISC) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_MISC) },
@@ -20,6 +20,11 @@ struct pci_device_id amd_nb_misc_ids[] = {
};
EXPORT_SYMBOL(amd_nb_misc_ids);
+static struct pci_device_id amd_nb_link_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_15H_NB_LINK) },
+ {}
+};
+
const struct amd_nb_bus_dev_range amd_nb_bus_dev_ranges[] __initconst = {
{ 0x00, 0x18, 0x20 },
{ 0xff, 0x00, 0x20 },
@@ -31,7 +36,7 @@ struct amd_northbridge_info amd_northbridges;
EXPORT_SYMBOL(amd_northbridges);
static struct pci_dev *next_northbridge(struct pci_dev *dev,
- struct pci_device_id *ids)
+ const struct pci_device_id *ids)
{
do {
dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
@@ -45,7 +50,7 @@ int amd_cache_northbridges(void)
{
int i = 0;
struct amd_northbridge *nb;
- struct pci_dev *misc;
+ struct pci_dev *misc, *link;
if (amd_nb_num())
return 0;
@@ -64,10 +69,12 @@ int amd_cache_northbridges(void)
amd_northbridges.nb = nb;
amd_northbridges.num = i;
- misc = NULL;
+ link = misc = NULL;
for (i = 0; i != amd_nb_num(); i++) {
node_to_amd_nb(i)->misc = misc =
next_northbridge(misc, amd_nb_misc_ids);
+ node_to_amd_nb(i)->link = link =
+ next_northbridge(link, amd_nb_link_ids);
}
/* some CPU families (e.g. family 0x11) do not support GART */
@@ -85,6 +92,13 @@ int amd_cache_northbridges(void)
boot_cpu_data.x86_mask >= 0x1))
amd_northbridges.flags |= AMD_NB_L3_INDEX_DISABLE;
+ if (boot_cpu_data.x86 == 0x15)
+ amd_northbridges.flags |= AMD_NB_L3_INDEX_DISABLE;
+
+ /* L3 cache partitioning is supported on family 0x15 */
+ if (boot_cpu_data.x86 == 0x15)
+ amd_northbridges.flags |= AMD_NB_L3_PARTITIONING;
+
return 0;
}
EXPORT_SYMBOL_GPL(amd_cache_northbridges);
@@ -93,8 +107,9 @@ EXPORT_SYMBOL_GPL(amd_cache_northbridges);
they're useless anyways */
int __init early_is_amd_nb(u32 device)
{
- struct pci_device_id *id;
+ const struct pci_device_id *id;
u32 vendor = device & 0xffff;
+
device >>= 16;
for (id = amd_nb_misc_ids; id->vendor; id++)
if (vendor == id->vendor && device == id->device)
@@ -102,6 +117,65 @@ int __init early_is_amd_nb(u32 device)
return 0;
}
+int amd_get_subcaches(int cpu)
+{
+ struct pci_dev *link = node_to_amd_nb(amd_get_nb_id(cpu))->link;
+ unsigned int mask;
+ int cuid = 0;
+
+ if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING))
+ return 0;
+
+ pci_read_config_dword(link, 0x1d4, &mask);
+
+#ifdef CONFIG_SMP
+ cuid = cpu_data(cpu).compute_unit_id;
+#endif
+ return (mask >> (4 * cuid)) & 0xf;
+}
+
+int amd_set_subcaches(int cpu, int mask)
+{
+ static unsigned int reset, ban;
+ struct amd_northbridge *nb = node_to_amd_nb(amd_get_nb_id(cpu));
+ unsigned int reg;
+ int cuid = 0;
+
+ if (!amd_nb_has_feature(AMD_NB_L3_PARTITIONING) || mask > 0xf)
+ return -EINVAL;
+
+ /* if necessary, collect reset state of L3 partitioning and BAN mode */
+ if (reset == 0) {
+ pci_read_config_dword(nb->link, 0x1d4, &reset);
+ pci_read_config_dword(nb->misc, 0x1b8, &ban);
+ ban &= 0x180000;
+ }
+
+ /* deactivate BAN mode if any subcaches are to be disabled */
+ if (mask != 0xf) {
+ pci_read_config_dword(nb->misc, 0x1b8, &reg);
+ pci_write_config_dword(nb->misc, 0x1b8, reg & ~0x180000);
+ }
+
+#ifdef CONFIG_SMP
+ cuid = cpu_data(cpu).compute_unit_id;
+#endif
+ mask <<= 4 * cuid;
+ mask |= (0xf ^ (1 << cuid)) << 26;
+
+ pci_write_config_dword(nb->link, 0x1d4, mask);
+
+ /* reset BAN mode if L3 partitioning returned to reset state */
+ pci_read_config_dword(nb->link, 0x1d4, &reg);
+ if (reg == reset) {
+ pci_read_config_dword(nb->misc, 0x1b8, &reg);
+ reg &= ~0x180000;
+ pci_write_config_dword(nb->misc, 0x1b8, reg | ban);
+ }
+
+ return 0;
+}
+
int amd_cache_gart(void)
{
int i;
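
amd_get_subcaches() and amd_set_subcaches() expose the family-0x15 L3 partitioning bits
through the new northbridge "link" PCI function cached above. A hypothetical caller
might look like the sketch below; the helper name and the chosen mask are made up, and
the intended consumer in this series appears to be the L3 cache sysfs code.

	static int shrink_l3_for_cpu(int cpu)
	{
		int ret, mask;

		/* keep only subcaches 0 and 1 for this CPU's compute unit */
		ret = amd_set_subcaches(cpu, 0x3);
		if (ret)
			return ret;

		mask = amd_get_subcaches(cpu);
		pr_info("cpu %d: active L3 subcache mask is now 0x%x\n", cpu, mask);
		return 0;
	}
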
diff --git a/trunk/arch/x86/kernel/apb_timer.c b/trunk/arch/x86/kernel/apb_timer.c
index 51ef31a89be9..1293c709ee85 100644
--- a/trunk/arch/x86/kernel/apb_timer.c
+++ b/trunk/arch/x86/kernel/apb_timer.c
@@ -284,7 +284,7 @@ static int __init apbt_clockevent_register(void)
memcpy(&adev->evt, &apbt_clockevent, sizeof(struct clock_event_device));
if (mrst_timer_options == MRST_TIMER_LAPIC_APBT) {
- apbt_clockevent.rating = APBT_CLOCKEVENT_RATING - 100;
+ adev->evt.rating = APBT_CLOCKEVENT_RATING - 100;
global_clock_event = &adev->evt;
printk(KERN_DEBUG "%s clockevent registered as global\n",
global_clock_event->name);
@@ -508,64 +508,12 @@ static int apbt_next_event(unsigned long delta,
return 0;
}
-/*
- * APB timer clock is not in sync with pclk on Langwell, which translates to
- * unreliable read value caused by sampling error. the error does not add up
- * overtime and only happens when sampling a 0 as a 1 by mistake. so the time
- * would go backwards. the following code is trying to prevent time traveling
- * backwards. little bit paranoid.
- */
static cycle_t apbt_read_clocksource(struct clocksource *cs)
{
- unsigned long t0, t1, t2;
- static unsigned long last_read;
-
-bad_count:
- t1 = apbt_readl(phy_cs_timer_id,
- APBTMR_N_CURRENT_VALUE);
- t2 = apbt_readl(phy_cs_timer_id,
- APBTMR_N_CURRENT_VALUE);
- if (unlikely(t1 < t2)) {
- pr_debug("APBT: read current count error %lx:%lx:%lx\n",
- t1, t2, t2 - t1);
- goto bad_count;
- }
- /*
- * check against cached last read, makes sure time does not go back.
- * it could be a normal rollover but we will do tripple check anyway
- */
- if (unlikely(t2 > last_read)) {
- /* check if we have a normal rollover */
- unsigned long raw_intr_status =
- apbt_readl_reg(APBTMRS_RAW_INT_STATUS);
- /*
- * cs timer interrupt is masked but raw intr bit is set if
- * rollover occurs. then we read EOI reg to clear it.
- */
- if (raw_intr_status & (1 << phy_cs_timer_id)) {
- apbt_readl(phy_cs_timer_id, APBTMR_N_EOI);
- goto out;
- }
- pr_debug("APB CS going back %lx:%lx:%lx ",
- t2, last_read, t2 - last_read);
-bad_count_x3:
- pr_debug("triple check enforced\n");
- t0 = apbt_readl(phy_cs_timer_id,
- APBTMR_N_CURRENT_VALUE);
- udelay(1);
- t1 = apbt_readl(phy_cs_timer_id,
- APBTMR_N_CURRENT_VALUE);
- udelay(1);
- t2 = apbt_readl(phy_cs_timer_id,
- APBTMR_N_CURRENT_VALUE);
- if ((t2 > t1) || (t1 > t0)) {
- printk(KERN_ERR "Error: APB CS tripple check failed\n");
- goto bad_count_x3;
- }
- }
-out:
- last_read = t2;
- return (cycle_t)~t2;
+ unsigned long current_count;
+
+ current_count = apbt_readl(phy_cs_timer_id, APBTMR_N_CURRENT_VALUE);
+ return (cycle_t)~current_count;
}
static int apbt_clocksource_register(void)
diff --git a/trunk/arch/x86/kernel/aperture_64.c b/trunk/arch/x86/kernel/aperture_64.c
index 5955a7800a96..7b1e8e10b89c 100644
--- a/trunk/arch/x86/kernel/aperture_64.c
+++ b/trunk/arch/x86/kernel/aperture_64.c
@@ -13,7 +13,7 @@
#include
#include
#include
-#include <linux/bootmem.h>
+#include <linux/memblock.h>
#include
#include
#include
@@ -57,7 +57,7 @@ static void __init insert_aperture_resource(u32 aper_base, u32 aper_size)
static u32 __init allocate_aperture(void)
{
u32 aper_size;
- void *p;
+ unsigned long addr;
/* aper_size should <= 1G */
if (fallback_aper_order > 5)
@@ -83,27 +83,26 @@ static u32 __init allocate_aperture(void)
* so don't use 512M below as gart iommu, leave the space for kernel
* code for safe
*/
- p = __alloc_bootmem_nopanic(aper_size, aper_size, 512ULL<<20);
+ addr = memblock_find_in_range(0, 1ULL<<32, aper_size, 512ULL<<20);
+ if (addr == MEMBLOCK_ERROR || addr + aper_size > 0xffffffff) {
+ printk(KERN_ERR
+ "Cannot allocate aperture memory hole (%lx,%uK)\n",
+ addr, aper_size>>10);
+ return 0;
+ }
+ memblock_x86_reserve_range(addr, addr + aper_size, "aperture64");
/*
* Kmemleak should not scan this block as it may not be mapped via the
* kernel direct mapping.
*/
- kmemleak_ignore(p);
- if (!p || __pa(p)+aper_size > 0xffffffff) {
- printk(KERN_ERR
- "Cannot allocate aperture memory hole (%p,%uK)\n",
- p, aper_size>>10);
- if (p)
- free_bootmem(__pa(p), aper_size);
- return 0;
- }
+ kmemleak_ignore(phys_to_virt(addr));
printk(KERN_INFO "Mapping aperture over %d KB of RAM @ %lx\n",
- aper_size >> 10, __pa(p));
- insert_aperture_resource((u32)__pa(p), aper_size);
- register_nosave_region((u32)__pa(p) >> PAGE_SHIFT,
- (u32)__pa(p+aper_size) >> PAGE_SHIFT);
+ aper_size >> 10, addr);
+ insert_aperture_resource((u32)addr, aper_size);
+ register_nosave_region(addr >> PAGE_SHIFT,
+ (addr+aper_size) >> PAGE_SHIFT);
- return (u32)__pa(p);
+ return (u32)addr;
}
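
allocate_aperture() now goes through memblock instead of bootmem: memblock_find_in_range()
only proposes a candidate range, and nothing is claimed until memblock_x86_reserve_range()
runs. The sketch below shows the same find-then-reserve idiom in isolation; the function
name is invented and error handling is reduced to the MEMBLOCK_ERROR check used above.

	static unsigned long __init reserve_chunk_below_4g(unsigned long size,
							   unsigned long align)
	{
		unsigned long addr;

		addr = memblock_find_in_range(0, 1ULL << 32, size, align);
		if (addr == MEMBLOCK_ERROR)
			return 0;		/* nothing suitable found */

		memblock_x86_reserve_range(addr, addr + size, "example");
		return addr;
	}
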
diff --git a/trunk/arch/x86/kernel/apic/apic.c b/trunk/arch/x86/kernel/apic/apic.c
index 76b96d74978a..966673f44141 100644
--- a/trunk/arch/x86/kernel/apic/apic.c
+++ b/trunk/arch/x86/kernel/apic/apic.c
@@ -43,6 +43,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -78,12 +79,21 @@ EXPORT_EARLY_PER_CPU_SYMBOL(x86_cpu_to_apicid);
EXPORT_EARLY_PER_CPU_SYMBOL(x86_bios_cpu_apicid);
#ifdef CONFIG_X86_32
+
+/*
+ * On x86_32, the mapping between cpu and logical apicid may vary
+ * depending on apic in use. The following early percpu variable is
+ * used for the mapping. This is where the behaviors of x86_64 and 32
+ * actually diverge. Let's keep it ugly for now.
+ */
+DEFINE_EARLY_PER_CPU(int, x86_cpu_to_logical_apicid, BAD_APICID);
+
/*
* Knob to control our willingness to enable the local APIC.
*
* +1=force-enable
*/
-static int force_enable_local_apic;
+static int force_enable_local_apic __initdata;
/*
* APIC command line parameters
*/
@@ -153,7 +163,7 @@ early_param("nox2apic", setup_nox2apic);
unsigned long mp_lapic_addr;
int disable_apic;
/* Disable local APIC timer from the kernel commandline or via dmi quirk */
-static int disable_apic_timer __cpuinitdata;
+static int disable_apic_timer __initdata;
/* Local APIC timer works in C2 */
int local_apic_timer_c2_ok;
EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
@@ -177,29 +187,8 @@ static struct resource lapic_resource = {
static unsigned int calibration_result;
-static int lapic_next_event(unsigned long delta,
- struct clock_event_device *evt);
-static void lapic_timer_setup(enum clock_event_mode mode,
- struct clock_event_device *evt);
-static void lapic_timer_broadcast(const struct cpumask *mask);
static void apic_pm_activate(void);
-/*
- * The local apic timer can be used for any function which is CPU local.
- */
-static struct clock_event_device lapic_clockevent = {
- .name = "lapic",
- .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT
- | CLOCK_EVT_FEAT_C3STOP | CLOCK_EVT_FEAT_DUMMY,
- .shift = 32,
- .set_mode = lapic_timer_setup,
- .set_next_event = lapic_next_event,
- .broadcast = lapic_timer_broadcast,
- .rating = 100,
- .irq = -1,
-};
-static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
-
static unsigned long apic_phys;
/*
@@ -238,7 +227,7 @@ static int modern_apic(void)
* right after this call apic become NOOP driven
* so apic->write/read doesn't do anything
*/
-void apic_disable(void)
+static void __init apic_disable(void)
{
pr_info("APIC: switched to apic NOOP\n");
apic = &apic_noop;
@@ -282,23 +271,6 @@ u64 native_apic_icr_read(void)
return icr1 | ((u64)icr2 << 32);
}
-/**
- * enable_NMI_through_LVT0 - enable NMI through local vector table 0
- */
-void __cpuinit enable_NMI_through_LVT0(void)
-{
- unsigned int v;
-
- /* unmask and set to NMI */
- v = APIC_DM_NMI;
-
- /* Level triggered for 82489DX (32bit mode) */
- if (!lapic_is_integrated())
- v |= APIC_LVT_LEVEL_TRIGGER;
-
- apic_write(APIC_LVT0, v);
-}
-
#ifdef CONFIG_X86_32
/**
* get_physical_broadcast - Get number of physical broadcast IDs
@@ -508,6 +480,23 @@ static void lapic_timer_broadcast(const struct cpumask *mask)
#endif
}
+
+/*
+ * The local apic timer can be used for any function which is CPU local.
+ */
+static struct clock_event_device lapic_clockevent = {
+ .name = "lapic",
+ .features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT
+ | CLOCK_EVT_FEAT_C3STOP | CLOCK_EVT_FEAT_DUMMY,
+ .shift = 32,
+ .set_mode = lapic_timer_setup,
+ .set_next_event = lapic_next_event,
+ .broadcast = lapic_timer_broadcast,
+ .rating = 100,
+ .irq = -1,
+};
+static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+
/*
* Setup the local APIC timer for this CPU. Copy the initialized values
* of the boot CPU and register the clock event in the framework.
@@ -1209,7 +1198,7 @@ void __cpuinit setup_local_APIC(void)
rdtscll(tsc);
if (disable_apic) {
- arch_disable_smp_support();
+ disable_ioapic_support();
return;
}
@@ -1237,6 +1226,19 @@ void __cpuinit setup_local_APIC(void)
*/
apic->init_apic_ldr();
+#ifdef CONFIG_X86_32
+ /*
+ * APIC LDR is initialized. If logical_apicid mapping was
+ * initialized during get_smp_config(), make sure it matches the
+ * actual value.
+ */
+ i = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
+ WARN_ON(i != BAD_APICID && i != logical_smp_processor_id());
+ /* always use the value from LDR */
+ early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
+ logical_smp_processor_id();
+#endif
+
/*
* Set Task Priority to 'accept all'. We never change this
* later on.
@@ -1448,7 +1450,7 @@ int __init enable_IR(void)
void __init enable_IR_x2apic(void)
{
unsigned long flags;
- struct IO_APIC_route_entry **ioapic_entries = NULL;
+ struct IO_APIC_route_entry **ioapic_entries;
int ret, x2apic_enabled = 0;
int dmar_table_init_ret;
@@ -1537,7 +1539,7 @@ static int __init detect_init_APIC(void)
}
#else
-static int apic_verify(void)
+static int __init apic_verify(void)
{
u32 features, h, l;
@@ -1562,7 +1564,7 @@ static int apic_verify(void)
return 0;
}
-int apic_force_enable(void)
+int __init apic_force_enable(unsigned long addr)
{
u32 h, l;
@@ -1578,7 +1580,7 @@ int apic_force_enable(void)
if (!(l & MSR_IA32_APICBASE_ENABLE)) {
pr_info("Local APIC disabled by BIOS -- reenabling.\n");
l &= ~MSR_IA32_APICBASE_BASE;
- l |= MSR_IA32_APICBASE_ENABLE | APIC_DEFAULT_PHYS_BASE;
+ l |= MSR_IA32_APICBASE_ENABLE | addr;
wrmsr(MSR_IA32_APICBASE, l, h);
enabled_via_apicbase = 1;
}
@@ -1619,7 +1621,7 @@ static int __init detect_init_APIC(void)
"you can enable it with \"lapic\"\n");
return -1;
}
- if (apic_force_enable())
+ if (apic_force_enable(APIC_DEFAULT_PHYS_BASE))
return -1;
} else {
if (apic_verify())
@@ -1930,17 +1932,6 @@ void __cpuinit generic_processor_info(int apicid, int version)
{
int cpu;
- /*
- * Validate version
- */
- if (version == 0x0) {
- pr_warning("BIOS bug, APIC version is 0 for CPU#%d! "
- "fixing up to 0x10. (tell your hw vendor)\n",
- version);
- version = 0x10;
- }
- apic_version[apicid] = version;
-
if (num_processors >= nr_cpu_ids) {
int max = nr_cpu_ids;
int thiscpu = max + disabled_cpus;
@@ -1954,22 +1945,34 @@ void __cpuinit generic_processor_info(int apicid, int version)
}
num_processors++;
- cpu = cpumask_next_zero(-1, cpu_present_mask);
-
- if (version != apic_version[boot_cpu_physical_apicid])
- WARN_ONCE(1,
- "ACPI: apic version mismatch, bootcpu: %x cpu %d: %x\n",
- apic_version[boot_cpu_physical_apicid], cpu, version);
-
- physid_set(apicid, phys_cpu_present_map);
if (apicid == boot_cpu_physical_apicid) {
/*
* x86_bios_cpu_apicid is required to have processors listed
* in same order as logical cpu numbers. Hence the first
* entry is BSP, and so on.
+ * boot_cpu_init() already hold bit 0 in cpu_present_mask
+ * for BSP.
*/
cpu = 0;
+ } else
+ cpu = cpumask_next_zero(-1, cpu_present_mask);
+
+ /*
+ * Validate version
+ */
+ if (version == 0x0) {
+ pr_warning("BIOS bug: APIC version is 0 for CPU %d/0x%x, fixing up to 0x10\n",
+ cpu, apicid);
+ version = 0x10;
}
+ apic_version[apicid] = version;
+
+ if (version != apic_version[boot_cpu_physical_apicid]) {
+ pr_warning("BIOS bug: APIC version mismatch, boot CPU: %x, CPU %d: version %x\n",
+ apic_version[boot_cpu_physical_apicid], cpu, version);
+ }
+
+ physid_set(apicid, phys_cpu_present_map);
if (apicid > max_physical_apicid)
max_physical_apicid = apicid;
@@ -1977,7 +1980,10 @@ void __cpuinit generic_processor_info(int apicid, int version)
early_per_cpu(x86_cpu_to_apicid, cpu) = apicid;
early_per_cpu(x86_bios_cpu_apicid, cpu) = apicid;
#endif
-
+#ifdef CONFIG_X86_32
+ early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
+ apic->x86_32_early_logical_apicid(cpu);
+#endif
set_cpu_possible(cpu, true);
set_cpu_present(cpu, true);
}
@@ -1998,10 +2004,14 @@ void default_init_apic_ldr(void)
}
#ifdef CONFIG_X86_32
-int default_apicid_to_node(int logical_apicid)
+int default_x86_32_numa_cpu_node(int cpu)
{
-#ifdef CONFIG_SMP
- return apicid_2_node[hard_smp_processor_id()];
+#ifdef CONFIG_NUMA
+ int apicid = early_per_cpu(x86_cpu_to_apicid, cpu);
+
+ if (apicid != BAD_APICID)
+ return __apicid_to_node[apicid];
+ return NUMA_NO_NODE;
#else
return 0;
#endif
diff --git a/trunk/arch/x86/kernel/apic/apic_flat_64.c b/trunk/arch/x86/kernel/apic/apic_flat_64.c
index 09d3b17ce0c2..5652d31fe108 100644
--- a/trunk/arch/x86/kernel/apic/apic_flat_64.c
+++ b/trunk/arch/x86/kernel/apic/apic_flat_64.c
@@ -185,8 +185,6 @@ struct apic apic_flat = {
.ioapic_phys_id_map = NULL,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = NULL,
- .cpu_to_logical_apicid = NULL,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = NULL,
.setup_portio_remap = NULL,
@@ -337,8 +335,6 @@ struct apic apic_physflat = {
.ioapic_phys_id_map = NULL,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = NULL,
- .cpu_to_logical_apicid = NULL,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = NULL,
.setup_portio_remap = NULL,
diff --git a/trunk/arch/x86/kernel/apic/apic_noop.c b/trunk/arch/x86/kernel/apic/apic_noop.c
index e31b9ffe25f5..f1baa2dc087a 100644
--- a/trunk/arch/x86/kernel/apic/apic_noop.c
+++ b/trunk/arch/x86/kernel/apic/apic_noop.c
@@ -54,11 +54,6 @@ static u64 noop_apic_icr_read(void)
return 0;
}
-static int noop_cpu_to_logical_apicid(int cpu)
-{
- return 0;
-}
-
static int noop_phys_pkg_id(int cpuid_apic, int index_msb)
{
return 0;
@@ -113,12 +108,6 @@ static void noop_vector_allocation_domain(int cpu, struct cpumask *retmask)
cpumask_set_cpu(cpu, retmask);
}
-int noop_apicid_to_node(int logical_apicid)
-{
- /* we're always on node 0 */
- return 0;
-}
-
static u32 noop_apic_read(u32 reg)
{
WARN_ON_ONCE((cpu_has_apic && !disable_apic));
@@ -130,6 +119,14 @@ static void noop_apic_write(u32 reg, u32 v)
WARN_ON_ONCE(cpu_has_apic && !disable_apic);
}
+#ifdef CONFIG_X86_32
+static int noop_x86_32_numa_cpu_node(int cpu)
+{
+ /* we're always on node 0 */
+ return 0;
+}
+#endif
+
struct apic apic_noop = {
.name = "noop",
.probe = noop_probe,
@@ -153,9 +150,7 @@ struct apic apic_noop = {
.ioapic_phys_id_map = default_ioapic_phys_id_map,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = noop_apicid_to_node,
- .cpu_to_logical_apicid = noop_cpu_to_logical_apicid,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = physid_set_mask_of_physid,
@@ -197,4 +192,9 @@ struct apic apic_noop = {
.icr_write = noop_apic_icr_write,
.wait_icr_idle = noop_apic_wait_icr_idle,
.safe_wait_icr_idle = noop_safe_apic_wait_icr_idle,
+
+#ifdef CONFIG_X86_32
+ .x86_32_early_logical_apicid = noop_x86_32_early_logical_apicid,
+ .x86_32_numa_cpu_node = noop_x86_32_numa_cpu_node,
+#endif
};
diff --git a/trunk/arch/x86/kernel/apic/bigsmp_32.c b/trunk/arch/x86/kernel/apic/bigsmp_32.c
index cb804c5091b9..541a2e431659 100644
--- a/trunk/arch/x86/kernel/apic/bigsmp_32.c
+++ b/trunk/arch/x86/kernel/apic/bigsmp_32.c
@@ -45,6 +45,12 @@ static unsigned long bigsmp_check_apicid_present(int bit)
return 1;
}
+static int bigsmp_early_logical_apicid(int cpu)
+{
+ /* on bigsmp, logical apicid is the same as physical */
+ return early_per_cpu(x86_cpu_to_apicid, cpu);
+}
+
static inline unsigned long calculate_ldr(int cpu)
{
unsigned long val, id;
@@ -80,11 +86,6 @@ static void bigsmp_setup_apic_routing(void)
nr_ioapics);
}
-static int bigsmp_apicid_to_node(int logical_apicid)
-{
- return apicid_2_node[hard_smp_processor_id()];
-}
-
static int bigsmp_cpu_present_to_apicid(int mps_cpu)
{
if (mps_cpu < nr_cpu_ids)
@@ -93,14 +94,6 @@ static int bigsmp_cpu_present_to_apicid(int mps_cpu)
return BAD_APICID;
}
-/* Mapping from cpu number to logical apicid */
-static inline int bigsmp_cpu_to_logical_apicid(int cpu)
-{
- if (cpu >= nr_cpu_ids)
- return BAD_APICID;
- return cpu_physical_id(cpu);
-}
-
static void bigsmp_ioapic_phys_id_map(physid_mask_t *phys_map, physid_mask_t *retmap)
{
/* For clustered we don't have a good way to do this yet - hack */
@@ -115,7 +108,11 @@ static int bigsmp_check_phys_apicid_present(int phys_apicid)
/* As we are using single CPU as destination, pick only one CPU here */
static unsigned int bigsmp_cpu_mask_to_apicid(const struct cpumask *cpumask)
{
- return bigsmp_cpu_to_logical_apicid(cpumask_first(cpumask));
+ int cpu = cpumask_first(cpumask);
+
+ if (cpu < nr_cpu_ids)
+ return cpu_physical_id(cpu);
+ return BAD_APICID;
}
static unsigned int bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
@@ -129,9 +126,9 @@ static unsigned int bigsmp_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
*/
for_each_cpu_and(cpu, cpumask, andmask) {
if (cpumask_test_cpu(cpu, cpu_online_mask))
- break;
+ return cpu_physical_id(cpu);
}
- return bigsmp_cpu_to_logical_apicid(cpu);
+ return BAD_APICID;
}
static int bigsmp_phys_pkg_id(int cpuid_apic, int index_msb)
@@ -219,8 +216,6 @@ struct apic apic_bigsmp = {
.ioapic_phys_id_map = bigsmp_ioapic_phys_id_map,
.setup_apic_routing = bigsmp_setup_apic_routing,
.multi_timer_check = NULL,
- .apicid_to_node = bigsmp_apicid_to_node,
- .cpu_to_logical_apicid = bigsmp_cpu_to_logical_apicid,
.cpu_present_to_apicid = bigsmp_cpu_present_to_apicid,
.apicid_to_cpu_present = physid_set_mask_of_physid,
.setup_portio_remap = NULL,
@@ -256,4 +251,7 @@ struct apic apic_bigsmp = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = bigsmp_early_logical_apicid,
+ .x86_32_numa_cpu_node = default_x86_32_numa_cpu_node,
};
diff --git a/trunk/arch/x86/kernel/apic/es7000_32.c b/trunk/arch/x86/kernel/apic/es7000_32.c
index 8593582d8022..3e9de4854c5b 100644
--- a/trunk/arch/x86/kernel/apic/es7000_32.c
+++ b/trunk/arch/x86/kernel/apic/es7000_32.c
@@ -460,6 +460,12 @@ static unsigned long es7000_check_apicid_present(int bit)
return physid_isset(bit, phys_cpu_present_map);
}
+static int es7000_early_logical_apicid(int cpu)
+{
+ /* on es7000, logical apicid is the same as physical */
+ return early_per_cpu(x86_bios_cpu_apicid, cpu);
+}
+
static unsigned long calculate_ldr(int cpu)
{
unsigned long id = per_cpu(x86_bios_cpu_apicid, cpu);
@@ -504,12 +510,11 @@ static void es7000_setup_apic_routing(void)
nr_ioapics, cpumask_bits(es7000_target_cpus())[0]);
}
-static int es7000_apicid_to_node(int logical_apicid)
+static int es7000_numa_cpu_node(int cpu)
{
return 0;
}
-
static int es7000_cpu_present_to_apicid(int mps_cpu)
{
if (!mps_cpu)
@@ -528,18 +533,6 @@ static void es7000_apicid_to_cpu_present(int phys_apicid, physid_mask_t *retmap)
++cpu_id;
}
-/* Mapping from cpu number to logical apicid */
-static int es7000_cpu_to_logical_apicid(int cpu)
-{
-#ifdef CONFIG_SMP
- if (cpu >= nr_cpu_ids)
- return BAD_APICID;
- return cpu_2_logical_apicid[cpu];
-#else
- return logical_smp_processor_id();
-#endif
-}
-
static void es7000_ioapic_phys_id_map(physid_mask_t *phys_map, physid_mask_t *retmap)
{
/* For clustered we don't have a good way to do this yet - hack */
@@ -561,7 +554,7 @@ static unsigned int es7000_cpu_mask_to_apicid(const struct cpumask *cpumask)
* The cpus in the mask must all be on the apic cluster.
*/
for_each_cpu(cpu, cpumask) {
- int new_apicid = es7000_cpu_to_logical_apicid(cpu);
+ int new_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
WARN(1, "Not a valid mask!");
@@ -578,7 +571,7 @@ static unsigned int
es7000_cpu_mask_to_apicid_and(const struct cpumask *inmask,
const struct cpumask *andmask)
{
- int apicid = es7000_cpu_to_logical_apicid(0);
+ int apicid = early_per_cpu(x86_cpu_to_logical_apicid, 0);
cpumask_var_t cpumask;
if (!alloc_cpumask_var(&cpumask, GFP_ATOMIC))
@@ -655,8 +648,6 @@ struct apic __refdata apic_es7000_cluster = {
.ioapic_phys_id_map = es7000_ioapic_phys_id_map,
.setup_apic_routing = es7000_setup_apic_routing,
.multi_timer_check = NULL,
- .apicid_to_node = es7000_apicid_to_node,
- .cpu_to_logical_apicid = es7000_cpu_to_logical_apicid,
.cpu_present_to_apicid = es7000_cpu_present_to_apicid,
.apicid_to_cpu_present = es7000_apicid_to_cpu_present,
.setup_portio_remap = NULL,
@@ -695,6 +686,9 @@ struct apic __refdata apic_es7000_cluster = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = es7000_early_logical_apicid,
+ .x86_32_numa_cpu_node = es7000_numa_cpu_node,
};
struct apic __refdata apic_es7000 = {
@@ -720,8 +714,6 @@ struct apic __refdata apic_es7000 = {
.ioapic_phys_id_map = es7000_ioapic_phys_id_map,
.setup_apic_routing = es7000_setup_apic_routing,
.multi_timer_check = NULL,
- .apicid_to_node = es7000_apicid_to_node,
- .cpu_to_logical_apicid = es7000_cpu_to_logical_apicid,
.cpu_present_to_apicid = es7000_cpu_present_to_apicid,
.apicid_to_cpu_present = es7000_apicid_to_cpu_present,
.setup_portio_remap = NULL,
@@ -758,4 +750,7 @@ struct apic __refdata apic_es7000 = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = es7000_early_logical_apicid,
+ .x86_32_numa_cpu_node = es7000_numa_cpu_node,
};
diff --git a/trunk/arch/x86/kernel/apic/hw_nmi.c b/trunk/arch/x86/kernel/apic/hw_nmi.c
index 79fd43ca6f96..c4e557a1ebb6 100644
--- a/trunk/arch/x86/kernel/apic/hw_nmi.c
+++ b/trunk/arch/x86/kernel/apic/hw_nmi.c
@@ -83,7 +83,6 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
arch_spin_lock(&lock);
printk(KERN_WARNING "NMI backtrace for cpu %d\n", cpu);
show_regs(regs);
- dump_stack();
arch_spin_unlock(&lock);
cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
return NOTIFY_STOP;
diff --git a/trunk/arch/x86/kernel/apic/io_apic.c b/trunk/arch/x86/kernel/apic/io_apic.c
index ca9e2a3545a9..4b5ebd26f565 100644
--- a/trunk/arch/x86/kernel/apic/io_apic.c
+++ b/trunk/arch/x86/kernel/apic/io_apic.c
@@ -108,7 +108,10 @@ DECLARE_BITMAP(mp_bus_not_pci, MAX_MP_BUSSES);
int skip_ioapic_setup;
-void arch_disable_smp_support(void)
+/**
+ * disable_ioapic_support() - disables ioapic support at runtime
+ */
+void disable_ioapic_support(void)
{
#ifdef CONFIG_PCI
noioapicquirk = 1;
@@ -120,11 +123,14 @@ void arch_disable_smp_support(void)
static int __init parse_noapic(char *str)
{
/* disable IO-APIC */
- arch_disable_smp_support();
+ disable_ioapic_support();
return 0;
}
early_param("noapic", parse_noapic);
+static int io_apic_setup_irq_pin_once(unsigned int irq, int node,
+ struct io_apic_irq_attr *attr);
+
/* Will be called in mpparse/acpi/sfi codes for saving IRQ info */
void mp_save_irq(struct mpc_intsrc *m)
{
@@ -181,7 +187,7 @@ int __init arch_early_irq_init(void)
irq_reserve_irqs(0, legacy_pic->nr_legacy_irqs);
for (i = 0; i < count; i++) {
- set_irq_chip_data(i, &cfg[i]);
+ irq_set_chip_data(i, &cfg[i]);
zalloc_cpumask_var_node(&cfg[i].domain, GFP_KERNEL, node);
zalloc_cpumask_var_node(&cfg[i].old_domain, GFP_KERNEL, node);
/*
@@ -200,7 +206,7 @@ int __init arch_early_irq_init(void)
#ifdef CONFIG_SPARSE_IRQ
static struct irq_cfg *irq_cfg(unsigned int irq)
{
- return get_irq_chip_data(irq);
+ return irq_get_chip_data(irq);
}
static struct irq_cfg *alloc_irq_cfg(unsigned int irq, int node)
@@ -226,7 +232,7 @@ static void free_irq_cfg(unsigned int at, struct irq_cfg *cfg)
{
if (!cfg)
return;
- set_irq_chip_data(at, NULL);
+ irq_set_chip_data(at, NULL);
free_cpumask_var(cfg->domain);
free_cpumask_var(cfg->old_domain);
kfree(cfg);
@@ -256,14 +262,14 @@ static struct irq_cfg *alloc_irq_and_cfg_at(unsigned int at, int node)
if (res < 0) {
if (res != -EEXIST)
return NULL;
- cfg = get_irq_chip_data(at);
+ cfg = irq_get_chip_data(at);
if (cfg)
return cfg;
}
cfg = alloc_irq_cfg(at, node);
if (cfg)
- set_irq_chip_data(at, cfg);
+ irq_set_chip_data(at, cfg);
else
irq_free_desc(at);
return cfg;
@@ -818,7 +824,7 @@ static int EISA_ELCR(unsigned int irq)
#define default_MCA_trigger(idx) (1)
#define default_MCA_polarity(idx) default_ISA_polarity(idx)
-static int MPBIOS_polarity(int idx)
+static int irq_polarity(int idx)
{
int bus = mp_irqs[idx].srcbus;
int polarity;
@@ -860,7 +866,7 @@ static int MPBIOS_polarity(int idx)
return polarity;
}
-static int MPBIOS_trigger(int idx)
+static int irq_trigger(int idx)
{
int bus = mp_irqs[idx].srcbus;
int trigger;
@@ -932,16 +938,6 @@ static int MPBIOS_trigger(int idx)
return trigger;
}
-static inline int irq_polarity(int idx)
-{
- return MPBIOS_polarity(idx);
-}
-
-static inline int irq_trigger(int idx)
-{
- return MPBIOS_trigger(idx);
-}
-
static int pin_2_irq(int idx, int apic, int pin)
{
int irq;
@@ -1189,7 +1185,7 @@ void __setup_vector_irq(int cpu)
raw_spin_lock(&vector_lock);
/* Mark the inuse vectors */
for_each_active_irq(irq) {
- cfg = get_irq_chip_data(irq);
+ cfg = irq_get_chip_data(irq);
if (!cfg)
continue;
/*
@@ -1220,10 +1216,6 @@ void __setup_vector_irq(int cpu)
static struct irq_chip ioapic_chip;
static struct irq_chip ir_ioapic_chip;
-#define IOAPIC_AUTO -1
-#define IOAPIC_EDGE 0
-#define IOAPIC_LEVEL 1
-
#ifdef CONFIG_X86_32
static inline int IO_APIC_irq_trigger(int irq)
{
@@ -1248,35 +1240,31 @@ static inline int IO_APIC_irq_trigger(int irq)
}
#endif
-static void ioapic_register_intr(unsigned int irq, unsigned long trigger)
+static void ioapic_register_intr(unsigned int irq, struct irq_cfg *cfg,
+ unsigned long trigger)
{
+ struct irq_chip *chip = &ioapic_chip;
+ irq_flow_handler_t hdl;
+ bool fasteoi;
if ((trigger == IOAPIC_AUTO && IO_APIC_irq_trigger(irq)) ||
- trigger == IOAPIC_LEVEL)
+ trigger == IOAPIC_LEVEL) {
irq_set_status_flags(irq, IRQ_LEVEL);
- else
+ fasteoi = true;
+ } else {
irq_clear_status_flags(irq, IRQ_LEVEL);
+ fasteoi = false;
+ }
- if (irq_remapped(get_irq_chip_data(irq))) {
+ if (irq_remapped(cfg)) {
irq_set_status_flags(irq, IRQ_MOVE_PCNTXT);
- if (trigger)
- set_irq_chip_and_handler_name(irq, &ir_ioapic_chip,
- handle_fasteoi_irq,
- "fasteoi");
- else
- set_irq_chip_and_handler_name(irq, &ir_ioapic_chip,
- handle_edge_irq, "edge");
- return;
+ chip = &ir_ioapic_chip;
+ fasteoi = trigger != 0;
}
- if ((trigger == IOAPIC_AUTO && IO_APIC_irq_trigger(irq)) ||
- trigger == IOAPIC_LEVEL)
- set_irq_chip_and_handler_name(irq, &ioapic_chip,
- handle_fasteoi_irq,
- "fasteoi");
- else
- set_irq_chip_and_handler_name(irq, &ioapic_chip,
- handle_edge_irq, "edge");
+ hdl = fasteoi ? handle_fasteoi_irq : handle_edge_irq;
+ irq_set_chip_and_handler_name(irq, chip, hdl,
+ fasteoi ? "fasteoi" : "edge");
}
static int setup_ioapic_entry(int apic_id, int irq,
@@ -1374,7 +1362,7 @@ static void setup_ioapic_irq(int apic_id, int pin, unsigned int irq,
return;
}
- ioapic_register_intr(irq, trigger);
+ ioapic_register_intr(irq, cfg, trigger);
if (irq < legacy_pic->nr_legacy_irqs)
legacy_pic->mask(irq);
@@ -1385,33 +1373,26 @@ static struct {
DECLARE_BITMAP(pin_programmed, MP_MAX_IOAPIC_PIN + 1);
} mp_ioapic_routing[MAX_IO_APICS];
-static void __init setup_IO_APIC_irqs(void)
+static bool __init io_apic_pin_not_connected(int idx, int apic_id, int pin)
{
- int apic_id, pin, idx, irq, notcon = 0;
- int node = cpu_to_node(0);
- struct irq_cfg *cfg;
+ if (idx != -1)
+ return false;
- apic_printk(APIC_VERBOSE, KERN_DEBUG "init IO_APIC IRQs\n");
+ apic_printk(APIC_VERBOSE, KERN_DEBUG " apic %d pin %d not connected\n",
+ mp_ioapics[apic_id].apicid, pin);
+ return true;
+}
+
+static void __init __io_apic_setup_irqs(unsigned int apic_id)
+{
+ int idx, node = cpu_to_node(0);
+ struct io_apic_irq_attr attr;
+ unsigned int pin, irq;
- for (apic_id = 0; apic_id < nr_ioapics; apic_id++)
for (pin = 0; pin < nr_ioapic_registers[apic_id]; pin++) {
idx = find_irq_entry(apic_id, pin, mp_INT);
- if (idx == -1) {
- if (!notcon) {
- notcon = 1;
- apic_printk(APIC_VERBOSE,
- KERN_DEBUG " %d-%d",
- mp_ioapics[apic_id].apicid, pin);
- } else
- apic_printk(APIC_VERBOSE, " %d-%d",
- mp_ioapics[apic_id].apicid, pin);
+ if (io_apic_pin_not_connected(idx, apic_id, pin))
continue;
- }
- if (notcon) {
- apic_printk(APIC_VERBOSE,
- " (apicid-pin) not connected\n");
- notcon = 0;
- }
irq = pin_2_irq(idx, apic_id, pin);
@@ -1423,25 +1404,24 @@ static void __init setup_IO_APIC_irqs(void)
* installed and if it returns 1:
*/
if (apic->multi_timer_check &&
- apic->multi_timer_check(apic_id, irq))
+ apic->multi_timer_check(apic_id, irq))
continue;
- cfg = alloc_irq_and_cfg_at(irq, node);
- if (!cfg)
- continue;
+ set_io_apic_irq_attr(&attr, apic_id, pin, irq_trigger(idx),
+ irq_polarity(idx));
- add_pin_to_irq_node(cfg, node, apic_id, pin);
- /*
- * don't mark it in pin_programmed, so later acpi could
- * set it correctly when irq < 16
- */
- setup_ioapic_irq(apic_id, pin, irq, cfg, irq_trigger(idx),
- irq_polarity(idx));
+ io_apic_setup_irq_pin(irq, node, &attr);
}
+}
- if (notcon)
- apic_printk(APIC_VERBOSE,
- " (apicid-pin) not connected\n");
+static void __init setup_IO_APIC_irqs(void)
+{
+ unsigned int apic_id;
+
+ apic_printk(APIC_VERBOSE, KERN_DEBUG "init IO_APIC IRQs\n");
+
+ for (apic_id = 0; apic_id < nr_ioapics; apic_id++)
+ __io_apic_setup_irqs(apic_id);
}
/*
@@ -1452,7 +1432,7 @@ static void __init setup_IO_APIC_irqs(void)
void setup_IO_APIC_irq_extra(u32 gsi)
{
int apic_id = 0, pin, idx, irq, node = cpu_to_node(0);
- struct irq_cfg *cfg;
+ struct io_apic_irq_attr attr;
/*
* Convert 'gsi' to 'ioapic.pin'.
@@ -1472,21 +1452,10 @@ void setup_IO_APIC_irq_extra(u32 gsi)
if (apic_id == 0 || irq < NR_IRQS_LEGACY)
return;
- cfg = alloc_irq_and_cfg_at(irq, node);
- if (!cfg)
- return;
-
- add_pin_to_irq_node(cfg, node, apic_id, pin);
-
- if (test_bit(pin, mp_ioapic_routing[apic_id].pin_programmed)) {
- pr_debug("Pin %d-%d already programmed\n",
- mp_ioapics[apic_id].apicid, pin);
- return;
- }
- set_bit(pin, mp_ioapic_routing[apic_id].pin_programmed);
+ set_io_apic_irq_attr(&attr, apic_id, pin, irq_trigger(idx),
+ irq_polarity(idx));
- setup_ioapic_irq(apic_id, pin, irq, cfg,
- irq_trigger(idx), irq_polarity(idx));
+ io_apic_setup_irq_pin_once(irq, node, &attr);
}
/*
@@ -1518,7 +1487,8 @@ static void __init setup_timer_IRQ0_pin(unsigned int apic_id, unsigned int pin,
* The timer IRQ doesn't have to know that behind the
* scene we may have a 8259A-master in AEOI mode ...
*/
- set_irq_chip_and_handler_name(0, &ioapic_chip, handle_edge_irq, "edge");
+ irq_set_chip_and_handler_name(0, &ioapic_chip, handle_edge_irq,
+ "edge");
/*
* Add it to the IO-APIC irq-routing table:
@@ -1625,7 +1595,7 @@ __apicdebuginit(void) print_IO_APIC(void)
for_each_active_irq(irq) {
struct irq_pin_list *entry;
- cfg = get_irq_chip_data(irq);
+ cfg = irq_get_chip_data(irq);
if (!cfg)
continue;
entry = cfg->irq_2_pin;
@@ -2391,7 +2361,7 @@ static void irq_complete_move(struct irq_cfg *cfg)
void irq_force_complete_move(int irq)
{
- struct irq_cfg *cfg = get_irq_chip_data(irq);
+ struct irq_cfg *cfg = irq_get_chip_data(irq);
if (!cfg)
return;
@@ -2405,7 +2375,7 @@ static inline void irq_complete_move(struct irq_cfg *cfg) { }
static void ack_apic_edge(struct irq_data *data)
{
irq_complete_move(data->chip_data);
- move_native_irq(data->irq);
+ irq_move_irq(data);
ack_APIC_irq();
}
@@ -2462,7 +2432,7 @@ static void ack_apic_level(struct irq_data *data)
irq_complete_move(cfg);
#ifdef CONFIG_GENERIC_PENDING_IRQ
/* If we are moving the irq we need to mask it */
- if (unlikely(irq_to_desc(irq)->status & IRQ_MOVE_PENDING)) {
+ if (unlikely(irqd_is_setaffinity_pending(data))) {
do_unmask_irq = 1;
mask_ioapic(cfg);
}
@@ -2551,7 +2521,7 @@ static void ack_apic_level(struct irq_data *data)
* and you can go talk to the chipset vendor about it.
*/
if (!io_apic_level_ack_pending(cfg))
- move_masked_irq(irq);
+ irq_move_masked_irq(data);
unmask_ioapic(cfg);
}
}
@@ -2614,7 +2584,7 @@ static inline void init_IO_APIC_traps(void)
* 0x80, because int 0x80 is hm, kind of importantish. ;)
*/
for_each_active_irq(irq) {
- cfg = get_irq_chip_data(irq);
+ cfg = irq_get_chip_data(irq);
if (IO_APIC_IRQ(irq) && cfg && !cfg->vector) {
/*
* Hmm.. We don't have an entry for this,
@@ -2625,7 +2595,7 @@ static inline void init_IO_APIC_traps(void)
legacy_pic->make_irq(irq);
else
/* Strange. Oh, well.. */
- set_irq_chip(irq, &no_irq_chip);
+ irq_set_chip(irq, &no_irq_chip);
}
}
}
@@ -2665,7 +2635,7 @@ static struct irq_chip lapic_chip __read_mostly = {
static void lapic_register_intr(int irq)
{
irq_clear_status_flags(irq, IRQ_LEVEL);
- set_irq_chip_and_handler_name(irq, &lapic_chip, handle_edge_irq,
+ irq_set_chip_and_handler_name(irq, &lapic_chip, handle_edge_irq,
"edge");
}
@@ -2749,7 +2719,7 @@ int timer_through_8259 __initdata;
*/
static inline void __init check_timer(void)
{
- struct irq_cfg *cfg = get_irq_chip_data(0);
+ struct irq_cfg *cfg = irq_get_chip_data(0);
int node = cpu_to_node(0);
int apic1, pin1, apic2, pin2;
unsigned long flags;
@@ -3060,7 +3030,7 @@ unsigned int create_irq_nr(unsigned int from, int node)
raw_spin_unlock_irqrestore(&vector_lock, flags);
if (ret) {
- set_irq_chip_data(irq, cfg);
+ irq_set_chip_data(irq, cfg);
irq_clear_status_flags(irq, IRQ_NOREQUEST);
} else {
free_irq_at(irq, cfg);
@@ -3085,7 +3055,7 @@ int create_irq(void)
void destroy_irq(unsigned int irq)
{
- struct irq_cfg *cfg = get_irq_chip_data(irq);
+ struct irq_cfg *cfg = irq_get_chip_data(irq);
unsigned long flags;
irq_set_status_flags(irq, IRQ_NOREQUEST|IRQ_NOPROBE);
@@ -3119,7 +3089,7 @@ static int msi_compose_msg(struct pci_dev *pdev, unsigned int irq,
dest = apic->cpu_mask_to_apicid_and(cfg->domain, apic->target_cpus());
- if (irq_remapped(get_irq_chip_data(irq))) {
+ if (irq_remapped(cfg)) {
struct irte irte;
int ir_index;
u16 sub_handle;
@@ -3291,6 +3261,7 @@ static int msi_alloc_irte(struct pci_dev *dev, int irq, int nvec)
static int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc, int irq)
{
+ struct irq_chip *chip = &msi_chip;
struct msi_msg msg;
int ret;
@@ -3298,14 +3269,15 @@ static int setup_msi_irq(struct pci_dev *dev, struct msi_desc *msidesc, int irq)
if (ret < 0)
return ret;
- set_irq_msi(irq, msidesc);
+ irq_set_msi_desc(irq, msidesc);
write_msi_msg(irq, &msg);
- if (irq_remapped(get_irq_chip_data(irq))) {
+ if (irq_remapped(irq_get_chip_data(irq))) {
irq_set_status_flags(irq, IRQ_MOVE_PCNTXT);
- set_irq_chip_and_handler_name(irq, &msi_ir_chip, handle_edge_irq, "edge");
- } else
- set_irq_chip_and_handler_name(irq, &msi_chip, handle_edge_irq, "edge");
+ chip = &msi_ir_chip;
+ }
+
+ irq_set_chip_and_handler_name(irq, chip, handle_edge_irq, "edge");
dev_printk(KERN_DEBUG, &dev->dev, "irq %d for MSI/MSI-X\n", irq);
@@ -3423,8 +3395,8 @@ int arch_setup_dmar_msi(unsigned int irq)
if (ret < 0)
return ret;
dmar_msi_write(irq, &msg);
- set_irq_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
- "edge");
+ irq_set_chip_and_handler_name(irq, &dmar_msi_type, handle_edge_irq,
+ "edge");
return 0;
}
#endif
@@ -3482,6 +3454,7 @@ static struct irq_chip hpet_msi_type = {
int arch_setup_hpet_msi(unsigned int irq, unsigned int id)
{
+ struct irq_chip *chip = &hpet_msi_type;
struct msi_msg msg;
int ret;
@@ -3501,15 +3474,12 @@ int arch_setup_hpet_msi(unsigned int irq, unsigned int id)
if (ret < 0)
return ret;
- hpet_msi_write(get_irq_data(irq), &msg);
+ hpet_msi_write(irq_get_handler_data(irq), &msg);
irq_set_status_flags(irq, IRQ_MOVE_PCNTXT);
- if (irq_remapped(get_irq_chip_data(irq)))
- set_irq_chip_and_handler_name(irq, &ir_hpet_msi_type,
- handle_edge_irq, "edge");
- else
- set_irq_chip_and_handler_name(irq, &hpet_msi_type,
- handle_edge_irq, "edge");
+ if (irq_remapped(irq_get_chip_data(irq)))
+ chip = &ir_hpet_msi_type;
+ irq_set_chip_and_handler_name(irq, chip, handle_edge_irq, "edge");
return 0;
}
#endif
@@ -3596,7 +3566,7 @@ int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
write_ht_irq_msg(irq, &msg);
- set_irq_chip_and_handler_name(irq, &ht_irq_chip,
+ irq_set_chip_and_handler_name(irq, &ht_irq_chip,
handle_edge_irq, "edge");
dev_printk(KERN_DEBUG, &dev->dev, "irq %d for HT\n", irq);
@@ -3605,7 +3575,40 @@ int arch_setup_ht_irq(unsigned int irq, struct pci_dev *dev)
}
#endif /* CONFIG_HT_IRQ */
-int __init io_apic_get_redir_entries (int ioapic)
+int
+io_apic_setup_irq_pin(unsigned int irq, int node, struct io_apic_irq_attr *attr)
+{
+ struct irq_cfg *cfg = alloc_irq_and_cfg_at(irq, node);
+ int ret;
+
+ if (!cfg)
+ return -EINVAL;
+ ret = __add_pin_to_irq_node(cfg, node, attr->ioapic, attr->ioapic_pin);
+ if (!ret)
+ setup_ioapic_irq(attr->ioapic, attr->ioapic_pin, irq, cfg,
+ attr->trigger, attr->polarity);
+ return ret;
+}
+
+static int io_apic_setup_irq_pin_once(unsigned int irq, int node,
+ struct io_apic_irq_attr *attr)
+{
+ unsigned int id = attr->ioapic, pin = attr->ioapic_pin;
+ int ret;
+
+ /* Avoid redundant programming */
+ if (test_bit(pin, mp_ioapic_routing[id].pin_programmed)) {
+ pr_debug("Pin %d-%d already programmed\n",
+ mp_ioapics[id].apicid, pin);
+ return 0;
+ }
+ ret = io_apic_setup_irq_pin(irq, node, attr);
+ if (!ret)
+ set_bit(pin, mp_ioapic_routing[id].pin_programmed);
+ return ret;
+}
+
+static int __init io_apic_get_redir_entries(int ioapic)
{
union IO_APIC_reg_01 reg_01;
unsigned long flags;
@@ -3659,96 +3662,24 @@ int __init arch_probe_nr_irqs(void)
}
#endif
-static int __io_apic_set_pci_routing(struct device *dev, int irq,
- struct io_apic_irq_attr *irq_attr)
+int io_apic_set_pci_routing(struct device *dev, int irq,
+ struct io_apic_irq_attr *irq_attr)
{
- struct irq_cfg *cfg;
int node;
- int ioapic, pin;
- int trigger, polarity;
- ioapic = irq_attr->ioapic;
if (!IO_APIC_IRQ(irq)) {
apic_printk(APIC_QUIET,KERN_ERR "IOAPIC[%d]: Invalid reference to IRQ 0\n",
- ioapic);
+ irq_attr->ioapic);
return -EINVAL;
}
- if (dev)
- node = dev_to_node(dev);
- else
- node = cpu_to_node(0);
-
- cfg = alloc_irq_and_cfg_at(irq, node);
- if (!cfg)
- return 0;
-
- pin = irq_attr->ioapic_pin;
- trigger = irq_attr->trigger;
- polarity = irq_attr->polarity;
+ node = dev ? dev_to_node(dev) : cpu_to_node(0);
- /*
- * IRQs < 16 are already in the irq_2_pin[] map
- */
- if (irq >= legacy_pic->nr_legacy_irqs) {
- if (__add_pin_to_irq_node(cfg, node, ioapic, pin)) {
- printk(KERN_INFO "can not add pin %d for irq %d\n",
- pin, irq);
- return 0;
- }
- }
-
- setup_ioapic_irq(ioapic, pin, irq, cfg, trigger, polarity);
-
- return 0;
+ return io_apic_setup_irq_pin_once(irq, node, irq_attr);
}
-int io_apic_set_pci_routing(struct device *dev, int irq,
- struct io_apic_irq_attr *irq_attr)
-{
- int ioapic, pin;
- /*
- * Avoid pin reprogramming. PRTs typically include entries
- * with redundant pin->gsi mappings (but unique PCI devices);
- * we only program the IOAPIC on the first.
- */
- ioapic = irq_attr->ioapic;
- pin = irq_attr->ioapic_pin;
- if (test_bit(pin, mp_ioapic_routing[ioapic].pin_programmed)) {
- pr_debug("Pin %d-%d already programmed\n",
- mp_ioapics[ioapic].apicid, pin);
- return 0;
- }
- set_bit(pin, mp_ioapic_routing[ioapic].pin_programmed);
-
- return __io_apic_set_pci_routing(dev, irq, irq_attr);
-}
-
-u8 __init io_apic_unique_id(u8 id)
-{
#ifdef CONFIG_X86_32
- if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
- !APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
- return io_apic_get_unique_id(nr_ioapics, id);
- else
- return id;
-#else
- int i;
- DECLARE_BITMAP(used, 256);
-
- bitmap_zero(used, 256);
- for (i = 0; i < nr_ioapics; i++) {
- struct mpc_ioapic *ia = &mp_ioapics[i];
- __set_bit(ia->apicid, used);
- }
- if (!test_bit(id, used))
- return id;
- return find_first_zero_bit(used, 256);
-#endif
-}
-
-#ifdef CONFIG_X86_32
-int __init io_apic_get_unique_id(int ioapic, int apic_id)
+static int __init io_apic_get_unique_id(int ioapic, int apic_id)
{
union IO_APIC_reg_00 reg_00;
static physid_mask_t apic_id_map = PHYSID_MASK_NONE;
@@ -3821,9 +3752,33 @@ int __init io_apic_get_unique_id(int ioapic, int apic_id)
return apic_id;
}
+
+static u8 __init io_apic_unique_id(u8 id)
+{
+ if ((boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
+ !APIC_XAPIC(apic_version[boot_cpu_physical_apicid]))
+ return io_apic_get_unique_id(nr_ioapics, id);
+ else
+ return id;
+}
+#else
+static u8 __init io_apic_unique_id(u8 id)
+{
+ int i;
+ DECLARE_BITMAP(used, 256);
+
+ bitmap_zero(used, 256);
+ for (i = 0; i < nr_ioapics; i++) {
+ struct mpc_ioapic *ia = &mp_ioapics[i];
+ __set_bit(ia->apicid, used);
+ }
+ if (!test_bit(id, used))
+ return id;
+ return find_first_zero_bit(used, 256);
+}
#endif
-int __init io_apic_get_version(int ioapic)
+static int __init io_apic_get_version(int ioapic)
{
union IO_APIC_reg_01 reg_01;
unsigned long flags;
@@ -3868,8 +3823,8 @@ int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity)
void __init setup_ioapic_dest(void)
{
int pin, ioapic, irq, irq_entry;
- struct irq_desc *desc;
const struct cpumask *mask;
+ struct irq_data *idata;
if (skip_ioapic_setup == 1)
return;
@@ -3884,21 +3839,20 @@ void __init setup_ioapic_dest(void)
if ((ioapic > 0) && (irq > 16))
continue;
- desc = irq_to_desc(irq);
+ idata = irq_get_irq_data(irq);
/*
* Honour affinities which have been set in early boot
*/
- if (desc->status &
- (IRQ_NO_BALANCING | IRQ_AFFINITY_SET))
- mask = desc->irq_data.affinity;
+ if (!irqd_can_balance(idata) || irqd_affinity_was_set(idata))
+ mask = idata->affinity;
else
mask = apic->target_cpus();
if (intr_remapping_enabled)
- ir_ioapic_set_affinity(&desc->irq_data, mask, false);
+ ir_ioapic_set_affinity(idata, mask, false);
else
- ioapic_set_affinity(&desc->irq_data, mask, false);
+ ioapic_set_affinity(idata, mask, false);
}
}
@@ -4026,7 +3980,7 @@ int mp_find_ioapic_pin(int ioapic, u32 gsi)
return gsi - mp_gsi_routing[ioapic].gsi_base;
}
-static int bad_ioapic(unsigned long address)
+static __init int bad_ioapic(unsigned long address)
{
if (nr_ioapics >= MAX_IO_APICS) {
printk(KERN_WARNING "WARING: Max # of I/O APICs (%d) exceeded "
@@ -4086,20 +4040,16 @@ void __init mp_register_ioapic(int id, u32 address, u32 gsi_base)
/* Enable IOAPIC early just for system timer */
void __init pre_init_apic_IRQ0(void)
{
- struct irq_cfg *cfg;
+ struct io_apic_irq_attr attr = { 0, 0, 0, 0 };
printk(KERN_INFO "Early APIC setup for system timer0\n");
#ifndef CONFIG_SMP
physid_set_mask_of_physid(boot_cpu_physical_apicid,
&phys_cpu_present_map);
#endif
- /* Make sure the irq descriptor is set up */
- cfg = alloc_irq_and_cfg_at(0, 0);
-
setup_local_APIC();
- add_pin_to_irq_node(cfg, 0, 0, 0);
- set_irq_chip_and_handler_name(0, &ioapic_chip, handle_edge_irq, "edge");
-
- setup_ioapic_irq(0, 0, 0, cfg, 0, 0);
+ io_apic_setup_irq_pin(0, 0, &attr);
+ irq_set_chip_and_handler_name(0, &ioapic_chip, handle_edge_irq,
+ "edge");
}
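
For reference, the io_apic.c hunks above collapse the old open-coded "allocate irq_cfg, add the pin, guard against double programming" sequences into the new io_apic_setup_irq_pin() / io_apic_setup_irq_pin_once() helpers. The following is only a hedged sketch of the resulting caller pattern, reusing names from the hunks (set_io_apic_irq_attr, irq_trigger, irq_polarity); the wrapper function name is hypothetical and the snippet assumes kernel context, it is not part of the patch.

	/* Sketch (hypothetical wrapper): route one IO-APIC pin via the new helpers. */
	static void sketch_route_pin(unsigned int irq, int node,
				     int apic_id, int pin, int idx)
	{
		struct io_apic_irq_attr attr;

		/* Pack IO-APIC id, pin, trigger and polarity into one descriptor. */
		set_io_apic_irq_attr(&attr, apic_id, pin,
				     irq_trigger(idx), irq_polarity(idx));

		/*
		 * Allocates the irq_cfg, adds the pin and programs the routing
		 * entry exactly once; pins already marked in
		 * mp_ioapic_routing[].pin_programmed are skipped with a debug
		 * message, as in the hunk above.
		 */
		io_apic_setup_irq_pin_once(irq, node, &attr);
	}
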
diff --git a/trunk/arch/x86/kernel/apic/ipi.c b/trunk/arch/x86/kernel/apic/ipi.c
index 08385e090a6f..cce91bf26676 100644
--- a/trunk/arch/x86/kernel/apic/ipi.c
+++ b/trunk/arch/x86/kernel/apic/ipi.c
@@ -56,6 +56,8 @@ void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
local_irq_restore(flags);
}
+#ifdef CONFIG_X86_32
+
void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
int vector)
{
@@ -71,8 +73,8 @@ void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
local_irq_save(flags);
for_each_cpu(query_cpu, mask)
__default_send_IPI_dest_field(
- apic->cpu_to_logical_apicid(query_cpu), vector,
- apic->dest_logical);
+ early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
+ vector, apic->dest_logical);
local_irq_restore(flags);
}
@@ -90,14 +92,12 @@ void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
if (query_cpu == this_cpu)
continue;
__default_send_IPI_dest_field(
- apic->cpu_to_logical_apicid(query_cpu), vector,
- apic->dest_logical);
+ early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
+ vector, apic->dest_logical);
}
local_irq_restore(flags);
}
-#ifdef CONFIG_X86_32
-
/*
* This is only used on smaller machines.
*/
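
The ipi.c hunks above make the logical-destination IPI paths 32-bit only and replace the apic->cpu_to_logical_apicid() callback with a read of the precomputed early per-CPU value. A hedged sketch of that lookup pattern follows; the function name is hypothetical and the body is condensed from default_send_IPI_mask_sequence_logical() as changed above, not a drop-in replacement.

	/* Sketch (hypothetical helper): send a vector to every CPU in a mask. */
	static void sketch_send_logical(const struct cpumask *mask, int vector)
	{
		unsigned long flags;
		unsigned int cpu;

		local_irq_save(flags);
		for_each_cpu(cpu, mask)
			/*
			 * The logical APIC ID now comes from early per-CPU
			 * data rather than a per-apic callback.
			 */
			__default_send_IPI_dest_field(
				early_per_cpu(x86_cpu_to_logical_apicid, cpu),
				vector, apic->dest_logical);
		local_irq_restore(flags);
	}
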
diff --git a/trunk/arch/x86/kernel/apic/numaq_32.c b/trunk/arch/x86/kernel/apic/numaq_32.c
index 960f26ab5c9f..6273eee5134b 100644
--- a/trunk/arch/x86/kernel/apic/numaq_32.c
+++ b/trunk/arch/x86/kernel/apic/numaq_32.c
@@ -373,13 +373,6 @@ static inline void numaq_ioapic_phys_id_map(physid_mask_t *phys_map, physid_mask
return physids_promote(0xFUL, retmap);
}
-static inline int numaq_cpu_to_logical_apicid(int cpu)
-{
- if (cpu >= nr_cpu_ids)
- return BAD_APICID;
- return cpu_2_logical_apicid[cpu];
-}
-
/*
* Supporting over 60 cpus on NUMA-Q requires a locality-dependent
* cpu to APIC ID relation to properly interact with the intelligent
@@ -398,6 +391,15 @@ static inline int numaq_apicid_to_node(int logical_apicid)
return logical_apicid >> 4;
}
+static int numaq_numa_cpu_node(int cpu)
+{
+ int logical_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
+
+ if (logical_apicid != BAD_APICID)
+ return numaq_apicid_to_node(logical_apicid);
+ return NUMA_NO_NODE;
+}
+
static void numaq_apicid_to_cpu_present(int logical_apicid, physid_mask_t *retmap)
{
int node = numaq_apicid_to_node(logical_apicid);
@@ -508,8 +510,6 @@ struct apic __refdata apic_numaq = {
.ioapic_phys_id_map = numaq_ioapic_phys_id_map,
.setup_apic_routing = numaq_setup_apic_routing,
.multi_timer_check = numaq_multi_timer_check,
- .apicid_to_node = numaq_apicid_to_node,
- .cpu_to_logical_apicid = numaq_cpu_to_logical_apicid,
.cpu_present_to_apicid = numaq_cpu_present_to_apicid,
.apicid_to_cpu_present = numaq_apicid_to_cpu_present,
.setup_portio_remap = numaq_setup_portio_remap,
@@ -547,4 +547,7 @@ struct apic __refdata apic_numaq = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = noop_x86_32_early_logical_apicid,
+ .x86_32_numa_cpu_node = numaq_numa_cpu_node,
};
diff --git a/trunk/arch/x86/kernel/apic/probe_32.c b/trunk/arch/x86/kernel/apic/probe_32.c
index 99d2fe016084..fc84c7b61108 100644
--- a/trunk/arch/x86/kernel/apic/probe_32.c
+++ b/trunk/arch/x86/kernel/apic/probe_32.c
@@ -77,6 +77,11 @@ void __init default_setup_apic_routing(void)
apic->setup_apic_routing();
}
+static int default_x86_32_early_logical_apicid(int cpu)
+{
+ return 1 << cpu;
+}
+
static void setup_apic_flat_routing(void)
{
#ifdef CONFIG_X86_IO_APIC
@@ -130,8 +135,6 @@ struct apic apic_default = {
.ioapic_phys_id_map = default_ioapic_phys_id_map,
.setup_apic_routing = setup_apic_flat_routing,
.multi_timer_check = NULL,
- .apicid_to_node = default_apicid_to_node,
- .cpu_to_logical_apicid = default_cpu_to_logical_apicid,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = physid_set_mask_of_physid,
.setup_portio_remap = NULL,
@@ -167,6 +170,9 @@ struct apic apic_default = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = default_x86_32_early_logical_apicid,
+ .x86_32_numa_cpu_node = default_x86_32_numa_cpu_node,
};
extern struct apic apic_numaq;
diff --git a/trunk/arch/x86/kernel/apic/summit_32.c b/trunk/arch/x86/kernel/apic/summit_32.c
index 9b419263d90d..e4b8059b414a 100644
--- a/trunk/arch/x86/kernel/apic/summit_32.c
+++ b/trunk/arch/x86/kernel/apic/summit_32.c
@@ -194,11 +194,10 @@ static unsigned long summit_check_apicid_present(int bit)
return 1;
}
-static void summit_init_apic_ldr(void)
+static int summit_early_logical_apicid(int cpu)
{
- unsigned long val, id;
int count = 0;
- u8 my_id = (u8)hard_smp_processor_id();
+ u8 my_id = early_per_cpu(x86_cpu_to_apicid, cpu);
u8 my_cluster = APIC_CLUSTER(my_id);
#ifdef CONFIG_SMP
u8 lid;
@@ -206,7 +205,7 @@ static void summit_init_apic_ldr(void)
/* Create logical APIC IDs by counting CPUs already in cluster. */
for (count = 0, i = nr_cpu_ids; --i >= 0; ) {
- lid = cpu_2_logical_apicid[i];
+ lid = early_per_cpu(x86_cpu_to_logical_apicid, i);
if (lid != BAD_APICID && APIC_CLUSTER(lid) == my_cluster)
++count;
}
@@ -214,7 +213,15 @@ static void summit_init_apic_ldr(void)
/* We only have a 4 wide bitmap in cluster mode. If a deranged
* BIOS puts 5 CPUs in one APIC cluster, we're hosed. */
BUG_ON(count >= XAPIC_DEST_CPUS_SHIFT);
- id = my_cluster | (1UL << count);
+ return my_cluster | (1UL << count);
+}
+
+static void summit_init_apic_ldr(void)
+{
+ int cpu = smp_processor_id();
+ unsigned long id = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
+ unsigned long val;
+
apic_write(APIC_DFR, SUMMIT_APIC_DFR_VALUE);
val = apic_read(APIC_LDR) & ~APIC_LDR_MASK;
val |= SET_APIC_LOGICAL_ID(id);
@@ -232,27 +239,6 @@ static void summit_setup_apic_routing(void)
nr_ioapics);
}
-static int summit_apicid_to_node(int logical_apicid)
-{
-#ifdef CONFIG_SMP
- return apicid_2_node[hard_smp_processor_id()];
-#else
- return 0;
-#endif
-}
-
-/* Mapping from cpu number to logical apicid */
-static inline int summit_cpu_to_logical_apicid(int cpu)
-{
-#ifdef CONFIG_SMP
- if (cpu >= nr_cpu_ids)
- return BAD_APICID;
- return cpu_2_logical_apicid[cpu];
-#else
- return logical_smp_processor_id();
-#endif
-}
-
static int summit_cpu_present_to_apicid(int mps_cpu)
{
if (mps_cpu < nr_cpu_ids)
@@ -286,7 +272,7 @@ static unsigned int summit_cpu_mask_to_apicid(const struct cpumask *cpumask)
* The cpus in the mask must all be on the apic cluster.
*/
for_each_cpu(cpu, cpumask) {
- int new_apicid = summit_cpu_to_logical_apicid(cpu);
+ int new_apicid = early_per_cpu(x86_cpu_to_logical_apicid, cpu);
if (round && APIC_CLUSTER(apicid) != APIC_CLUSTER(new_apicid)) {
printk("%s: Not a valid mask!\n", __func__);
@@ -301,7 +287,7 @@ static unsigned int summit_cpu_mask_to_apicid(const struct cpumask *cpumask)
static unsigned int summit_cpu_mask_to_apicid_and(const struct cpumask *inmask,
const struct cpumask *andmask)
{
- int apicid = summit_cpu_to_logical_apicid(0);
+ int apicid = early_per_cpu(x86_cpu_to_logical_apicid, 0);
cpumask_var_t cpumask;
if (!alloc_cpumask_var(&cpumask, GFP_ATOMIC))
@@ -528,8 +514,6 @@ struct apic apic_summit = {
.ioapic_phys_id_map = summit_ioapic_phys_id_map,
.setup_apic_routing = summit_setup_apic_routing,
.multi_timer_check = NULL,
- .apicid_to_node = summit_apicid_to_node,
- .cpu_to_logical_apicid = summit_cpu_to_logical_apicid,
.cpu_present_to_apicid = summit_cpu_present_to_apicid,
.apicid_to_cpu_present = summit_apicid_to_cpu_present,
.setup_portio_remap = NULL,
@@ -565,4 +549,7 @@ struct apic apic_summit = {
.icr_write = native_apic_icr_write,
.wait_icr_idle = native_apic_wait_icr_idle,
.safe_wait_icr_idle = native_safe_apic_wait_icr_idle,
+
+ .x86_32_early_logical_apicid = summit_early_logical_apicid,
+ .x86_32_numa_cpu_node = default_x86_32_numa_cpu_node,
};
diff --git a/trunk/arch/x86/kernel/apic/x2apic_cluster.c b/trunk/arch/x86/kernel/apic/x2apic_cluster.c
index cf69c59f4910..90949bbd566d 100644
--- a/trunk/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/trunk/arch/x86/kernel/apic/x2apic_cluster.c
@@ -206,8 +206,6 @@ struct apic apic_x2apic_cluster = {
.ioapic_phys_id_map = NULL,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = NULL,
- .cpu_to_logical_apicid = NULL,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = NULL,
.setup_portio_remap = NULL,
diff --git a/trunk/arch/x86/kernel/apic/x2apic_phys.c b/trunk/arch/x86/kernel/apic/x2apic_phys.c
index 8972f38c5ced..c7e6d6645bf4 100644
--- a/trunk/arch/x86/kernel/apic/x2apic_phys.c
+++ b/trunk/arch/x86/kernel/apic/x2apic_phys.c
@@ -195,8 +195,6 @@ struct apic apic_x2apic_phys = {
.ioapic_phys_id_map = NULL,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = NULL,
- .cpu_to_logical_apicid = NULL,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = NULL,
.setup_portio_remap = NULL,
diff --git a/trunk/arch/x86/kernel/apic/x2apic_uv_x.c b/trunk/arch/x86/kernel/apic/x2apic_uv_x.c
index bd16b58b8850..3c289281394c 100644
--- a/trunk/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/trunk/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -338,8 +338,6 @@ struct apic __refdata apic_x2apic_uv_x = {
.ioapic_phys_id_map = NULL,
.setup_apic_routing = NULL,
.multi_timer_check = NULL,
- .apicid_to_node = NULL,
- .cpu_to_logical_apicid = NULL,
.cpu_present_to_apicid = default_cpu_present_to_apicid,
.apicid_to_cpu_present = NULL,
.setup_portio_remap = NULL,
diff --git a/trunk/arch/x86/kernel/apm_32.c b/trunk/arch/x86/kernel/apm_32.c
index b929108eb58f..9079926a5b18 100644
--- a/trunk/arch/x86/kernel/apm_32.c
+++ b/trunk/arch/x86/kernel/apm_32.c
@@ -227,6 +227,7 @@
#include
#include
#include
+#include
#include
#include
@@ -2321,12 +2322,11 @@ static int __init apm_init(void)
apm_info.disabled = 1;
return -ENODEV;
}
- if (pm_flags & PM_ACPI) {
+ if (!acpi_disabled) {
printk(KERN_NOTICE "apm: overridden by ACPI.\n");
apm_info.disabled = 1;
return -ENODEV;
}
- pm_flags |= PM_APM;
/*
* Set up the long jump entry point to the APM BIOS, which is called
@@ -2418,7 +2418,6 @@ static void __exit apm_exit(void)
kthread_stop(kapmd_task);
kapmd_task = NULL;
}
- pm_flags &= ~PM_APM;
}
module_init(apm_init);
diff --git a/trunk/arch/x86/kernel/asm-offsets.c b/trunk/arch/x86/kernel/asm-offsets.c
index cfa82c899f47..4f13fafc5264 100644
--- a/trunk/arch/x86/kernel/asm-offsets.c
+++ b/trunk/arch/x86/kernel/asm-offsets.c
@@ -1,5 +1,70 @@
+/*
+ * Generate definitions needed by assembly language modules.
+ * This code generates raw asm output which is post-processed to extract
+ * and format the required data.
+ */
+#define COMPILE_OFFSETS
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include