Commit

---
r: 257119
b: refs/heads/master
c: 492f73a
h: refs/heads/master
i:
  257117: ace7f4f
  257115: c1bc787
  257111: 2626085
  257103: 3bc7e4c
  257087: f4a6645
v: v3
Ingo Molnar committed Jul 21, 2011
1 parent 6ff7061 commit adf5456
Showing 275 changed files with 3,760 additions and 2,420 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: f7bc8b61f65726ff98f52e286b28e294499d7a08
refs/heads/master: 492f73a303b488ffd67097b2351d54aa6e6c7c73
67 changes: 14 additions & 53 deletions trunk/Documentation/power/devices.txt
@@ -520,59 +520,20 @@ Support for power domains is provided through the pwr_domain field of struct
device. This field is a pointer to an object of type struct dev_power_domain,
defined in include/linux/pm.h, providing a set of power management callbacks
analogous to the subsystem-level and device driver callbacks that are executed
for the given device during all power transitions, in addition to the respective
subsystem-level callbacks. Specifically, the power domain "suspend" callbacks
(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
executed after the analogous subsystem-level callbacks, while the power domain
"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
etc.) are executed before the analogous subsystem-level callbacks. Error codes
returned by the "suspend" and "resume" power domain callbacks are ignored.

The power domain ->runtime_idle() callback is executed before the
subsystem-level ->runtime_idle() callback, and the result it returns is not
ignored. Namely, if it returns an error code, the subsystem-level
->runtime_idle() callback will not be called and the helper function rpm_idle()
executing it will return that error code. This mechanism is intended to help
platforms where saving device state is a time-consuming operation that should
only be carried out if all devices in the power domain are idle, before turning
off the shared power resource(s). Specifically, the power domain
->runtime_idle() callback may return an error code until the pm_runtime_idle()
helper (or its asynchronous version) has been called for all devices in the
power domain (it is recommended that the returned error code be -EBUSY in those
cases), preventing the subsystem-level ->runtime_idle() callback from being run
prematurely.
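As a rough illustration of the mechanism described above (as it stood before
this commit), a platform's power domain ->runtime_idle() callback might report
-EBUSY until every device covered by the domain has gone idle. The domain
bookkeeping structure and its device list below are invented names; only
pm_runtime_suspended() and the -EBUSY convention come from the text above.

	/*
	 * Hypothetical platform code sketching the rule described above:
	 * keep returning -EBUSY so that the subsystem-level ->runtime_idle()
	 * is not run until every device sharing the domain's power resource
	 * has gone idle.  "my_domain" and its device list are made up.
	 */
	#include <linux/device.h>
	#include <linux/pm_runtime.h>

	struct my_domain_data {			/* hypothetical bookkeeping */
		struct device **devs;
		int ndevs;
	};

	static struct my_domain_data my_domain;

	static int my_domain_runtime_idle(struct device *dev)
	{
		int i;

		/* Approximate "all devices idle" by checking that every other
		 * device in the domain has already been runtime-suspended. */
		for (i = 0; i < my_domain.ndevs; i++)
			if (my_domain.devs[i] != dev &&
			    !pm_runtime_suspended(my_domain.devs[i]))
				return -EBUSY;	/* subsystem callback skipped */

		return 0;	/* all idle: let the subsystem-level callback run */
	}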

The support for device power domains is only relevant to platforms needing to
use the same subsystem-level (e.g. platform bus type) and device driver power
management callbacks in many different power domain configurations and wanting
to avoid incorporating the support for power domains into the subsystem-level
callbacks. The other platforms need not implement it or take it into account
in any way.


System Devices
--------------
System devices (sysdevs) follow a slightly different API, which can be found in

include/linux/sysdev.h
drivers/base/sys.c

System devices will be suspended with interrupts disabled, and after all other
devices have been suspended. On resume, they will be resumed before any other
devices, and also with interrupts disabled. These things occur in special
"sysdev_driver" phases, which affect only system devices.

Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when
the non-boot CPUs are all offline and IRQs are disabled on the remaining online
CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a
sleep state (or a system image is created). During resume (or after the image
has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs
are enabled on the only online CPU, the non-boot CPUs are enabled, and the
resume_noirq (or thaw_noirq or restore_noirq) phase begins.

Code to actually enter and exit the system-wide low power state sometimes
involves hardware details that are only known to the boot firmware, and
may leave a CPU running software (from SRAM or flash memory) that monitors
the system and manages its wakeup sequence.
for the given device during all power transitions, instead of the respective
subsystem-level callbacks. Specifically, if a device's pm_domain pointer is
not NULL, the ->suspend() callback from the object pointed to by it will be
executed instead of its subsystem's (e.g. bus type's) ->suspend() callback and
analogously for all of the remaining callbacks. In other words, power management
domain callbacks, if defined for the given device, always take precedence over
the callbacks provided by the device's subsystem (e.g. bus type).
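As a toy model of the precedence rule just described (not the real PM core
implementation), the selection of callbacks can be pictured as below; every
name in this sketch is illustrative.

	/*
	 * Toy model of the precedence rule described above: if a device has a
	 * PM domain, its callbacks are used and the subsystem's (e.g. bus
	 * type's) are not.  All names here are illustrative.
	 */
	struct pm_ops_model {
		int (*suspend)(void *dev);
	};

	struct device_model {
		const struct pm_ops_model *subsys_ops;	/* e.g. bus type callbacks */
		const struct pm_ops_model *domain_ops;	/* may be NULL */
	};

	static int model_suspend(struct device_model *dev)
	{
		/* Domain callbacks, when present, take precedence. */
		const struct pm_ops_model *ops =
			dev->domain_ops ? dev->domain_ops : dev->subsys_ops;

		if (ops && ops->suspend)
			return ops->suspend(dev);
		return 0;
	}

The same selection applies analogously to ->resume(), ->runtime_suspend() and
the rest of the callbacks.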

The support for device power management domains is only relevant to platforms
needing to use the same device driver power management callbacks in many
different power domain configurations and wanting to avoid incorporating the
support for power domains into subsystem-level callbacks, for example by
modifying the platform bus type. Other platforms need not implement it or take
it into account in any way.


Device Low Power (suspend) States
5 changes: 0 additions & 5 deletions trunk/Documentation/power/runtime_pm.txt
@@ -566,11 +566,6 @@ to do this is:
pm_runtime_set_active(dev);
pm_runtime_enable(dev);

The PM core always increments the run-time usage counter before calling the
->prepare() callback and decrements it after calling the ->complete() callback.
Hence disabling run-time PM temporarily like this will not cause any run-time
suspend callbacks to be lost.
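For illustration only, the sequence quoted above would typically sit in a
driver's system-resume path, roughly as in the sketch below. The driver name
and the hardware re-init helper are hypothetical, and the preceding
pm_runtime_disable() call is assumed as the usual companion to the two calls
shown above rather than quoted from this hunk.

	/*
	 * Hypothetical driver system-resume callback using the sequence quoted
	 * above to tell the runtime PM core that the hardware is active again.
	 * my_hw_reinit() is an invented placeholder for device-specific code.
	 */
	#include <linux/device.h>
	#include <linux/pm_runtime.h>

	static void my_hw_reinit(struct device *dev);	/* hypothetical */

	static int my_driver_resume(struct device *dev)
	{
		my_hw_reinit(dev);		/* bring the hardware back up */

		pm_runtime_disable(dev);	/* assumed companion call */
		pm_runtime_set_active(dev);	/* status now matches the hardware */
		pm_runtime_enable(dev);

		return 0;
	}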

7. Generic subsystem callbacks

Subsystems may wish to conserve code space by using the set of generic power
3 changes: 2 additions & 1 deletion trunk/Makefile
@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc4
EXTRAVERSION = -rc5
NAME = Sneaky Weasel

# *DOCUMENTATION*
@@ -1290,6 +1290,7 @@ help:
@echo ' make O=dir [targets] Locate all output files in "dir", including .config'
@echo ' make C=1 [targets] Check all c source with $$CHECK (sparse by default)'
@echo ' make C=2 [targets] Force check of all c source with $$CHECK'
@echo ' make RECORDMCOUNT_WARN=1 [targets] Warn about ignored mcount sections'
@echo ' make W=n [targets] Enable extra gcc checks, n=1,2,3 where'
@echo ' 1: warnings which may be relevant and do not occur too often'
@echo ' 2: warnings which occur quite often but may still be relevant'
1 change: 0 additions & 1 deletion trunk/arch/alpha/include/asm/mmzone.h
@@ -56,7 +56,6 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
* Given a kernel address, find the home node of the underlying memory.
*/
#define kvaddr_to_nid(kaddr) pa_to_nid(__pa(kaddr))
#define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn)

/*
* Given a kaddr, LOCAL_BASE_ADDR finds the owning node of the memory
2 changes: 1 addition & 1 deletion trunk/arch/alpha/kernel/perf_event.c
@@ -847,7 +847,7 @@ static void alpha_perf_event_irq_handler(unsigned long la_ptr,
data.period = event->hw.last_period;

if (alpha_perf_event_set_period(event, hwc, idx)) {
if (perf_event_overflow(event, 1, &data, regs)) {
if (perf_event_overflow(event, &data, regs)) {
/* Interrupts coming too quickly; "throttle" the
* counter, i.e., disable it for a little while.
*/
2 changes: 1 addition & 1 deletion trunk/arch/alpha/kernel/time.c
@@ -91,7 +91,7 @@ DEFINE_PER_CPU(u8, irq_work_pending);
#define test_irq_work_pending() __get_cpu_var(irq_work_pending)
#define clear_irq_work_pending() __get_cpu_var(irq_work_pending) = 0

void set_irq_work_pending(void)
void arch_irq_work_raise(void)
{
set_irq_work_pending_flag();
}
14 changes: 13 additions & 1 deletion trunk/arch/arm/boot/compressed/head.S
@@ -597,6 +597,8 @@ __common_mmu_cache_on:
sub pc, lr, r0, lsr #32 @ properly flush pipeline
#endif

#define PROC_ENTRY_SIZE (4*5)

/*
* Here follow the relocatable cache support functions for the
* various processors. This is a generic hook for locating an
@@ -624,7 +626,7 @@ call_cache_fn: adr r12, proc_types
ARM( addeq pc, r12, r3 ) @ call cache function
THUMB( addeq r12, r3 )
THUMB( moveq pc, r12 ) @ call cache function
add r12, r12, #4*5
add r12, r12, #PROC_ENTRY_SIZE
b 1b

/*
@@ -794,6 +796,16 @@ proc_types:

.size proc_types, . - proc_types

/*
* If you get a "non-constant expression in ".if" statement"
* error from the assembler on this line, check that you have
* not accidentally written a "b" instruction where you should
* have written W(b).
*/
.if (. - proc_types) % PROC_ENTRY_SIZE != 0
.error "The size of one or more proc_types entries is wrong."
.endif

/*
* Turn off the Cache and MMU. ARMv3 does not support
* reading the control register, but ARMv4 does.
4 changes: 4 additions & 0 deletions trunk/arch/arm/include/asm/assembler.h
@@ -13,6 +13,9 @@
* Do not include any C declarations in this file - it is included by
* assembler source.
*/
#ifndef __ASM_ASSEMBLER_H__
#define __ASM_ASSEMBLER_H__

#ifndef __ASSEMBLY__
#error "Only include this from assembly code"
#endif
@@ -290,3 +293,4 @@
.macro ldrusr, reg, ptr, inc, cond=al, rept=1, abort=9001f
usracc ldr, \reg, \ptr, \inc, \cond, \rept, \abort
.endm
#endif /* __ASM_ASSEMBLER_H__ */
2 changes: 2 additions & 0 deletions trunk/arch/arm/include/asm/entry-macro-multi.S
@@ -1,3 +1,5 @@
#include <asm/assembler.h>

/*
* Interrupt handling. Preserves r7, r8, r9
*/
13 changes: 11 additions & 2 deletions trunk/arch/arm/kernel/module.c
@@ -193,8 +193,17 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
offset -= 0x02000000;
offset += sym->st_value - loc;

/* only Thumb addresses allowed (no interworking) */
if (!(offset & 1) ||
/*
* For function symbols, only Thumb addresses are
* allowed (no interworking).
*
* For non-function symbols, the destination
* has no specific ARM/Thumb disposition, so
* the branch is resolved under the assumption
* that interworking is not required.
*/
if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
!(offset & 1)) ||
offset <= (s32)0xff000000 ||
offset >= (s32)0x01000000) {
pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",
30 changes: 29 additions & 1 deletion trunk/arch/arm/kernel/perf_event_v6.c
@@ -173,6 +173,20 @@ static const unsigned armv6_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};

enum armv6mpcore_perf_types {
@@ -310,6 +324,20 @@ static const unsigned armv6mpcore_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};

static inline unsigned long
@@ -479,7 +507,7 @@ armv6pmu_handle_irq(int irq_num,
if (!armpmu_event_set_period(event, hwc, idx))
continue;

if (perf_event_overflow(event, 0, &data, regs))
if (perf_event_overflow(event, &data, regs))
armpmu->disable(hwc, idx);
}

30 changes: 29 additions & 1 deletion trunk/arch/arm/kernel/perf_event_v7.c
@@ -255,6 +255,20 @@ static const unsigned armv7_a8_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};

/*
@@ -371,6 +385,20 @@ static const unsigned armv7_a9_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};

/*
@@ -787,7 +815,7 @@ static irqreturn_t armv7pmu_handle_irq(int irq_num, void *dev)
if (!armpmu_event_set_period(event, hwc, idx))
continue;

if (perf_event_overflow(event, 0, &data, regs))
if (perf_event_overflow(event, &data, regs))
armpmu->disable(hwc, idx);
}

18 changes: 16 additions & 2 deletions trunk/arch/arm/kernel/perf_event_xscale.c
@@ -144,6 +144,20 @@ static const unsigned xscale_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};

#define XSCALE_PMU_ENABLE 0x001
@@ -251,7 +265,7 @@ xscale1pmu_handle_irq(int irq_num, void *dev)
if (!armpmu_event_set_period(event, hwc, idx))
continue;

if (perf_event_overflow(event, 0, &data, regs))
if (perf_event_overflow(event, &data, regs))
armpmu->disable(hwc, idx);
}

@@ -583,7 +597,7 @@ xscale2pmu_handle_irq(int irq_num, void *dev)
if (!armpmu_event_set_period(event, hwc, idx))
continue;

if (perf_event_overflow(event, 0, &data, regs))
if (perf_event_overflow(event, &data, regs))
armpmu->disable(hwc, idx);
}

5 changes: 3 additions & 2 deletions trunk/arch/arm/kernel/ptrace.c
@@ -396,7 +396,7 @@ static long ptrace_hbp_idx_to_num(int idx)
/*
* Handle hitting a HW-breakpoint.
*/
static void ptrace_hbptriggered(struct perf_event *bp, int unused,
static void ptrace_hbptriggered(struct perf_event *bp,
struct perf_sample_data *data,
struct pt_regs *regs)
{
@@ -479,7 +479,8 @@ static struct perf_event *ptrace_hbp_create(struct task_struct *tsk, int type)
attr.bp_type = type;
attr.disabled = 1;

return register_user_hw_breakpoint(&attr, ptrace_hbptriggered, tsk);
return register_user_hw_breakpoint(&attr, ptrace_hbptriggered, NULL,
tsk);
}

static int ptrace_gethbpregs(struct task_struct *tsk, long num,
6 changes: 5 additions & 1 deletion trunk/arch/arm/kernel/smp.c
@@ -318,9 +318,13 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
smp_store_cpu_info(cpu);

/*
* OK, now it's safe to let the boot CPU continue
* OK, now it's safe to let the boot CPU continue. Wait for
* the CPU migration code to notice that the CPU is online
* before we continue.
*/
set_cpu_online(cpu, true);
while (!cpu_active(cpu))
cpu_relax();

/*
* OK, it's off to the idle thread for us
2 changes: 1 addition & 1 deletion trunk/arch/arm/kernel/swp_emulate.c
@@ -183,7 +183,7 @@ static int swp_handler(struct pt_regs *regs, unsigned int instr)
unsigned int address, destreg, data, type;
unsigned int res = 0;

perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, 0, regs, regs->ARM_pc);
perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS, 1, regs, regs->ARM_pc);

if (current->pid != previous_pid) {
pr_debug("\"%s\" (%ld) uses deprecated SWP{B} instruction\n",
[Diff for the remaining changed files was not loaded.]