diff --git a/[refs] b/[refs]
index 4b47b91e7748..719bd3113a78 100644
--- a/[refs]
+++ b/[refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: c26d4114aac55b57078caf83e261621d22e4596d
+refs/heads/master: 6f3c77b040fc24708228607bba504878de5236d1
diff --git a/trunk/Documentation/ABI/testing/sysfs-devices-system-cpu b/trunk/Documentation/ABI/testing/sysfs-devices-system-cpu
index 6943133afcb8..5dab36448b44 100644
--- a/trunk/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/trunk/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -176,14 +176,3 @@ Description:	Disable L3 cache indices
 		All AMD processors with L3 caches provide this functionality.
 		For details, see BKDGs at
 		http://developer.amd.com/documentation/guides/Pages/default.aspx
-
-
-What:		/sys/devices/system/cpu/cpufreq/boost
-Date:		August 2012
-Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-Description:	Processor frequency boosting control
-
-		This switch controls the boost setting for the whole system.
-		Boosting allows the CPU and the firmware to run at a frequency
-		beyond its nominal limit.
-		More details can be found in Documentation/cpu-freq/boost.txt
diff --git a/trunk/Documentation/cpu-freq/boost.txt b/trunk/Documentation/cpu-freq/boost.txt
deleted file mode 100644
index 9b4edfcf486f..000000000000
--- a/trunk/Documentation/cpu-freq/boost.txt
+++ /dev/null
@@ -1,93 +0,0 @@
-Processor boosting control
-
-	- information for users -
-
-Quick guide for the impatient:
---------------------
-/sys/devices/system/cpu/cpufreq/boost
-controls the boost setting for the whole system. You can read and write
-that file with either "0" (boosting disabled) or "1" (boosting allowed).
-Reading or writing 1 does not mean that the system is boosting at this
-very moment, but only that the CPU _may_ raise the frequency at its
-discretion.
---------------------
-
-Introduction
--------------
-Some CPUs support a functionality to raise the operating frequency of
-some cores in a multi-core package if certain conditions apply, mostly
-if the whole chip is not fully utilized and below its intended thermal
-budget. This is done without operating system control by a combination
-of hardware and firmware.
-On Intel CPUs this is called "Turbo Boost", AMD calls it "Turbo-Core",
-in technical documentation "Core performance boost". In Linux we use
-the term "boost" for convenience.
-
-Rationale for disable switch
-----------------------------
-
-Though the idea is to just give better performance without any user
-intervention, sometimes the need arises to disable this functionality.
-Most systems offer a switch in the (BIOS) firmware to disable the
-functionality entirely, but more fine-grained and dynamic control would
-be desirable:
-1. While running benchmarks, reproducible results are important. Since
-   the boosting functionality depends on the load of the whole package,
-   single thread performance can vary. By explicitly disabling the boost
-   functionality at least for the benchmark's run-time the system will run
-   at a fixed frequency and results are reproducible again.
-2. To examine the impact of the boosting functionality it is helpful
-   to do tests with and without boosting.
-3. Boosting means overclocking the processor, though under controlled
-   conditions. By raising the frequency and the voltage the processor
-   will consume more power than without the boosting, which may be
-   undesirable for instance for mobile users. Disabling boosting may
-   save power here, though this depends on the workload.
-
-
-User controlled switch
-----------------------
-
-To allow the user to toggle the boosting functionality, the acpi-cpufreq
-driver exports a sysfs knob to disable it. There is a file:
-/sys/devices/system/cpu/cpufreq/boost
-which can either read "0" (boosting disabled) or "1" (boosting enabled).
-Reading the file is always supported, even if the processor does not
-support boosting. In this case the file will be read-only and always
-reads as "0". Explicitly changing the permissions and writing to that
-file anyway will return EINVAL.
-
-On supported CPUs one can write either a "0" or a "1" into this file.
-This will either disable the boost functionality on all cores in the
-whole system (0) or will allow the hardware to boost at will (1).
-
-Writing a "1" does not explicitly boost the system, but just allows the
-CPU (and the firmware) to boost at their discretion. Some implementations
-take external factors like the chip's temperature into account, so
-boosting once does not necessarily mean that it will occur every time
-even using the exact same software setup.
-
-
-AMD legacy cpb switch
----------------------
-The AMD powernow-k8 driver used to support a very similar switch to
-disable or enable the "Core Performance Boost" feature of some AMD CPUs.
-This switch was instantiated in each CPU's cpufreq directory
-(/sys/devices/system/cpu[0-9]*/cpufreq) and was called "cpb".
-Though the per-CPU existence hints at more fine-grained control, the
-actual implementation only supported system-global switch semantics,
-which were simply reflected into each CPU's file. Writing a 0 or 1 into it
-would pull the other CPUs to the same state.
-For compatibility reasons this file and its behavior are still supported
-on AMD CPUs, though it is now protected by a config switch
-(X86_ACPI_CPUFREQ_CPB). On Intel CPUs this file will never be created,
-even with the config option set.
-This functionality is considered legacy and will be removed in some future
-kernel version.
-
-More fine grained boosting control
-----------------------------------
-
-Technically it is possible to switch the boosting functionality at least
-on a per package basis, for some CPUs even per core. Currently the driver
-does not support it, but this may be implemented in the future.
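The boost knob removed above is an ordinary sysfs attribute, so the documented semantics can be exercised from userspace with plain file I/O. A minimal sketch under that assumption (not part of the patch; error handling trimmed):

	#include <stdio.h>

	/* Toggle the system-wide boost setting documented above.
	 * Returns 0 on success, -1 if the knob is absent or read-only
	 * (e.g. on CPUs without boost support, where writes fail with
	 * EINVAL as described in the deleted boost.txt).
	 */
	static int set_boost(int enable)
	{
		FILE *f = fopen("/sys/devices/system/cpu/cpufreq/boost", "w");

		if (!f)
			return -1;
		if (fprintf(f, "%d\n", enable ? 1 : 0) < 0) {
			fclose(f);
			return -1;
		}
		return fclose(f) == 0 ? 0 : -1;
	}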
diff --git a/trunk/Documentation/cpuidle/sysfs.txt b/trunk/Documentation/cpuidle/sysfs.txt
index b6f44f490ed7..9d28a3406e74 100644
--- a/trunk/Documentation/cpuidle/sysfs.txt
+++ b/trunk/Documentation/cpuidle/sysfs.txt
@@ -76,17 +76,9 @@ total 0
 
 
 * desc : Small description about the idle state (string)
-* disable : Option to disable this idle state (bool) -> see note below
+* disable : Option to disable this idle state (bool)
 * latency : Latency to exit out of this idle state (in microseconds)
 * name : Name of the idle state (string)
 * power : Power consumed while in this idle state (in milliwatts)
 * time : Total time spent in this idle state (in microseconds)
 * usage : Number of times this state was entered (count)
-
-Note:
-The behavior and the effect of the disable variable depends on the
-implementation of a particular governor. In the ladder governor, for
-example, it is not coherent, i.e. if one is disabling a light state,
-then all deeper states are disabled as well, but the disable variable
-does not reflect it. Likewise, if one enables a deep state but a lighter
-state still is disabled, then this has no effect.
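Like the boost knob, the per-state "disable" attribute described above is driven through sysfs; each state is a directory under the per-CPU cpuidle node. A sketch assuming the standard path layout (/sys/devices/system/cpu/cpu<N>/cpuidle/state<M>/disable):

	#include <stdio.h>

	/* Disable (or re-enable) one idle state on one CPU. Note the
	 * removed documentation's caveat: with the ladder governor,
	 * disabling a light state effectively disables all deeper ones.
	 */
	static int set_idle_state_disabled(int cpu, int state, int disabled)
	{
		char path[128];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cpuidle/state%d/disable",
			 cpu, state);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fprintf(f, "%d\n", disabled ? 1 : 0);
		return fclose(f) == 0 ? 0 : -1;
	}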
diff --git a/trunk/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt b/trunk/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt
deleted file mode 100644
index 4416ccc33472..000000000000
--- a/trunk/Documentation/devicetree/bindings/cpufreq/cpufreq-cpu0.txt
+++ /dev/null
@@ -1,55 +0,0 @@
-Generic CPU0 cpufreq driver
-
-It is a generic cpufreq driver for CPU0 frequency management. It
-supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
-systems which share clock and voltage across all CPUs.
-
-Both required and optional properties listed below must be defined
-under node /cpus/cpu@0.
-
-Required properties:
-- operating-points: Refer to Documentation/devicetree/bindings/power/opp.txt
-  for details
-
-Optional properties:
-- clock-latency: Specify the possible maximum transition latency for clock,
-  in unit of nanoseconds.
-- voltage-tolerance: Specify the CPU voltage tolerance in percentage.
-
-Examples:
-
-cpus {
-	#address-cells = <1>;
-	#size-cells = <0>;
-
-	cpu@0 {
-		compatible = "arm,cortex-a9";
-		reg = <0>;
-		next-level-cache = <&L2>;
-		operating-points = <
-			/* kHz    uV */
-			792000  1100000
-			396000  950000
-			198000  850000
-		>;
-		transition-latency = <61036>; /* two CLK32 periods */
-	};
-
-	cpu@1 {
-		compatible = "arm,cortex-a9";
-		reg = <1>;
-		next-level-cache = <&L2>;
-	};
-
-	cpu@2 {
-		compatible = "arm,cortex-a9";
-		reg = <2>;
-		next-level-cache = <&L2>;
-	};
-
-	cpu@3 {
-		compatible = "arm,cortex-a9";
-		reg = <3>;
-		next-level-cache = <&L2>;
-	};
-};
diff --git a/trunk/Documentation/devicetree/bindings/power/opp.txt b/trunk/Documentation/devicetree/bindings/power/opp.txt
deleted file mode 100644
index 74499e5033fc..000000000000
--- a/trunk/Documentation/devicetree/bindings/power/opp.txt
+++ /dev/null
@@ -1,25 +0,0 @@
-* Generic OPP Interface
-
-SoCs have a standard set of tuples consisting of frequency and
-voltage pairs that the device will support per voltage domain. These
-are called Operating Performance Points or OPPs.
-
-Properties:
-- operating-points: An array of 2-tuple items, and each item consists
-  of frequency and voltage like <freq-kHz vol-uV>.
-	freq: clock frequency in kHz
-	vol: voltage in microvolt
-
-Examples:
-
-cpu@0 {
-	compatible = "arm,cortex-a9";
-	reg = <0>;
-	next-level-cache = <&L2>;
-	operating-points = <
-		/* kHz    uV */
-		792000  1100000
-		396000  950000
-		198000  850000
-	>;
-};
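Each operating-points entry in the deleted binding is a <frequency-kHz, voltage-uV> pair; in C terms the example table is just an array of two-field records. A sketch (the struct and names here are illustrative, not a kernel API):

	/* One operating performance point: <freq-kHz vol-uV>, as in the
	 * binding above. Illustrative only.
	 */
	struct opp_entry {
		unsigned long freq_khz;
		unsigned long volt_uv;
	};

	/* The table from the deleted example binding. */
	static const struct opp_entry cpu0_opps[] = {
		{ 792000, 1100000 },	/* 792 MHz @ 1.10 V */
		{ 396000,  950000 },	/* 396 MHz @ 0.95 V */
		{ 198000,  850000 },	/* 198 MHz @ 0.85 V */
	};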
diff --git a/trunk/arch/arm/kernel/smp.c b/trunk/arch/arm/kernel/smp.c
index 8e03567c9583..ebd8ad274d76 100644
--- a/trunk/arch/arm/kernel/smp.c
+++ b/trunk/arch/arm/kernel/smp.c
@@ -25,7 +25,6 @@
 #include <linux/percpu.h>
 #include <linux/clockchips.h>
 #include <linux/completion.h>
-#include <linux/cpufreq.h>
 
 #include <linux/atomic.h>
 #include <asm/smp.h>
@@ -585,56 +584,3 @@ int setup_profiling_timer(unsigned int multiplier)
 {
 	return -EINVAL;
 }
-
-#ifdef CONFIG_CPU_FREQ
-
-static DEFINE_PER_CPU(unsigned long, l_p_j_ref);
-static DEFINE_PER_CPU(unsigned long, l_p_j_ref_freq);
-static unsigned long global_l_p_j_ref;
-static unsigned long global_l_p_j_ref_freq;
-
-static int cpufreq_callback(struct notifier_block *nb,
-			    unsigned long val, void *data)
-{
-	struct cpufreq_freqs *freq = data;
-	int cpu = freq->cpu;
-
-	if (freq->flags & CPUFREQ_CONST_LOOPS)
-		return NOTIFY_OK;
-
-	if (!per_cpu(l_p_j_ref, cpu)) {
-		per_cpu(l_p_j_ref, cpu) =
-			per_cpu(cpu_data, cpu).loops_per_jiffy;
-		per_cpu(l_p_j_ref_freq, cpu) = freq->old;
-		if (!global_l_p_j_ref) {
-			global_l_p_j_ref = loops_per_jiffy;
-			global_l_p_j_ref_freq = freq->old;
-		}
-	}
-
-	if ((val == CPUFREQ_PRECHANGE  && freq->old < freq->new) ||
-	    (val == CPUFREQ_POSTCHANGE && freq->old > freq->new) ||
-	    (val == CPUFREQ_RESUMECHANGE || val == CPUFREQ_SUSPENDCHANGE)) {
-		loops_per_jiffy = cpufreq_scale(global_l_p_j_ref,
-						global_l_p_j_ref_freq,
-						freq->new);
-		per_cpu(cpu_data, cpu).loops_per_jiffy =
-			cpufreq_scale(per_cpu(l_p_j_ref, cpu),
-				      per_cpu(l_p_j_ref_freq, cpu),
-				      freq->new);
-	}
-	return NOTIFY_OK;
-}
-
-static struct notifier_block cpufreq_notifier = {
-	.notifier_call  = cpufreq_callback,
-};
-
-static int __init register_cpufreq_notifier(void)
-{
-	return cpufreq_register_notifier(&cpufreq_notifier,
-					 CPUFREQ_TRANSITION_NOTIFIER);
-}
-core_initcall(register_cpufreq_notifier);
-
-#endif
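The removed notifier keeps loops_per_jiffy calibrated across frequency transitions: it records a reference (lpj, freq) pair per CPU on the first transition and then rescales lpj proportionally via cpufreq_scale(). A worked sketch of that proportionality (standalone helper, not the kernel function itself):

	/* lpj is proportional to CPU frequency on cores without
	 * CPUFREQ_CONST_LOOPS; a reference of 4,980,736 lpj at
	 * 996000 kHz rescales to 2,490,368 lpj at 498000 kHz.
	 */
	static unsigned long scale_lpj(unsigned long ref_lpj,
				       unsigned long ref_khz,
				       unsigned long new_khz)
	{
		return (unsigned long)(((unsigned long long)ref_lpj * new_khz) /
				       ref_khz);
	}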
diff --git a/trunk/arch/arm/mach-shmobile/Makefile b/trunk/arch/arm/mach-shmobile/Makefile
index fe2c97c179d1..0df5ae6740c6 100644
--- a/trunk/arch/arm/mach-shmobile/Makefile
+++ b/trunk/arch/arm/mach-shmobile/Makefile
@@ -3,7 +3,7 @@
 #
 
 # Common objects
-obj-y				:= timer.o console.o clock.o
+obj-y				:= timer.o console.o clock.o common.o
 
 # CPU objects
 obj-$(CONFIG_ARCH_SH7367)	+= setup-sh7367.o clock-sh7367.o intc-sh7367.o
diff --git a/trunk/arch/arm/mach-shmobile/board-ap4evb.c b/trunk/arch/arm/mach-shmobile/board-ap4evb.c
index 264340a60f65..f172ca85905c 100644
--- a/trunk/arch/arm/mach-shmobile/board-ap4evb.c
+++ b/trunk/arch/arm/mach-shmobile/board-ap4evb.c
@@ -1229,15 +1229,6 @@ static struct i2c_board_info i2c1_devices[] = {
 #define USCCR1 0xE6058144
 static void __init ap4evb_init(void)
 {
-	struct pm_domain_device domain_devices[] = {
-		{ "A4LC", &lcdc1_device, },
-		{ "A4LC", &lcdc_device, },
-		{ "A4MP", &fsi_device, },
-		{ "A3SP", &sh_mmcif_device, },
-		{ "A3SP", &sdhi0_device, },
-		{ "A3SP", &sdhi1_device, },
-		{ "A4R", &ceu_device, },
-	};
 	u32 srcr4;
 	struct clk *clk;
 
@@ -1470,8 +1461,14 @@ static void __init ap4evb_init(void)
 
 	platform_add_devices(ap4evb_devices, ARRAY_SIZE(ap4evb_devices));
 
-	rmobile_add_devices_to_domains(domain_devices,
-				       ARRAY_SIZE(domain_devices));
+	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &fsi_device);
+
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sh_mmcif_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &ceu_device);
 
 	hdmi_init_pm_clock();
 	fsi_init_pm_clock();
@@ -1486,6 +1483,6 @@ MACHINE_START(AP4EVB, "ap4evb")
 	.init_irq	= sh7372_init_irq,
 	.handle_irq	= shmobile_handle_irq_intc,
 	.init_machine	= ap4evb_init,
-	.init_late	= sh7372_pm_init_late,
+	.init_late	= shmobile_init_late,
 	.timer		= &shmobile_timer,
 MACHINE_END
diff --git a/trunk/arch/arm/mach-shmobile/board-armadillo800eva.c b/trunk/arch/arm/mach-shmobile/board-armadillo800eva.c
index da6117c326b9..453a6e50db8b 100644
--- a/trunk/arch/arm/mach-shmobile/board-armadillo800eva.c
+++ b/trunk/arch/arm/mach-shmobile/board-armadillo800eva.c
@@ -1182,10 +1182,10 @@ static void __init eva_init(void)
 
 	eva_clock_init();
 
-	rmobile_add_device_to_domain("A4LC", &lcdc0_device);
-	rmobile_add_device_to_domain("A4LC", &hdmi_lcdc_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a4lc, &lcdc0_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a4lc, &hdmi_lcdc_device);
 	if (usb)
-		rmobile_add_device_to_domain("A3SP", usb);
+		rmobile_add_device_to_domain(&r8a7740_pd_a3sp, usb);
 }
 
 static void __init eva_earlytimer_init(void)
diff --git a/trunk/arch/arm/mach-shmobile/board-mackerel.c b/trunk/arch/arm/mach-shmobile/board-mackerel.c
index c76776a3e70d..c129542f6aed 100644
--- a/trunk/arch/arm/mach-shmobile/board-mackerel.c
+++ b/trunk/arch/arm/mach-shmobile/board-mackerel.c
@@ -1410,22 +1410,6 @@ static struct i2c_board_info i2c1_devices[] = {
 #define USCCR1 0xE6058144
 static void __init mackerel_init(void)
 {
-	struct pm_domain_device domain_devices[] = {
-		{ "A4LC", &lcdc_device, },
-		{ "A4LC", &hdmi_lcdc_device, },
-		{ "A4LC", &meram_device, },
-		{ "A4MP", &fsi_device, },
-		{ "A3SP", &usbhs0_device, },
-		{ "A3SP", &usbhs1_device, },
-		{ "A3SP", &nand_flash_device, },
-		{ "A3SP", &sh_mmcif_device, },
-		{ "A3SP", &sdhi0_device, },
-#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE)
-		{ "A3SP", &sdhi1_device, },
-#endif
-		{ "A3SP", &sdhi2_device, },
-		{ "A4R", &ceu_device, },
-	};
 	u32 srcr4;
 	struct clk *clk;
 
@@ -1640,8 +1624,20 @@ static void __init mackerel_init(void)
 
 	platform_add_devices(mackerel_devices, ARRAY_SIZE(mackerel_devices));
 
-	rmobile_add_devices_to_domains(domain_devices,
-				       ARRAY_SIZE(domain_devices));
+	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &lcdc_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &hdmi_lcdc_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4lc, &meram_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &fsi_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usbhs0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usbhs1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &nand_flash_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sh_mmcif_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi0_device);
+#if !defined(CONFIG_MMC_SH_MMCIF) && !defined(CONFIG_MMC_SH_MMCIF_MODULE)
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi1_device);
+#endif
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &sdhi2_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &ceu_device);
 
 	hdmi_init_pm_clock();
 	sh7372_pm_init();
@@ -1655,6 +1651,6 @@ MACHINE_START(MACKEREL, "mackerel")
 	.init_irq	= sh7372_init_irq,
 	.handle_irq	= shmobile_handle_irq_intc,
 	.init_machine	= mackerel_init,
-	.init_late	= sh7372_pm_init_late,
+	.init_late	= shmobile_init_late,
 	.timer		= &shmobile_timer,
 MACHINE_END
diff --git a/trunk/arch/arm/mach-shmobile/common.c b/trunk/arch/arm/mach-shmobile/common.c
new file mode 100644
index 000000000000..608aba9d60d7
--- /dev/null
+++ b/trunk/arch/arm/mach-shmobile/common.c
@@ -0,0 +1,24 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <mach/common.h>
+
+void __init shmobile_init_late(void)
+{
+	shmobile_suspend_init();
+	shmobile_cpuidle_init();
+}
diff --git a/trunk/arch/arm/mach-shmobile/cpuidle.c b/trunk/arch/arm/mach-shmobile/cpuidle.c
index 9e050268cde4..7b541e911ab4 100644
--- a/trunk/arch/arm/mach-shmobile/cpuidle.c
+++ b/trunk/arch/arm/mach-shmobile/cpuidle.c
@@ -16,38 +16,51 @@
 #include <asm/cpuidle.h>
 #include <asm/io.h>
 
-int shmobile_enter_wfi(struct cpuidle_device *dev, struct cpuidle_driver *drv,
-		       int index)
+static void shmobile_enter_wfi(void)
 {
 	cpu_do_idle();
-	return 0;
+}
+
+void (*shmobile_cpuidle_modes[CPUIDLE_STATE_MAX])(void) = {
+	shmobile_enter_wfi, /* regular sleep mode */
+};
+
+static int shmobile_cpuidle_enter(struct cpuidle_device *dev,
+				  struct cpuidle_driver *drv,
+				  int index)
+{
+	shmobile_cpuidle_modes[index]();
+
+	return index;
 }
 
 static struct cpuidle_device shmobile_cpuidle_dev;
-static struct cpuidle_driver shmobile_cpuidle_default_driver = {
+static struct cpuidle_driver shmobile_cpuidle_driver = {
 	.name			= "shmobile_cpuidle",
 	.owner			= THIS_MODULE,
 	.en_core_tk_irqen	= 1,
 	.states[0]		= ARM_CPUIDLE_WFI_STATE,
-	.states[0].enter	= shmobile_enter_wfi,
 	.safe_state_index	= 0, /* C1 */
 	.state_count		= 1,
 };
 
-static struct cpuidle_driver *cpuidle_drv = &shmobile_cpuidle_default_driver;
-
-void shmobile_cpuidle_set_driver(struct cpuidle_driver *drv)
-{
-	cpuidle_drv = drv;
-}
+void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv);
 
 int shmobile_cpuidle_init(void)
 {
 	struct cpuidle_device *dev = &shmobile_cpuidle_dev;
+	struct cpuidle_driver *drv = &shmobile_cpuidle_driver;
+	int i;
+
+	for (i = 0; i < CPUIDLE_STATE_MAX; i++)
+		drv->states[i].enter = shmobile_cpuidle_enter;
+
+	if (shmobile_cpuidle_setup)
+		shmobile_cpuidle_setup(drv);
 
-	cpuidle_register_driver(cpuidle_drv);
+	cpuidle_register_driver(drv);
 
-	dev->state_count = cpuidle_drv->state_count;
+	dev->state_count = drv->state_count;
 	cpuidle_register_device(dev);
 
 	return 0;
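With the hook pair restored above (shmobile_cpuidle_setup plus the shmobile_cpuidle_modes table), an SoC variant registers extra C-states by appending to the driver before registration — exactly what the sh7372 code later in this patch does. A minimal sketch with made-up names, state data, and mode function:

	/* Illustrative only: add one extra state the way pm-sh7372.c
	 * does below. my_enter_deep_sleep and the latencies are invented.
	 */
	static void my_enter_deep_sleep(void)
	{
		/* platform-specific low-power entry would go here */
	}

	static void my_cpuidle_setup(struct cpuidle_driver *drv)
	{
		struct cpuidle_state *state = &drv->states[drv->state_count];

		snprintf(state->name, CPUIDLE_NAME_LEN, "C2");
		strncpy(state->desc, "Example deep sleep", CPUIDLE_DESC_LEN);
		state->exit_latency = 10;
		state->target_residency = 20 + 10;
		state->flags = CPUIDLE_FLAG_TIME_VALID;
		shmobile_cpuidle_modes[drv->state_count] = my_enter_deep_sleep;
		drv->state_count++;
	}

SoC init code would then set "shmobile_cpuidle_setup = my_cpuidle_setup;" before shmobile_init_late() runs shmobile_cpuidle_init().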
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/common.h b/trunk/arch/arm/mach-shmobile/include/mach/common.h
index eb89293fff4d..45e61dada030 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/common.h
+++ b/trunk/arch/arm/mach-shmobile/include/mach/common.h
@@ -14,10 +14,8 @@ extern int shmobile_clk_init(void);
 extern void shmobile_handle_irq_intc(struct pt_regs *);
 extern struct platform_suspend_ops shmobile_suspend_ops;
 struct cpuidle_driver;
-struct cpuidle_device;
-extern int shmobile_enter_wfi(struct cpuidle_device *dev,
-			      struct cpuidle_driver *drv, int index);
-extern void shmobile_cpuidle_set_driver(struct cpuidle_driver *drv);
+extern void (*shmobile_cpuidle_modes[])(void);
+extern void (*shmobile_cpuidle_setup)(struct cpuidle_driver *drv);
 
 extern void sh7367_init_irq(void);
 extern void sh7367_map_io(void);
@@ -88,6 +86,8 @@ extern int r8a7779_boot_secondary(unsigned int cpu);
 extern void r8a7779_smp_prepare_cpus(void);
 extern void r8a7779_register_twd(void);
 
+extern void shmobile_init_late(void);
+
 #ifdef CONFIG_SUSPEND
 int shmobile_suspend_init(void);
 #else
@@ -100,10 +100,4 @@ int shmobile_cpuidle_init(void);
 static inline int shmobile_cpuidle_init(void) { return 0; }
 #endif
 
-static inline void shmobile_init_late(void)
-{
-	shmobile_suspend_init();
-	shmobile_cpuidle_init();
-}
-
 #endif /* __ARCH_MACH_COMMON_H */
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/pm-rmobile.h b/trunk/arch/arm/mach-shmobile/include/mach/pm-rmobile.h
index 690553a06887..5a402840fe28 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/pm-rmobile.h
+++ b/trunk/arch/arm/mach-shmobile/include/mach/pm-rmobile.h
@@ -12,8 +12,6 @@
 
 #include <linux/pm_domain.h>
 
-#define DEFAULT_DEV_LATENCY_NS	250000
-
 struct platform_device;
 
 struct rmobile_pm_domain {
@@ -31,33 +29,16 @@ struct rmobile_pm_domain *to_rmobile_pd(struct generic_pm_domain *d)
 	return container_of(d, struct rmobile_pm_domain, genpd);
 }
 
-struct pm_domain_device {
-	const char *domain_name;
-	struct platform_device *pdev;
-};
-
 #ifdef CONFIG_PM
-extern void rmobile_init_domains(struct rmobile_pm_domain domains[], int num);
-extern void rmobile_add_device_to_domain_td(const char *domain_name,
-					    struct platform_device *pdev,
-					    struct gpd_timing_data *td);
-
-static inline void rmobile_add_device_to_domain(const char *domain_name,
-						struct platform_device *pdev)
-{
-	rmobile_add_device_to_domain_td(domain_name, pdev, NULL);
-}
-
-extern void rmobile_add_devices_to_domains(struct pm_domain_device data[],
-					   int size);
+extern void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd);
+extern void rmobile_add_device_to_domain(struct rmobile_pm_domain *rmobile_pd,
+					 struct platform_device *pdev);
+extern void rmobile_pm_add_subdomain(struct rmobile_pm_domain *rmobile_pd,
+				     struct rmobile_pm_domain *rmobile_sd);
 #else
-
-#define rmobile_init_domains(domains, num) do { } while (0)
-#define rmobile_add_device_to_domain_td(name, pdev, td) do { } while (0)
-#define rmobile_add_device_to_domain(name, pdev) do { } while (0)
-
-static inline void rmobile_add_devices_to_domains(struct pm_domain_device d[],
-						  int size) {}
+#define rmobile_init_pm_domain(pd) do { } while (0)
+#define rmobile_add_device_to_domain(pd, pdev) do { } while (0)
+#define rmobile_pm_add_subdomain(pd, sd) do { } while (0)
 #endif /* CONFIG_PM */
 
 #endif /* PM_RMOBILE_H */
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/r8a7740.h b/trunk/arch/arm/mach-shmobile/include/mach/r8a7740.h
index 59d252f4cf97..7143147780df 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/r8a7740.h
+++ b/trunk/arch/arm/mach-shmobile/include/mach/r8a7740.h
@@ -607,9 +607,9 @@ enum {
 };
 
 #ifdef CONFIG_PM
-extern void __init r8a7740_init_pm_domains(void);
-#else
-static inline void r8a7740_init_pm_domains(void) {}
+extern struct rmobile_pm_domain r8a7740_pd_a4s;
+extern struct rmobile_pm_domain r8a7740_pd_a3sp;
+extern struct rmobile_pm_domain r8a7740_pd_a4lc;
 #endif /* CONFIG_PM */
 
 #endif /* __ASM_R8A7740_H__ */
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/r8a7779.h b/trunk/arch/arm/mach-shmobile/include/mach/r8a7779.h
index 7ad47977d2e7..b07ad318eb2e 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/r8a7779.h
+++ b/trunk/arch/arm/mach-shmobile/include/mach/r8a7779.h
@@ -347,9 +347,17 @@ extern int r8a7779_sysc_power_down(struct r8a7779_pm_ch *r8a7779_ch);
 extern int r8a7779_sysc_power_up(struct r8a7779_pm_ch *r8a7779_ch);
 
 #ifdef CONFIG_PM
-extern void __init r8a7779_init_pm_domains(void);
+extern struct r8a7779_pm_domain r8a7779_sh4a;
+extern struct r8a7779_pm_domain r8a7779_sgx;
+extern struct r8a7779_pm_domain r8a7779_vdp1;
+extern struct r8a7779_pm_domain r8a7779_impx3;
+
+extern void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd);
+extern void r8a7779_add_device_to_domain(struct r8a7779_pm_domain *r8a7779_pd,
+					 struct platform_device *pdev);
 #else
-static inline void r8a7779_init_pm_domains(void) {}
+#define r8a7779_init_pm_domain(pd) do { } while (0)
+#define r8a7779_add_device_to_domain(pd, pdev) do { } while (0)
 #endif /* CONFIG_PM */
 
 #endif /* __ASM_R8A7779_H__ */
diff --git a/trunk/arch/arm/mach-shmobile/include/mach/sh7372.h b/trunk/arch/arm/mach-shmobile/include/mach/sh7372.h
index eb98b45c5089..b59048e6d8fd 100644
--- a/trunk/arch/arm/mach-shmobile/include/mach/sh7372.h
+++ b/trunk/arch/arm/mach-shmobile/include/mach/sh7372.h
@@ -478,17 +478,21 @@ extern struct clk sh7372_fsibck_clk;
 extern struct clk sh7372_fsidiva_clk;
 extern struct clk sh7372_fsidivb_clk;
 
+#ifdef CONFIG_PM
+extern struct rmobile_pm_domain sh7372_pd_a4lc;
+extern struct rmobile_pm_domain sh7372_pd_a4mp;
+extern struct rmobile_pm_domain sh7372_pd_d4;
+extern struct rmobile_pm_domain sh7372_pd_a4r;
+extern struct rmobile_pm_domain sh7372_pd_a3rv;
+extern struct rmobile_pm_domain sh7372_pd_a3ri;
+extern struct rmobile_pm_domain sh7372_pd_a4s;
+extern struct rmobile_pm_domain sh7372_pd_a3sp;
+extern struct rmobile_pm_domain sh7372_pd_a3sg;
+#endif /* CONFIG_PM */
+
 extern void sh7372_intcs_suspend(void);
 extern void sh7372_intcs_resume(void);
 extern void sh7372_intca_suspend(void);
 extern void sh7372_intca_resume(void);
 
-#ifdef CONFIG_PM
-extern void __init sh7372_init_pm_domains(void);
-#else
-static inline void sh7372_init_pm_domains(void) {}
-#endif
-
-extern void __init sh7372_pm_init_late(void);
-
 #endif /* __ASM_SH7372_H__ */
diff --git a/trunk/arch/arm/mach-shmobile/pm-r8a7740.c b/trunk/arch/arm/mach-shmobile/pm-r8a7740.c
index 21e5316d2d88..893504d012a6 100644
--- a/trunk/arch/arm/mach-shmobile/pm-r8a7740.c
+++ b/trunk/arch/arm/mach-shmobile/pm-r8a7740.c
@@ -21,6 +21,14 @@ static int r8a7740_pd_a4s_suspend(void)
 	return -EBUSY;
 }
 
+struct rmobile_pm_domain r8a7740_pd_a4s = {
+	.genpd.name = "A4S",
+	.bit_shift = 10,
+	.gov = &pm_domain_always_on_gov,
+	.no_debug = true,
+	.suspend = r8a7740_pd_a4s_suspend,
+};
+
 static int r8a7740_pd_a3sp_suspend(void)
 {
 	/*
@@ -30,31 +38,17 @@ static int r8a7740_pd_a3sp_suspend(void)
 	return console_suspend_enabled ? 0 : -EBUSY;
 }
 
-static struct rmobile_pm_domain r8a7740_pm_domains[] = {
-	{
-		.genpd.name = "A4S",
-		.bit_shift = 10,
-		.gov = &pm_domain_always_on_gov,
-		.no_debug = true,
-		.suspend = r8a7740_pd_a4s_suspend,
-	},
-	{
-		.genpd.name = "A3SP",
-		.bit_shift = 11,
-		.gov = &pm_domain_always_on_gov,
-		.no_debug = true,
-		.suspend = r8a7740_pd_a3sp_suspend,
-	},
-	{
-		.genpd.name = "A4LC",
-		.bit_shift = 1,
-	},
+struct rmobile_pm_domain r8a7740_pd_a3sp = {
+	.genpd.name = "A3SP",
+	.bit_shift = 11,
+	.gov = &pm_domain_always_on_gov,
+	.no_debug = true,
+	.suspend = r8a7740_pd_a3sp_suspend,
 };
 
-void __init r8a7740_init_pm_domains(void)
-{
-	rmobile_init_domains(r8a7740_pm_domains, ARRAY_SIZE(r8a7740_pm_domains));
-	pm_genpd_add_subdomain_names("A4S", "A3SP");
-}
+struct rmobile_pm_domain r8a7740_pd_a4lc = {
+	.genpd.name = "A4LC",
+	.bit_shift = 1,
+};
 
 #endif /* CONFIG_PM */
diff --git a/trunk/arch/arm/mach-shmobile/pm-r8a7779.c b/trunk/arch/arm/mach-shmobile/pm-r8a7779.c
index d50a8e9b94a4..a18a4ae16d2b 100644
--- a/trunk/arch/arm/mach-shmobile/pm-r8a7779.c
+++ b/trunk/arch/arm/mach-shmobile/pm-r8a7779.c
@@ -183,7 +183,7 @@ static bool pd_active_wakeup(struct device *dev)
 	return true;
 }
 
-static void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
+void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
 {
 	struct generic_pm_domain *genpd = &r8a7779_pd->genpd;
 
@@ -199,45 +199,44 @@ static void r8a7779_init_pm_domain(struct r8a7779_pm_domain *r8a7779_pd)
 	pd_power_up(&r8a7779_pd->genpd);
 }
 
-static struct r8a7779_pm_domain r8a7779_pm_domains[] = {
-	{
-		.genpd.name = "SH4A",
-		.ch = {
-			.chan_offs = 0x80, /* PWRSR1 .. PWRER1 */
-			.isr_bit = 16, /* SH4A */
-		},
-	},
-	{
-		.genpd.name = "SGX",
-		.ch = {
-			.chan_offs = 0xc0, /* PWRSR2 .. PWRER2 */
-			.isr_bit = 20, /* SGX */
-		},
-	},
-	{
-		.genpd.name = "VDP1",
-		.ch = {
-			.chan_offs = 0x100, /* PWRSR3 .. PWRER3 */
-			.isr_bit = 21, /* VDP */
-		},
-	},
-	{
-		.genpd.name = "IMPX3",
-		.ch = {
-			.chan_offs = 0x140, /* PWRSR4 .. PWRER4 */
-			.isr_bit = 24, /* IMP */
-		},
-	},
-};
-
-void __init r8a7779_init_pm_domains(void)
+void r8a7779_add_device_to_domain(struct r8a7779_pm_domain *r8a7779_pd,
+				  struct platform_device *pdev)
 {
-	int j;
+	struct device *dev = &pdev->dev;
 
-	for (j = 0; j < ARRAY_SIZE(r8a7779_pm_domains); j++)
-		r8a7779_init_pm_domain(&r8a7779_pm_domains[j]);
+	pm_genpd_add_device(&r8a7779_pd->genpd, dev);
+	if (pm_clk_no_clocks(dev))
+		pm_clk_add(dev, NULL);
 }
 
+struct r8a7779_pm_domain r8a7779_sh4a = {
+	.ch = {
+		.chan_offs = 0x80, /* PWRSR1 .. PWRER1 */
+		.isr_bit = 16, /* SH4A */
+	}
+};
+
+struct r8a7779_pm_domain r8a7779_sgx = {
+	.ch = {
+		.chan_offs = 0xc0, /* PWRSR2 .. PWRER2 */
+		.isr_bit = 20, /* SGX */
+	}
+};
+
+struct r8a7779_pm_domain r8a7779_vdp1 = {
+	.ch = {
+		.chan_offs = 0x100, /* PWRSR3 .. PWRER3 */
+		.isr_bit = 21, /* VDP */
+	}
+};
+
+struct r8a7779_pm_domain r8a7779_impx3 = {
+	.ch = {
+		.chan_offs = 0x140, /* PWRSR4 .. PWRER4 */
+		.isr_bit = 24, /* IMP */
+	}
+};
+
 #endif /* CONFIG_PM */
 
 void __init r8a7779_pm_init(void)
diff --git a/trunk/arch/arm/mach-shmobile/pm-rmobile.c b/trunk/arch/arm/mach-shmobile/pm-rmobile.c
index d37d368434da..a8562540f1d6 100644
--- a/trunk/arch/arm/mach-shmobile/pm-rmobile.c
+++ b/trunk/arch/arm/mach-shmobile/pm-rmobile.c
@@ -134,7 +134,7 @@ static int rmobile_pd_start_dev(struct device *dev)
 	return ret;
 }
 
-static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
+void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
 {
 	struct generic_pm_domain *genpd = &rmobile_pd->genpd;
 	struct dev_power_governor *gov = rmobile_pd->gov;
@@ -149,38 +149,19 @@ static void rmobile_init_pm_domain(struct rmobile_pm_domain *rmobile_pd)
 	__rmobile_pd_power_up(rmobile_pd, false);
 }
 
-void rmobile_init_domains(struct rmobile_pm_domain domains[], int num)
-{
-	int j;
-
-	for (j = 0; j < num; j++)
-		rmobile_init_pm_domain(&domains[j]);
-}
-
-void rmobile_add_device_to_domain_td(const char *domain_name,
-				     struct platform_device *pdev,
-				     struct gpd_timing_data *td)
+void rmobile_add_device_to_domain(struct rmobile_pm_domain *rmobile_pd,
+				  struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 
-	__pm_genpd_name_add_device(domain_name, dev, td);
+	pm_genpd_add_device(&rmobile_pd->genpd, dev);
 	if (pm_clk_no_clocks(dev))
 		pm_clk_add(dev, NULL);
 }
 
-void rmobile_add_devices_to_domains(struct pm_domain_device data[],
-				    int size)
+void rmobile_pm_add_subdomain(struct rmobile_pm_domain *rmobile_pd,
+			      struct rmobile_pm_domain *rmobile_sd)
 {
-	struct gpd_timing_data latencies = {
-		.stop_latency_ns = DEFAULT_DEV_LATENCY_NS,
-		.start_latency_ns = DEFAULT_DEV_LATENCY_NS,
-		.save_state_latency_ns = DEFAULT_DEV_LATENCY_NS,
-		.restore_state_latency_ns = DEFAULT_DEV_LATENCY_NS,
-	};
-	int j;
-
-	for (j = 0; j < size; j++)
-		rmobile_add_device_to_domain_td(data[j].domain_name,
-						data[j].pdev, &latencies);
+	pm_genpd_add_subdomain(&rmobile_pd->genpd, &rmobile_sd->genpd);
 }
 
 #endif /* CONFIG_PM */
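Taken together, the pointer-based interface restored here is used by the SoC setup code later in this patch; condensed into one place, the wiring pattern looks like this (a sketch reusing the r8a7740 identifiers declared above, with scif0_device standing in for any platform device):

	/* Condensed from the setup-*.c changes below: bring up two
	 * domains, nest one inside the other, then bind a device.
	 */
	static void __init example_pm_domain_wiring(void)
	{
		rmobile_init_pm_domain(&r8a7740_pd_a4s);
		rmobile_init_pm_domain(&r8a7740_pd_a3sp);

		/* A3SP is powered inside A4S */
		rmobile_pm_add_subdomain(&r8a7740_pd_a4s, &r8a7740_pd_a3sp);

		/* scif0 gets its PM clock and runtime PM handled by A3SP */
		rmobile_add_device_to_domain(&r8a7740_pd_a3sp, &scif0_device);
	}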
diff --git a/trunk/arch/arm/mach-shmobile/pm-sh7372.c b/trunk/arch/arm/mach-shmobile/pm-sh7372.c
index a7a5e20ae9a0..792037069226 100644
--- a/trunk/arch/arm/mach-shmobile/pm-sh7372.c
+++ b/trunk/arch/arm/mach-shmobile/pm-sh7372.c
@@ -21,7 +21,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -72,7 +71,20 @@
 
 #ifdef CONFIG_PM
 
-#define PM_DOMAIN_ON_OFF_LATENCY_NS	250000
+struct rmobile_pm_domain sh7372_pd_a4lc = {
+	.genpd.name = "A4LC",
+	.bit_shift = 1,
+};
+
+struct rmobile_pm_domain sh7372_pd_a4mp = {
+	.genpd.name = "A4MP",
+	.bit_shift = 2,
+};
+
+struct rmobile_pm_domain sh7372_pd_d4 = {
+	.genpd.name = "D4",
+	.bit_shift = 3,
+};
 
 static int sh7372_a4r_pd_suspend(void)
 {
@@ -81,25 +93,39 @@ static int sh7372_a4r_pd_suspend(void)
 	return 0;
 }
 
-static bool a4s_suspend_ready;
+struct rmobile_pm_domain sh7372_pd_a4r = {
+	.genpd.name = "A4R",
+	.bit_shift = 5,
+	.suspend = sh7372_a4r_pd_suspend,
+	.resume = sh7372_intcs_resume,
+};
 
-static int sh7372_a4s_pd_suspend(void)
+struct rmobile_pm_domain sh7372_pd_a3rv = {
+	.genpd.name = "A3RV",
+	.bit_shift = 6,
+};
+
+struct rmobile_pm_domain sh7372_pd_a3ri = {
+	.genpd.name = "A3RI",
+	.bit_shift = 8,
+};
+
+static int sh7372_pd_a4s_suspend(void)
 {
 	/*
 	 * The A4S domain contains the CPU core and therefore it should
-	 * only be turned off if the CPU is not in use.  This may happen
-	 * during system suspend, when SYSC is going to be used for generating
-	 * resume signals and a4s_suspend_ready is set to let
-	 * sh7372_enter_suspend() know that it can turn A4S off.
+	 * only be turned off if the CPU is not in use.
 	 */
-	a4s_suspend_ready = true;
 	return -EBUSY;
 }
 
-static void sh7372_a4s_pd_resume(void)
-{
-	a4s_suspend_ready = false;
-}
+struct rmobile_pm_domain sh7372_pd_a4s = {
+	.genpd.name = "A4S",
+	.bit_shift = 10,
+	.gov = &pm_domain_always_on_gov,
+	.no_debug = true,
+	.suspend = sh7372_pd_a4s_suspend,
+};
 
 static int sh7372_a3sp_pd_suspend(void)
 {
@@ -110,80 +136,18 @@ static int sh7372_a3sp_pd_suspend(void)
 	return console_suspend_enabled ? 0 : -EBUSY;
 }
 
-static struct rmobile_pm_domain sh7372_pm_domains[] = {
-	{
-		.genpd.name = "A4LC",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 1,
-	},
-	{
-		.genpd.name = "A4MP",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 2,
-	},
-	{
-		.genpd.name = "D4",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 3,
-	},
-	{
-		.genpd.name = "A4R",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 5,
-		.suspend = sh7372_a4r_pd_suspend,
-		.resume = sh7372_intcs_resume,
-	},
-	{
-		.genpd.name = "A3RV",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 6,
-	},
-	{
-		.genpd.name = "A3RI",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 8,
-	},
-	{
-		.genpd.name = "A4S",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 10,
-		.gov = &pm_domain_always_on_gov,
-		.no_debug = true,
-		.suspend = sh7372_a4s_pd_suspend,
-		.resume = sh7372_a4s_pd_resume,
-	},
-	{
-		.genpd.name = "A3SP",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 11,
-		.gov = &pm_domain_always_on_gov,
-		.no_debug = true,
-		.suspend = sh7372_a3sp_pd_suspend,
-	},
-	{
-		.genpd.name = "A3SG",
-		.genpd.power_on_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.genpd.power_off_latency_ns = PM_DOMAIN_ON_OFF_LATENCY_NS,
-		.bit_shift = 13,
-	},
+struct rmobile_pm_domain sh7372_pd_a3sp = {
+	.genpd.name = "A3SP",
+	.bit_shift = 11,
+	.gov = &pm_domain_always_on_gov,
+	.no_debug = true,
+	.suspend = sh7372_a3sp_pd_suspend,
 };
 
-void __init sh7372_init_pm_domains(void)
-{
-	rmobile_init_domains(sh7372_pm_domains, ARRAY_SIZE(sh7372_pm_domains));
-	pm_genpd_add_subdomain_names("A4LC", "A3RV");
-	pm_genpd_add_subdomain_names("A4R", "A4LC");
-	pm_genpd_add_subdomain_names("A4S", "A3SG");
-	pm_genpd_add_subdomain_names("A4S", "A3SP");
-}
+struct rmobile_pm_domain sh7372_pd_a3sg = {
+	.genpd.name = "A3SG",
+	.bit_shift = 13,
+};
 
 #endif /* CONFIG_PM */
 
@@ -339,21 +303,6 @@ static void sh7372_enter_a3sm_common(int pllc0_on)
 	sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));
 	sh7372_enter_sysc(pllc0_on, 1 << 12);
 }
-
-static void sh7372_enter_a4s_common(int pllc0_on)
-{
-	sh7372_intca_suspend();
-	sh7372_set_reset_vector(SMFRAM);
-	sh7372_enter_sysc(pllc0_on, 1 << 10);
-	sh7372_intca_resume();
-}
-
-static void sh7372_pm_setup_smfram(void)
-{
-	memcpy((void *)SMFRAM, sh7372_resume_core_standby_sysc, 0x100);
-}
-#else
-static inline void sh7372_pm_setup_smfram(void) {}
 #endif /* CONFIG_SUSPEND || CONFIG_CPU_IDLE */
 
 #ifdef CONFIG_CPU_IDLE
@@ -363,8 +312,7 @@ static int sh7372_do_idle_core_standby(unsigned long unused)
 	return 0;
 }
 
-static int sh7372_enter_core_standby(struct cpuidle_device *dev,
-				     struct cpuidle_driver *drv, int index)
+static void sh7372_enter_core_standby(void)
 {
 	sh7372_set_reset_vector(__pa(sh7372_resume_core_standby_sysc));
 
@@ -375,102 +323,83 @@ static int sh7372_enter_core_standby(struct cpuidle_device *dev,
 
 	/* disable reset vector translation */
 	__raw_writel(0, SBAR);
-
-	return 1;
 }
 
-static int sh7372_enter_a3sm_pll_on(struct cpuidle_device *dev,
-				    struct cpuidle_driver *drv, int index)
+static void sh7372_enter_a3sm_pll_on(void)
 {
 	sh7372_enter_a3sm_common(1);
-	return 2;
 }
 
-static int sh7372_enter_a3sm_pll_off(struct cpuidle_device *dev,
-				     struct cpuidle_driver *drv, int index)
+static void sh7372_enter_a3sm_pll_off(void)
 {
 	sh7372_enter_a3sm_common(0);
-	return 3;
 }
 
-static int sh7372_enter_a4s(struct cpuidle_device *dev,
-			    struct cpuidle_driver *drv, int index)
+static void sh7372_cpuidle_setup(struct cpuidle_driver *drv)
 {
-	unsigned long msk, msk2;
-
-	if (!sh7372_sysc_valid(&msk, &msk2))
-		return sh7372_enter_a3sm_pll_off(dev, drv, index);
-
-	sh7372_setup_sysc(msk, msk2);
-	sh7372_enter_a4s_common(0);
-	return 4;
+	struct cpuidle_state *state = &drv->states[drv->state_count];
+
+	snprintf(state->name, CPUIDLE_NAME_LEN, "C2");
+	strncpy(state->desc, "Core Standby Mode", CPUIDLE_DESC_LEN);
+	state->exit_latency = 10;
+	state->target_residency = 20 + 10;
+	state->flags = CPUIDLE_FLAG_TIME_VALID;
+	shmobile_cpuidle_modes[drv->state_count] = sh7372_enter_core_standby;
+	drv->state_count++;
+
+	state = &drv->states[drv->state_count];
+	snprintf(state->name, CPUIDLE_NAME_LEN, "C3");
+	strncpy(state->desc, "A3SM PLL ON", CPUIDLE_DESC_LEN);
+	state->exit_latency = 20;
+	state->target_residency = 30 + 20;
+	state->flags = CPUIDLE_FLAG_TIME_VALID;
+	shmobile_cpuidle_modes[drv->state_count] = sh7372_enter_a3sm_pll_on;
+	drv->state_count++;
+
+	state = &drv->states[drv->state_count];
+	snprintf(state->name, CPUIDLE_NAME_LEN, "C4");
+	strncpy(state->desc, "A3SM PLL OFF", CPUIDLE_DESC_LEN);
+	state->exit_latency = 120;
+	state->target_residency = 30 + 120;
+	state->flags = CPUIDLE_FLAG_TIME_VALID;
+	shmobile_cpuidle_modes[drv->state_count] = sh7372_enter_a3sm_pll_off;
+	drv->state_count++;
 }
 
-static struct cpuidle_driver sh7372_cpuidle_driver = {
-	.name			= "sh7372_cpuidle",
-	.owner			= THIS_MODULE,
-	.en_core_tk_irqen	= 1,
-	.state_count		= 5,
-	.safe_state_index	= 0, /* C1 */
-	.states[0] = ARM_CPUIDLE_WFI_STATE,
-	.states[0].enter = shmobile_enter_wfi,
-	.states[1] = {
-		.name = "C2",
-		.desc = "Core Standby Mode",
-		.exit_latency = 10,
-		.target_residency = 20 + 10,
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.enter = sh7372_enter_core_standby,
-	},
-	.states[2] = {
-		.name = "C3",
-		.desc = "A3SM PLL ON",
-		.exit_latency = 20,
-		.target_residency = 30 + 20,
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.enter = sh7372_enter_a3sm_pll_on,
-	},
-	.states[3] = {
-		.name = "C4",
-		.desc = "A3SM PLL OFF",
-		.exit_latency = 120,
-		.target_residency = 30 + 120,
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.enter = sh7372_enter_a3sm_pll_off,
-	},
-	.states[4] = {
-		.name = "C5",
-		.desc = "A4S PLL OFF",
-		.exit_latency = 240,
-		.target_residency = 30 + 240,
-		.flags = CPUIDLE_FLAG_TIME_VALID,
-		.enter = sh7372_enter_a4s,
-		.disabled = true,
-	},
-};
-
 static void sh7372_cpuidle_init(void)
 {
-	shmobile_cpuidle_set_driver(&sh7372_cpuidle_driver);
+	shmobile_cpuidle_setup = sh7372_cpuidle_setup;
 }
 #else
 static void sh7372_cpuidle_init(void) {}
 #endif
 
 #ifdef CONFIG_SUSPEND
+static void sh7372_enter_a4s_common(int pllc0_on)
+{
+	sh7372_intca_suspend();
+	memcpy((void *)SMFRAM, sh7372_resume_core_standby_sysc, 0x100);
+	sh7372_set_reset_vector(SMFRAM);
+	sh7372_enter_sysc(pllc0_on, 1 << 10);
+	sh7372_intca_resume();
+}
+
 static int sh7372_enter_suspend(suspend_state_t suspend_state)
 {
 	unsigned long msk, msk2;
 
 	/* check active clocks to determine potential wakeup sources */
-	if (sh7372_sysc_valid(&msk, &msk2) && a4s_suspend_ready) {
-		/* convert INTC mask/sense to SYSC mask/sense */
-		sh7372_setup_sysc(msk, msk2);
-
-		/* enter A4S sleep with PLLC0 off */
-		pr_debug("entering A4S\n");
-		sh7372_enter_a4s_common(0);
-		return 0;
+	if (sh7372_sysc_valid(&msk, &msk2)) {
+		if (!console_suspend_enabled &&
+		    sh7372_pd_a4s.genpd.status == GPD_STATE_POWER_OFF) {
+			/* convert INTC mask/sense to SYSC mask/sense */
+			sh7372_setup_sysc(msk, msk2);

+			/* enter A4S sleep with PLLC0 off */
+			pr_debug("entering A4S\n");
+			sh7372_enter_a4s_common(0);
+			return 0;
+		}
 	}
 
 	/* default to enter A3SM sleep with PLLC0 off */
@@ -496,7 +425,7 @@ static int sh7372_pm_notifier_fn(struct notifier_block *notifier,
 		 * executed during system suspend and resume, respectively, so
 		 * that those functions don't crash while accessing the INTCS.
		 */
-		pm_genpd_name_poweron("A4R");
+		pm_genpd_poweron(&sh7372_pd_a4r.genpd);
 		break;
 	case PM_POST_SUSPEND:
 		pm_genpd_poweroff_unused();
@@ -525,14 +454,6 @@ void __init sh7372_pm_init(void)
 	/* do not convert A3SM, A3SP, A3SG, A4R power down into A4S */
 	__raw_writel(0, PDNSEL);
 
-	sh7372_pm_setup_smfram();
-
 	sh7372_suspend_init();
 	sh7372_cpuidle_init();
 }
-
-void __init sh7372_pm_init_late(void)
-{
-	shmobile_init_late();
-	pm_genpd_name_attach_cpuidle("A4S", 4);
-}
diff --git a/trunk/arch/arm/mach-shmobile/setup-r8a7740.c b/trunk/arch/arm/mach-shmobile/setup-r8a7740.c
index 11bb1d984197..78948a9dba0e 100644
--- a/trunk/arch/arm/mach-shmobile/setup-r8a7740.c
+++ b/trunk/arch/arm/mach-shmobile/setup-r8a7740.c
@@ -673,7 +673,12 @@ void __init r8a7740_add_standard_devices(void)
 	r8a7740_i2c_workaround(&i2c0_device);
 	r8a7740_i2c_workaround(&i2c1_device);
 
-	r8a7740_init_pm_domains();
+	/* PM domain */
+	rmobile_init_pm_domain(&r8a7740_pd_a4s);
+	rmobile_init_pm_domain(&r8a7740_pd_a3sp);
+	rmobile_init_pm_domain(&r8a7740_pd_a4lc);
+
+	rmobile_pm_add_subdomain(&r8a7740_pd_a4s, &r8a7740_pd_a3sp);
 
 	/* add devices */
 	platform_add_devices(r8a7740_early_devices,
@@ -683,16 +688,16 @@ void __init r8a7740_add_standard_devices(void)
 
 	/* add devices to PM domain  */
 
-	rmobile_add_device_to_domain("A3SP",	&scif0_device);
-	rmobile_add_device_to_domain("A3SP",	&scif1_device);
-	rmobile_add_device_to_domain("A3SP",	&scif2_device);
-	rmobile_add_device_to_domain("A3SP",	&scif3_device);
-	rmobile_add_device_to_domain("A3SP",	&scif4_device);
-	rmobile_add_device_to_domain("A3SP",	&scif5_device);
-	rmobile_add_device_to_domain("A3SP",	&scif6_device);
-	rmobile_add_device_to_domain("A3SP",	&scif7_device);
-	rmobile_add_device_to_domain("A3SP",	&scifb_device);
-	rmobile_add_device_to_domain("A3SP",	&i2c1_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif0_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif1_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif2_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif3_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif4_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif5_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif6_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scif7_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&scifb_device);
+	rmobile_add_device_to_domain(&r8a7740_pd_a3sp,	&i2c1_device);
 }
 
 static void __init r8a7740_earlytimer_init(void)
diff --git a/trunk/arch/arm/mach-shmobile/setup-r8a7779.c b/trunk/arch/arm/mach-shmobile/setup-r8a7779.c
index 2917668f0091..e98e46f6cf55 100644
--- a/trunk/arch/arm/mach-shmobile/setup-r8a7779.c
+++ b/trunk/arch/arm/mach-shmobile/setup-r8a7779.c
@@ -251,7 +251,10 @@ void __init r8a7779_add_standard_devices(void)
 #endif
 	r8a7779_pm_init();
 
-	r8a7779_init_pm_domains();
+	r8a7779_init_pm_domain(&r8a7779_sh4a);
+	r8a7779_init_pm_domain(&r8a7779_sgx);
+	r8a7779_init_pm_domain(&r8a7779_vdp1);
+	r8a7779_init_pm_domain(&r8a7779_impx3);
 
 	platform_add_devices(r8a7779_early_devices,
 			    ARRAY_SIZE(r8a7779_early_devices));
diff --git a/trunk/arch/arm/mach-shmobile/setup-sh7372.c b/trunk/arch/arm/mach-shmobile/setup-sh7372.c
index a07954fbcd22..838a87be1d5c 100644
--- a/trunk/arch/arm/mach-shmobile/setup-sh7372.c
+++ b/trunk/arch/arm/mach-shmobile/setup-sh7372.c
@@ -1001,34 +1001,21 @@ static struct platform_device *sh7372_late_devices[] __initdata = {
 
 void __init sh7372_add_standard_devices(void)
 {
-	struct pm_domain_device domain_devices[] = {
-		{ "A3RV", &vpu_device, },
-		{ "A4MP", &spu0_device, },
-		{ "A4MP", &spu1_device, },
-		{ "A3SP", &scif0_device, },
-		{ "A3SP", &scif1_device, },
-		{ "A3SP", &scif2_device, },
-		{ "A3SP", &scif3_device, },
-		{ "A3SP", &scif4_device, },
-		{ "A3SP", &scif5_device, },
-		{ "A3SP", &scif6_device, },
-		{ "A3SP", &iic1_device, },
-		{ "A3SP", &dma0_device, },
-		{ "A3SP", &dma1_device, },
-		{ "A3SP", &dma2_device, },
-		{ "A3SP", &usb_dma0_device, },
-		{ "A3SP", &usb_dma1_device, },
-		{ "A4R", &iic0_device, },
-		{ "A4R", &veu0_device, },
-		{ "A4R", &veu1_device, },
-		{ "A4R", &veu2_device, },
-		{ "A4R", &veu3_device, },
-		{ "A4R", &jpu_device, },
-		{ "A4R", &tmu00_device, },
-		{ "A4R", &tmu01_device, },
-	};
-
-	sh7372_init_pm_domains();
+	rmobile_init_pm_domain(&sh7372_pd_a4lc);
+	rmobile_init_pm_domain(&sh7372_pd_a4mp);
+	rmobile_init_pm_domain(&sh7372_pd_d4);
+	rmobile_init_pm_domain(&sh7372_pd_a4r);
+	rmobile_init_pm_domain(&sh7372_pd_a3rv);
+	rmobile_init_pm_domain(&sh7372_pd_a3ri);
+	rmobile_init_pm_domain(&sh7372_pd_a4s);
+	rmobile_init_pm_domain(&sh7372_pd_a3sp);
+	rmobile_init_pm_domain(&sh7372_pd_a3sg);
+
+	rmobile_pm_add_subdomain(&sh7372_pd_a4lc, &sh7372_pd_a3rv);
+	rmobile_pm_add_subdomain(&sh7372_pd_a4r, &sh7372_pd_a4lc);
+
+	rmobile_pm_add_subdomain(&sh7372_pd_a4s, &sh7372_pd_a3sg);
+	rmobile_pm_add_subdomain(&sh7372_pd_a4s, &sh7372_pd_a3sp);
 
 	platform_add_devices(sh7372_early_devices,
 			    ARRAY_SIZE(sh7372_early_devices));
@@ -1036,8 +1023,30 @@ void __init sh7372_add_standard_devices(void)
 	platform_add_devices(sh7372_late_devices,
 			    ARRAY_SIZE(sh7372_late_devices));
 
-	rmobile_add_devices_to_domains(domain_devices,
-				       ARRAY_SIZE(domain_devices));
+	rmobile_add_device_to_domain(&sh7372_pd_a3rv, &vpu_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &spu0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4mp, &spu1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif2_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif3_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif4_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif5_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &scif6_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &iic1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &dma2_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usb_dma0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a3sp, &usb_dma1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &iic0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu0_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu1_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu2_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &veu3_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &jpu_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &tmu00_device);
+	rmobile_add_device_to_domain(&sh7372_pd_a4r, &tmu01_device);
 }
 
 static void __init sh7372_earlytimer_init(void)
diff --git a/trunk/arch/x86/include/asm/msr-index.h b/trunk/arch/x86/include/asm/msr-index.h
index fbee9714d9ab..957ec87385af 100644
--- a/trunk/arch/x86/include/asm/msr-index.h
+++ b/trunk/arch/x86/include/asm/msr-index.h
@@ -248,9 +248,6 @@
 
 #define MSR_IA32_PERF_STATUS		0x00000198
 #define MSR_IA32_PERF_CTL		0x00000199
-#define MSR_AMD_PSTATE_DEF_BASE		0xc0010064
-#define MSR_AMD_PERF_STATUS		0xc0010063
-#define MSR_AMD_PERF_CTL		0xc0010062
 
 #define MSR_IA32_MPERF			0x000000e7
 #define MSR_IA32_APERF			0x000000e8
diff --git a/trunk/drivers/acpi/processor_driver.c b/trunk/drivers/acpi/processor_driver.c
index e78c2a52ea46..bfc31cb0dd3e 100644
--- a/trunk/drivers/acpi/processor_driver.c
+++ b/trunk/drivers/acpi/processor_driver.c
@@ -475,7 +475,7 @@ static __ref int acpi_processor_start(struct acpi_processor *pr)
 	acpi_processor_get_limit_info(pr);
 
 	if (!cpuidle_get_driver() || cpuidle_get_driver() == &acpi_idle_driver)
-		acpi_processor_power_init(pr);
+		acpi_processor_power_init(pr, device);
 
 	pr->cdev = thermal_cooling_device_register("Processor", device,
 						   &processor_cooling_ops);
@@ -509,7 +509,7 @@ static __ref int acpi_processor_start(struct acpi_processor *pr)
 err_thermal_unregister:
 	thermal_cooling_device_unregister(pr->cdev);
 err_power_exit:
-	acpi_processor_power_exit(pr);
+	acpi_processor_power_exit(pr, device);
 
 	return result;
 }
@@ -620,7 +620,7 @@ static int acpi_processor_remove(struct acpi_device *device, int type)
 		return -EINVAL;
 	}
 
-	acpi_processor_power_exit(pr);
+	acpi_processor_power_exit(pr, device);
 
 	sysfs_remove_link(&device->dev.kobj, "sysdev");
 
@@ -905,6 +905,8 @@ static int __init acpi_processor_init(void)
 	if (acpi_disabled)
 		return 0;
 
+	memset(&errata, 0, sizeof(errata));
+
 	result = acpi_bus_register_driver(&acpi_processor_driver);
 	if (result < 0)
 		return result;
diff --git a/trunk/drivers/acpi/processor_idle.c b/trunk/drivers/acpi/processor_idle.c
index 3655ab923812..ad3730b4038b 100644
--- a/trunk/drivers/acpi/processor_idle.c
+++ b/trunk/drivers/acpi/processor_idle.c
@@ -79,8 +79,6 @@ module_param(bm_check_disable, uint, 0000);
 static unsigned int latency_factor __read_mostly = 2;
 module_param(latency_factor, uint, 0644);
 
-static DEFINE_PER_CPU(struct cpuidle_device *, acpi_cpuidle_device);
-
 static int disabled_by_idle_boot_param(void)
 {
 	return boot_option_idle_override == IDLE_POLL ||
@@ -485,6 +483,8 @@ static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
 		if (obj->type != ACPI_TYPE_INTEGER)
 			continue;
 
+		cx.power = obj->integer.value;
+
 		current_count++;
 		memcpy(&(pr->power.states[current_count]), &cx, sizeof(cx));
 
@@ -1000,7 +1000,7 @@ static int acpi_processor_setup_cpuidle_cx(struct acpi_processor *pr)
 	int i, count = CPUIDLE_DRIVER_STATE_START;
 	struct acpi_processor_cx *cx;
 	struct cpuidle_state_usage *state_usage;
-	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
+	struct cpuidle_device *dev = &pr->power.dev;
 
 	if (!pr->flags.power_setup_done)
 		return -EINVAL;
@@ -1132,7 +1132,6 @@ static int acpi_processor_setup_cpuidle_states(struct acpi_processor *pr)
 int acpi_processor_hotplug(struct acpi_processor *pr)
 {
 	int ret = 0;
-	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
 
 	if (disabled_by_idle_boot_param())
 		return 0;
@@ -1148,11 +1147,11 @@ int acpi_processor_hotplug(struct acpi_processor *pr)
 		return -ENODEV;
 
 	cpuidle_pause_and_lock();
-	cpuidle_disable_device(dev);
+	cpuidle_disable_device(&pr->power.dev);
 	acpi_processor_get_power_info(pr);
 	if (pr->flags.power) {
 		acpi_processor_setup_cpuidle_cx(pr);
-		ret = cpuidle_enable_device(dev);
+		ret = cpuidle_enable_device(&pr->power.dev);
 	}
 	cpuidle_resume_and_unlock();
 
@@ -1163,7 +1162,6 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 {
 	int cpu;
 	struct acpi_processor *_pr;
-	struct cpuidle_device *dev;
 
 	if (disabled_by_idle_boot_param())
 		return 0;
@@ -1194,8 +1192,7 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 		_pr = per_cpu(processors, cpu);
 		if (!_pr || !_pr->flags.power_setup_done)
 			continue;
-		dev = per_cpu(acpi_cpuidle_device, cpu);
-		cpuidle_disable_device(dev);
+		cpuidle_disable_device(&_pr->power.dev);
 	}
 
 	/* Populate Updated C-state information */
@@ -1209,8 +1206,7 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 		acpi_processor_get_power_info(_pr);
 		if (_pr->flags.power) {
 			acpi_processor_setup_cpuidle_cx(_pr);
-			dev = per_cpu(acpi_cpuidle_device, cpu);
-			cpuidle_enable_device(dev);
+			cpuidle_enable_device(&_pr->power.dev);
 		}
 	}
 	put_online_cpus();
@@ -1222,11 +1218,11 @@ int acpi_processor_cst_has_changed(struct acpi_processor *pr)
 
 static int acpi_processor_registered;
 
-int __cpuinit acpi_processor_power_init(struct acpi_processor *pr)
+int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
+					struct acpi_device *device)
 {
 	acpi_status status = 0;
 	int retval;
-	struct cpuidle_device *dev;
 	static int first_run;
 
 	if (disabled_by_idle_boot_param())
@@ -1272,18 +1268,11 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr)
 			printk(KERN_DEBUG "ACPI: %s registered with cpuidle\n",
 			       acpi_idle_driver.name);
 	}
-
-	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
-	if (!dev)
-		return -ENOMEM;
-	per_cpu(acpi_cpuidle_device, pr->id) = dev;
-
-	acpi_processor_setup_cpuidle_cx(pr);
-
 	/* Register per-cpu cpuidle_device. Cpuidle driver
 	 * must already be registered before registering device
 	 */
-	retval = cpuidle_register_device(dev);
+	acpi_processor_setup_cpuidle_cx(pr);
+	retval = cpuidle_register_device(&pr->power.dev);
 	if (retval) {
 		if (acpi_processor_registered == 0)
 			cpuidle_unregister_driver(&acpi_idle_driver);
@@ -1294,15 +1283,14 @@ int __cpuinit acpi_processor_power_init(struct acpi_processor *pr)
 	return 0;
 }
 
-int acpi_processor_power_exit(struct acpi_processor *pr)
+int acpi_processor_power_exit(struct acpi_processor *pr,
+			      struct acpi_device *device)
 {
-	struct cpuidle_device *dev = per_cpu(acpi_cpuidle_device, pr->id);
-
 	if (disabled_by_idle_boot_param())
 		return 0;
 
 	if (pr->flags.power) {
-		cpuidle_unregister_device(dev);
+		cpuidle_unregister_device(&pr->power.dev);
 		acpi_processor_registered--;
 		if (acpi_processor_registered == 0)
 			cpuidle_unregister_driver(&acpi_idle_driver);
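For contrast, the '-' lines above removed a per-CPU allocation pattern: one dynamically allocated cpuidle device per processor, reachable again later via its CPU number. In isolation the pattern looks like this (a sketch with a stand-in type, not the driver's code):

	#include <linux/percpu.h>
	#include <linux/slab.h>

	/* my_device is a stand-in; the driver used struct cpuidle_device. */
	struct my_device { int id; };

	static DEFINE_PER_CPU(struct my_device *, my_cpu_device);

	static int my_device_init(int cpu)
	{
		struct my_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

		if (!dev)
			return -ENOMEM;
		dev->id = cpu;
		per_cpu(my_cpu_device, cpu) = dev;
		return 0;
	}

The '+' side instead goes back to embedding the device in struct acpi_processor (pr->power.dev), trading the extra allocation for a larger processor structure.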
diff --git a/trunk/drivers/acpi/processor_perflib.c b/trunk/drivers/acpi/processor_perflib.c
index 836bfe069042..a093dc163a42 100644
--- a/trunk/drivers/acpi/processor_perflib.c
+++ b/trunk/drivers/acpi/processor_perflib.c
@@ -324,34 +324,6 @@ static int acpi_processor_get_performance_control(struct acpi_processor *pr)
 	return result;
 }
 
-#ifdef CONFIG_X86
-/*
- * Some AMDs have 50MHz frequency multiples, but only provide 100MHz rounding
- * in their ACPI data. Calculate the real values and fix up the _PSS data.
- */
-static void amd_fixup_frequency(struct acpi_processor_px *px, int i)
-{
-	u32 hi, lo, fid, did;
-	int index = px->control & 0x00000007;
-
-	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
-		return;
-
-	if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
-	    || boot_cpu_data.x86 == 0x11) {
-		rdmsr(MSR_AMD_PSTATE_DEF_BASE + index, lo, hi);
-		fid = lo & 0x3f;
-		did = (lo >> 6) & 7;
-		if (boot_cpu_data.x86 == 0x10)
-			px->core_frequency = (100 * (fid + 0x10)) >> did;
-		else
-			px->core_frequency = (100 * (fid + 8)) >> did;
-	}
-}
-#else
-static void amd_fixup_frequency(struct acpi_processor_px *px, int i) {};
-#endif
-
 static int acpi_processor_get_performance_states(struct acpi_processor *pr)
 {
 	int result = 0;
@@ -407,8 +379,6 @@ static int acpi_processor_get_performance_states(struct acpi_processor *pr)
 			goto end;
 		}
 
-		amd_fixup_frequency(px, i);
-
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 				  "State [%d]: core_frequency[%d] power[%d] transition_latency[%d] bus_master_latency[%d] control[0x%x] status[0x%x]\n",
 				  i,
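The removed fixup decodes AMD's P-state definition MSR: bits 5:0 are the frequency ID (fid), bits 8:6 the divisor ID (did), and on family 0x10 the core clock is 100 MHz * (fid + 0x10) shifted right by did. A worked sketch of that arithmetic (the helper name is illustrative, not kernel code):

	/* Family-0x10 example: fid = 10, did = 1
	 *   (100 * (10 + 0x10)) >> 1 = 2600 >> 1 = 1300 MHz,
	 * a 50 MHz multiple that _PSS alone (100 MHz rounding)
	 * cannot express — hence the fixup.
	 */
	static unsigned int amd_fam10h_core_mhz(unsigned int fid,
						unsigned int did)
	{
		return (100 * (fid + 0x10)) >> did;
	}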
list_for_each_entry(gpd, &gpd_list, gpd_list_node) { - if (!strcmp(gpd->name, domain_name)) { - genpd = gpd; - break; - } - } - mutex_unlock(&gpd_list_lock); - return genpd; -} - #ifdef CONFIG_PM struct generic_pm_domain *dev_to_genpd(struct device *dev) @@ -274,28 +256,10 @@ int pm_genpd_poweron(struct generic_pm_domain *genpd) return ret; } -/** - * pm_genpd_name_poweron - Restore power to a given PM domain and its masters. - * @domain_name: Name of the PM domain to power up. - */ -int pm_genpd_name_poweron(const char *domain_name) -{ - struct generic_pm_domain *genpd; - - genpd = pm_genpd_lookup_name(domain_name); - return genpd ? pm_genpd_poweron(genpd) : -EINVAL; -} - #endif /* CONFIG_PM */ #ifdef CONFIG_PM_RUNTIME -static int genpd_start_dev_no_timing(struct generic_pm_domain *genpd, - struct device *dev) -{ - return GENPD_DEV_CALLBACK(genpd, int, start, dev); -} - static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev) { return GENPD_DEV_TIMED_CALLBACK(genpd, int, save_state, dev, @@ -472,7 +436,7 @@ static int pm_genpd_poweroff(struct generic_pm_domain *genpd) not_suspended = 0; list_for_each_entry(pdd, &genpd->dev_list, list_node) if (pdd->dev->driver && (!pm_runtime_suspended(pdd->dev) - || pdd->dev->power.irq_safe)) + || pdd->dev->power.irq_safe || to_gpd_data(pdd)->always_on)) not_suspended++; if (not_suspended > genpd->in_progress) @@ -614,6 +578,9 @@ static int pm_genpd_runtime_suspend(struct device *dev) might_sleep_if(!genpd->dev_irq_safe); + if (dev_gpd_data(dev)->always_on) + return -EBUSY; + stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL; if (stop_ok && !stop_ok(dev)) return -EBUSY; @@ -662,7 +629,7 @@ static int pm_genpd_runtime_resume(struct device *dev) /* If power.irq_safe, the PM domain is never powered off. */ if (dev->power.irq_safe) - return genpd_start_dev_no_timing(genpd, dev); + return genpd_start_dev(genpd, dev); mutex_lock(&genpd->lock); ret = __pm_genpd_poweron(genpd); @@ -730,24 +697,6 @@ static inline void genpd_power_off_work_fn(struct work_struct *work) {} #ifdef CONFIG_PM_SLEEP -/** - * pm_genpd_present - Check if the given PM domain has been initialized. - * @genpd: PM domain to check. - */ -static bool pm_genpd_present(struct generic_pm_domain *genpd) -{ - struct generic_pm_domain *gpd; - - if (IS_ERR_OR_NULL(genpd)) - return false; - - list_for_each_entry(gpd, &gpd_list, gpd_list_node) - if (gpd == genpd) - return true; - - return false; -} - static bool genpd_dev_active_wakeup(struct generic_pm_domain *genpd, struct device *dev) { @@ -801,10 +750,9 @@ static int genpd_thaw_dev(struct generic_pm_domain *genpd, struct device *dev) * Check if the given PM domain can be powered off (during system suspend or * hibernation) and do that if so. Also, in that case propagate to its masters. * - * This function is only called in "noirq" and "syscore" stages of system power - * transitions, so it need not acquire locks (all of the "noirq" callbacks are - * executed sequentially, so it is guaranteed that it will never run twice in - * parallel). + * This function is only called in "noirq" stages of system power transitions, + * so it need not acquire locks (all of the "noirq" callbacks are executed + * sequentially, so it is guaranteed that it will never run twice in parallel). */ static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd) { @@ -828,33 +776,6 @@ static void pm_genpd_sync_poweroff(struct generic_pm_domain *genpd) } } -/** - * pm_genpd_sync_poweron - Synchronously power on a PM domain and its masters. 
- * @genpd: PM domain to power on. - * - * This function is only called in "noirq" and "syscore" stages of system power - * transitions, so it need not acquire locks (all of the "noirq" callbacks are - * executed sequentially, so it is guaranteed that it will never run twice in - * parallel). - */ -static void pm_genpd_sync_poweron(struct generic_pm_domain *genpd) -{ - struct gpd_link *link; - - if (genpd->status != GPD_STATE_POWER_OFF) - return; - - list_for_each_entry(link, &genpd->slave_links, slave_node) { - pm_genpd_sync_poweron(link->master); - genpd_sd_counter_inc(link->master); - } - - if (genpd->power_on) - genpd->power_on(genpd); - - genpd->status = GPD_STATE_ACTIVE; -} - /** * resume_needed - Check whether to resume a device before system suspend. * @dev: Device to check. @@ -1016,7 +937,7 @@ static int pm_genpd_suspend_noirq(struct device *dev) if (IS_ERR(genpd)) return -EINVAL; - if (genpd->suspend_power_off + if (genpd->suspend_power_off || dev_gpd_data(dev)->always_on || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))) return 0; @@ -1049,7 +970,7 @@ static int pm_genpd_resume_noirq(struct device *dev) if (IS_ERR(genpd)) return -EINVAL; - if (genpd->suspend_power_off + if (genpd->suspend_power_off || dev_gpd_data(dev)->always_on || (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))) return 0; @@ -1058,7 +979,7 @@ static int pm_genpd_resume_noirq(struct device *dev) * guaranteed that this function will never run twice in parallel for * the same PM domain, so it is not necessary to use locking here. */ - pm_genpd_sync_poweron(genpd); + pm_genpd_poweron(genpd); genpd->suspended_count--; return genpd_start_dev(genpd, dev); @@ -1169,7 +1090,8 @@ static int pm_genpd_freeze_noirq(struct device *dev) if (IS_ERR(genpd)) return -EINVAL; - return genpd->suspend_power_off ? 0 : genpd_stop_dev(genpd, dev); + return genpd->suspend_power_off || dev_gpd_data(dev)->always_on ? + 0 : genpd_stop_dev(genpd, dev); } /** @@ -1189,7 +1111,8 @@ static int pm_genpd_thaw_noirq(struct device *dev) if (IS_ERR(genpd)) return -EINVAL; - return genpd->suspend_power_off ? 0 : genpd_start_dev(genpd, dev); + return genpd->suspend_power_off || dev_gpd_data(dev)->always_on ? + 0 : genpd_start_dev(genpd, dev); } /** @@ -1263,8 +1186,8 @@ static int pm_genpd_restore_noirq(struct device *dev) if (genpd->suspended_count++ == 0) { /* * The boot kernel might put the domain into arbitrary state, - * so make it appear as powered off to pm_genpd_sync_poweron(), - * so that it tries to power it on in case it was really off. + * so make it appear as powered off to pm_genpd_poweron(), so + * that it tries to power it on in case it was really off. */ genpd->status = GPD_STATE_POWER_OFF; if (genpd->suspend_power_off) { @@ -1282,9 +1205,9 @@ static int pm_genpd_restore_noirq(struct device *dev) if (genpd->suspend_power_off) return 0; - pm_genpd_sync_poweron(genpd); + pm_genpd_poweron(genpd); - return genpd_start_dev(genpd, dev); + return dev_gpd_data(dev)->always_on ? 0 : genpd_start_dev(genpd, dev); } /** @@ -1323,31 +1246,6 @@ static void pm_genpd_complete(struct device *dev) } } -/** - * pm_genpd_syscore_switch - Switch power during system core suspend or resume. - * @dev: Device that normally is marked as "always on" to switch power for. - * - * This routine may only be called during the system core (syscore) suspend or - * resume phase for devices whose "always on" flags are set. 
- */ -void pm_genpd_syscore_switch(struct device *dev, bool suspend) -{ - struct generic_pm_domain *genpd; - - genpd = dev_to_genpd(dev); - if (!pm_genpd_present(genpd)) - return; - - if (suspend) { - genpd->suspended_count++; - pm_genpd_sync_poweroff(genpd); - } else { - pm_genpd_sync_poweron(genpd); - genpd->suspended_count--; - } -} -EXPORT_SYMBOL_GPL(pm_genpd_syscore_switch); - #else #define pm_genpd_prepare NULL @@ -1495,19 +1393,6 @@ int __pm_genpd_of_add_device(struct device_node *genpd_node, struct device *dev, return __pm_genpd_add_device(genpd, dev, td); } - -/** - * __pm_genpd_name_add_device - Find I/O PM domain and add a device to it. - * @domain_name: Name of the PM domain to add the device to. - * @dev: Device to be added. - * @td: Set of PM QoS timing parameters to attach to the device. - */ -int __pm_genpd_name_add_device(const char *domain_name, struct device *dev, - struct gpd_timing_data *td) -{ - return __pm_genpd_add_device(pm_genpd_lookup_name(domain_name), dev, td); -} - /** * pm_genpd_remove_device - Remove a device from an I/O PM domain. * @genpd: PM domain to remove the device from. @@ -1569,6 +1454,26 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd, return ret; } +/** + * pm_genpd_dev_always_on - Set/unset the "always on" flag for a given device. + * @dev: Device to set/unset the flag for. + * @val: The new value of the device's "always on" flag. + */ +void pm_genpd_dev_always_on(struct device *dev, bool val) +{ + struct pm_subsys_data *psd; + unsigned long flags; + + spin_lock_irqsave(&dev->power.lock, flags); + + psd = dev_to_psd(dev); + if (psd && psd->domain_data) + to_gpd_data(psd->domain_data)->always_on = val; + + spin_unlock_irqrestore(&dev->power.lock, flags); +} +EXPORT_SYMBOL_GPL(pm_genpd_dev_always_on); + /** * pm_genpd_dev_need_restore - Set/unset the device's "need restore" flag. * @dev: Device to set/unset the flag for. @@ -1600,8 +1505,7 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, struct gpd_link *link; int ret = 0; - if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain) - || genpd == subdomain) + if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain)) return -EINVAL; start: @@ -1647,35 +1551,6 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, return ret; } -/** - * pm_genpd_add_subdomain_names - Add a subdomain to an I/O PM domain. - * @master_name: Name of the master PM domain to add the subdomain to. - * @subdomain_name: Name of the subdomain to be added. - */ -int pm_genpd_add_subdomain_names(const char *master_name, - const char *subdomain_name) -{ - struct generic_pm_domain *master = NULL, *subdomain = NULL, *gpd; - - if (IS_ERR_OR_NULL(master_name) || IS_ERR_OR_NULL(subdomain_name)) - return -EINVAL; - - mutex_lock(&gpd_list_lock); - list_for_each_entry(gpd, &gpd_list, gpd_list_node) { - if (!master && !strcmp(gpd->name, master_name)) - master = gpd; - - if (!subdomain && !strcmp(gpd->name, subdomain_name)) - subdomain = gpd; - - if (master && subdomain) - break; - } - mutex_unlock(&gpd_list_lock); - - return pm_genpd_add_subdomain(master, subdomain); -} - /** * pm_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain. * @genpd: Master PM domain to remove the subdomain from. @@ -1829,16 +1704,7 @@ int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td) } EXPORT_SYMBOL_GPL(__pm_genpd_remove_callbacks); -/** - * pm_genpd_attach_cpuidle - Connect the given PM domain with cpuidle. - * @genpd: PM domain to be connected with cpuidle. 
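The pm_genpd_dev_always_on() helper added above is what the SH timer drivers later in this patch call from their probe routines. A minimal sketch of the intended usage, assuming a platform driver whose device sits inside a generic PM domain and assuming the matching prototype is exported via linux/pm_domain.h (the header hunk is not shown here; the "foo" names are placeholders, not code from this patch):

#include <linux/platform_device.h>
#include <linux/pm_domain.h>

static int __devinit foo_probe(struct platform_device *pdev)
{
	/*
	 * Mark the device "always on": pm_genpd_runtime_suspend() will
	 * return -EBUSY for it, and pm_genpd_poweroff() will count it
	 * as not suspended, so the containing domain is never powered
	 * off while the driver is bound.
	 */
	pm_genpd_dev_always_on(&pdev->dev, true);
	return 0;
}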
- * @state: cpuidle state this domain can disable/enable. - * - * Make a PM domain behave as though it contained a CPU core, that is, instead - * of calling its power down routine it will enable the given cpuidle state so - * that the cpuidle subsystem can power it down (if possible and desirable). - */ -int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state) +int genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state) { struct cpuidle_driver *cpuidle_drv; struct gpd_cpu_data *cpu_data; @@ -1887,24 +1753,7 @@ int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state) goto out; } -/** - * pm_genpd_name_attach_cpuidle - Find PM domain and connect cpuidle to it. - * @name: Name of the domain to connect to cpuidle. - * @state: cpuidle state this domain can manipulate. - */ -int pm_genpd_name_attach_cpuidle(const char *name, int state) -{ - return pm_genpd_attach_cpuidle(pm_genpd_lookup_name(name), state); -} - -/** - * pm_genpd_detach_cpuidle - Remove the cpuidle connection from a PM domain. - * @genpd: PM domain to remove the cpuidle connection from. - * - * Remove the cpuidle connection set up by pm_genpd_attach_cpuidle() from the - * given PM domain. - */ -int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd) +int genpd_detach_cpuidle(struct generic_pm_domain *genpd) { struct gpd_cpu_data *cpu_data; struct cpuidle_state *idle_state; @@ -1935,15 +1784,6 @@ int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd) return ret; } -/** - * pm_genpd_name_detach_cpuidle - Find PM domain and disconnect cpuidle from it. - * @name: Name of the domain to disconnect cpuidle from. - */ -int pm_genpd_name_detach_cpuidle(const char *name) -{ - return pm_genpd_detach_cpuidle(pm_genpd_lookup_name(name)); -} - /* Default device callbacks for generic PM domains. */ /** diff --git a/trunk/drivers/base/power/main.c b/trunk/drivers/base/power/main.c index 57f5814c2732..0113adc310dc 100644 --- a/trunk/drivers/base/power/main.c +++ b/trunk/drivers/base/power/main.c @@ -57,17 +57,20 @@ static pm_message_t pm_transition; static int async_error; /** - * device_pm_sleep_init - Initialize system suspend-related device fields. + * device_pm_init - Initialize the PM-related part of a device object. * @dev: Device object being initialized. 
*/ -void device_pm_sleep_init(struct device *dev) +void device_pm_init(struct device *dev) { dev->power.is_prepared = false; dev->power.is_suspended = false; init_completion(&dev->power.completion); complete_all(&dev->power.completion); dev->power.wakeup = NULL; + spin_lock_init(&dev->power.lock); + pm_runtime_init(dev); INIT_LIST_HEAD(&dev->power.entry); + dev->power.power_state = PMSG_INVALID; } /** @@ -405,9 +408,6 @@ static int device_resume_noirq(struct device *dev, pm_message_t state) TRACE_DEVICE(dev); TRACE_RESUME(0); - if (dev->power.syscore) - goto Out; - if (dev->pm_domain) { info = "noirq power domain "; callback = pm_noirq_op(&dev->pm_domain->ops, state); @@ -429,7 +429,6 @@ static int device_resume_noirq(struct device *dev, pm_message_t state) error = dpm_run_callback(callback, dev, state, info); - Out: TRACE_RESUME(error); return error; } @@ -487,9 +486,6 @@ static int device_resume_early(struct device *dev, pm_message_t state) TRACE_DEVICE(dev); TRACE_RESUME(0); - if (dev->power.syscore) - goto Out; - if (dev->pm_domain) { info = "early power domain "; callback = pm_late_early_op(&dev->pm_domain->ops, state); @@ -511,7 +507,6 @@ static int device_resume_early(struct device *dev, pm_message_t state) error = dpm_run_callback(callback, dev, state, info); - Out: TRACE_RESUME(error); return error; } @@ -575,9 +570,6 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) TRACE_DEVICE(dev); TRACE_RESUME(0); - if (dev->power.syscore) - goto Complete; - dpm_wait(dev->parent, async); device_lock(dev); @@ -640,8 +632,6 @@ static int device_resume(struct device *dev, pm_message_t state, bool async) Unlock: device_unlock(dev); - - Complete: complete_all(&dev->power.completion); TRACE_RESUME(error); @@ -732,9 +722,6 @@ static void device_complete(struct device *dev, pm_message_t state) void (*callback)(struct device *) = NULL; char *info = NULL; - if (dev->power.syscore) - return; - device_lock(dev); if (dev->pm_domain) { @@ -847,9 +834,6 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state) pm_callback_t callback = NULL; char *info = NULL; - if (dev->power.syscore) - return 0; - if (dev->pm_domain) { info = "noirq power domain "; callback = pm_noirq_op(&dev->pm_domain->ops, state); @@ -933,9 +917,6 @@ static int device_suspend_late(struct device *dev, pm_message_t state) pm_callback_t callback = NULL; char *info = NULL; - if (dev->power.syscore) - return 0; - if (dev->pm_domain) { info = "late power domain "; callback = pm_late_early_op(&dev->pm_domain->ops, state); @@ -1072,9 +1053,6 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async) goto Complete; } - if (dev->power.syscore) - goto Complete; - device_lock(dev); if (dev->pm_domain) { @@ -1231,9 +1209,6 @@ static int device_prepare(struct device *dev, pm_message_t state) char *info = NULL; int error = 0; - if (dev->power.syscore) - return 0; - device_lock(dev); dev->power.wakeup_path = device_may_wakeup(dev); diff --git a/trunk/drivers/base/power/opp.c b/trunk/drivers/base/power/opp.c index d9468642fc41..ac993eafec82 100644 --- a/trunk/drivers/base/power/opp.c +++ b/trunk/drivers/base/power/opp.c @@ -22,7 +22,6 @@ #include #include #include -#include /* * Internal data structure organization with the OPP layer library is as @@ -675,49 +674,3 @@ struct srcu_notifier_head *opp_get_notifier(struct device *dev) return &dev_opp->head; } - -#ifdef CONFIG_OF -/** - * of_init_opp_table() - Initialize opp table from device tree - * @dev: device pointer used to lookup 
device OPPs. - * - * Register the initial OPP table with the OPP library for given device. - */ -int of_init_opp_table(struct device *dev) -{ - const struct property *prop; - const __be32 *val; - int nr; - - prop = of_find_property(dev->of_node, "operating-points", NULL); - if (!prop) - return -ENODEV; - if (!prop->value) - return -ENODATA; - - /* - * Each OPP is a set of tuples consisting of frequency and - * voltage like <freq-kHz vol-uV>. - */ - nr = prop->length / sizeof(u32); - if (nr % 2) { - dev_err(dev, "%s: Invalid OPP list\n", __func__); - return -EINVAL; - } - - val = prop->value; - while (nr) { - unsigned long freq = be32_to_cpup(val++) * 1000; - unsigned long volt = be32_to_cpup(val++); - - if (opp_add(dev, freq, volt)) { - dev_warn(dev, "%s: Failed to add OPP %ld\n", - __func__, freq); - continue; - } - nr -= 2; - } - - return 0; -} -#endif diff --git a/trunk/drivers/base/power/power.h b/trunk/drivers/base/power/power.h index 0dbfdf4419af..eeb4bff9505c 100644 --- a/trunk/drivers/base/power/power.h +++ b/trunk/drivers/base/power/power.h @@ -1,32 +1,12 @@ #include <linux/pm_qos.h> -static inline void device_pm_init_common(struct device *dev) -{ - if (!dev->power.early_init) { - spin_lock_init(&dev->power.lock); - dev->power.power_state = PMSG_INVALID; - dev->power.early_init = true; - } -} - #ifdef CONFIG_PM_RUNTIME -static inline void pm_runtime_early_init(struct device *dev) -{ - dev->power.disable_depth = 1; - device_pm_init_common(dev); -} - extern void pm_runtime_init(struct device *dev); extern void pm_runtime_remove(struct device *dev); #else /* !CONFIG_PM_RUNTIME */ -static inline void pm_runtime_early_init(struct device *dev) -{ - device_pm_init_common(dev); -} - static inline void pm_runtime_init(struct device *dev) {} static inline void pm_runtime_remove(struct device *dev) {} @@ -45,7 +25,7 @@ static inline struct device *to_device(struct list_head *entry) return container_of(entry, struct device, power.entry); } -extern void device_pm_sleep_init(struct device *dev); +extern void device_pm_init(struct device *dev); extern void device_pm_add(struct device *); extern void device_pm_remove(struct device *); extern void device_pm_move_before(struct device *, struct device *); @@ -54,7 +34,12 @@ extern void device_pm_move_last(struct device *); #else /* !CONFIG_PM_SLEEP */ -static inline void device_pm_sleep_init(struct device *dev) {} +static inline void device_pm_init(struct device *dev) +{ + spin_lock_init(&dev->power.lock); + dev->power.power_state = PMSG_INVALID; + pm_runtime_init(dev); +} static inline void device_pm_add(struct device *dev) { @@ -75,13 +60,6 @@ static inline void device_pm_move_last(struct device *dev) {} #endif /* !CONFIG_PM_SLEEP */ -static inline void device_pm_init(struct device *dev) -{ - device_pm_init_common(dev); - device_pm_sleep_init(dev); - pm_runtime_init(dev); -} - #ifdef CONFIG_PM /* diff --git a/trunk/drivers/base/power/qos.c b/trunk/drivers/base/power/qos.c index 968a77145e81..74a67e0019a2 100644 --- a/trunk/drivers/base/power/qos.c +++ b/trunk/drivers/base/power/qos.c @@ -24,32 +24,26 @@ * . a system-wide notification callback using the dev_pm_qos_*_global_notifier * API. The notification chain data is stored in a static variable. * - * Notes about the per-device constraint data struct allocation: - * . The per-device constraints data struct ptr is stored into the device + * Note about the per-device constraint data struct allocation: + * . The per-device constraints data struct ptr is stored into the device * dev_pm_info. * .
To minimize the data usage by the per-device constraints, the data struct - * is only allocated at the first call to dev_pm_qos_add_request. + * is only allocated at the first call to dev_pm_qos_add_request. * . The data is later free'd when the device is removed from the system. - * - * Notes about locking: - * . The dev->power.lock lock protects the constraints list - * (dev->power.constraints) allocation and free, as triggered by the - * driver core code at device insertion and removal, - * . A global lock dev_pm_qos_lock protects the constraints list entries - * from any modification and the notifiers registration and unregistration. - * . For both locks a spinlock is needed since this code can be called from - * interrupt context or spinlock protected context. + * . A global mutex protects the constraints users from the data being + * allocated and free'd. */ #include #include #include #include +#include #include #include "power.h" -static DEFINE_SPINLOCK(dev_pm_qos_lock); +static DEFINE_MUTEX(dev_pm_qos_mtx); static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers); @@ -116,19 +110,18 @@ static int apply_constraint(struct dev_pm_qos_request *req, * @dev: device to allocate data for * * Called at the first call to add_request, for constraint data allocation - * Must be called with the dev_pm_qos_lock lock held + * Must be called with the dev_pm_qos_mtx mutex held */ static int dev_pm_qos_constraints_allocate(struct device *dev) { struct pm_qos_constraints *c; struct blocking_notifier_head *n; - unsigned long flags; - c = kzalloc(sizeof(*c), GFP_ATOMIC); + c = kzalloc(sizeof(*c), GFP_KERNEL); if (!c) return -ENOMEM; - n = kzalloc(sizeof(*n), GFP_ATOMIC); + n = kzalloc(sizeof(*n), GFP_KERNEL); if (!n) { kfree(c); return -ENOMEM; @@ -141,9 +134,9 @@ static int dev_pm_qos_constraints_allocate(struct device *dev) c->type = PM_QOS_MIN; c->notifiers = n; - spin_lock_irqsave(&dev->power.lock, flags); + spin_lock_irq(&dev->power.lock); dev->power.constraints = c; - spin_unlock_irqrestore(&dev->power.lock, flags); + spin_unlock_irq(&dev->power.lock); return 0; } @@ -157,12 +150,10 @@ static int dev_pm_qos_constraints_allocate(struct device *dev) */ void dev_pm_qos_constraints_init(struct device *dev) { - unsigned long flags; - - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); dev->power.constraints = NULL; dev->power.power_state = PMSG_ON; - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); } /** @@ -175,7 +166,6 @@ void dev_pm_qos_constraints_destroy(struct device *dev) { struct dev_pm_qos_request *req, *tmp; struct pm_qos_constraints *c; - unsigned long flags; /* * If the device's PM QoS resume latency limit has been exposed to user @@ -183,7 +173,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev) */ dev_pm_qos_hide_latency_limit(dev); - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); dev->power.power_state = PMSG_INVALID; c = dev->power.constraints; @@ -208,7 +198,7 @@ void dev_pm_qos_constraints_destroy(struct device *dev) kfree(c); out: - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); } /** @@ -233,7 +223,6 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, s32 value) { int ret = 0; - unsigned long flags; if (!dev || !req) /*guard against callers passing in null */ return -EINVAL; @@ -244,7 +233,7 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, req->dev = dev; - spin_lock_irqsave(&dev_pm_qos_lock, flags); 
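The spinlock acquisition above gives way to dev_pm_qos_mtx below, and the same substitution repeats through the rest of the file: every dev_pm_qos entry point may now sleep, so requests can only be added, updated, or removed from process context. A minimal caller sketch using only the request API whose signatures appear in these hunks (the "foo" names are placeholders, not kernel symbols):

#include <linux/pm_qos.h>

static struct dev_pm_qos_request foo_req;

/* Process context only: these calls now take a mutex and may sleep. */
static int foo_add_constraint(struct device *dev)
{
	return dev_pm_qos_add_request(dev, &foo_req, 100 /* hypothetical value */);
}

static void foo_drop_constraint(void)
{
	dev_pm_qos_remove_request(&foo_req);
}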
+ mutex_lock(&dev_pm_qos_mtx); if (!dev->power.constraints) { if (dev->power.power_state.event == PM_EVENT_INVALID) { @@ -266,7 +255,7 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, ret = apply_constraint(req, PM_QOS_ADD_REQ, value); out: - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); return ret; } @@ -291,7 +280,6 @@ int dev_pm_qos_update_request(struct dev_pm_qos_request *req, s32 new_value) { int ret = 0; - unsigned long flags; if (!req) /*guard against callers passing in null */ return -EINVAL; @@ -300,7 +288,7 @@ int dev_pm_qos_update_request(struct dev_pm_qos_request *req, "%s() called for unknown object\n", __func__)) return -EINVAL; - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); if (req->dev->power.constraints) { if (new_value != req->node.prio) @@ -311,7 +299,7 @@ int dev_pm_qos_update_request(struct dev_pm_qos_request *req, ret = -ENODEV; } - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); return ret; } EXPORT_SYMBOL_GPL(dev_pm_qos_update_request); @@ -331,7 +319,6 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_update_request); int dev_pm_qos_remove_request(struct dev_pm_qos_request *req) { int ret = 0; - unsigned long flags; if (!req) /*guard against callers passing in null */ return -EINVAL; @@ -340,7 +327,7 @@ int dev_pm_qos_remove_request(struct dev_pm_qos_request *req) "%s() called for unknown object\n", __func__)) return -EINVAL; - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); if (req->dev->power.constraints) { ret = apply_constraint(req, PM_QOS_REMOVE_REQ, @@ -351,7 +338,7 @@ int dev_pm_qos_remove_request(struct dev_pm_qos_request *req) ret = -ENODEV; } - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); return ret; } EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request); @@ -372,9 +359,8 @@ EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request); int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier) { int ret = 0; - unsigned long flags; - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); if (!dev->power.constraints) ret = dev->power.power_state.event != PM_EVENT_INVALID ? @@ -384,7 +370,7 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier) ret = blocking_notifier_chain_register( dev->power.constraints->notifiers, notifier); - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); return ret; } EXPORT_SYMBOL_GPL(dev_pm_qos_add_notifier); @@ -403,9 +389,8 @@ int dev_pm_qos_remove_notifier(struct device *dev, struct notifier_block *notifier) { int retval = 0; - unsigned long flags; - spin_lock_irqsave(&dev_pm_qos_lock, flags); + mutex_lock(&dev_pm_qos_mtx); /* Silently return if the constraints object is not present. 
*/ if (dev->power.constraints) @@ -413,7 +398,7 @@ int dev_pm_qos_remove_notifier(struct device *dev, dev->power.constraints->notifiers, notifier); - spin_unlock_irqrestore(&dev_pm_qos_lock, flags); + mutex_unlock(&dev_pm_qos_mtx); return retval; } EXPORT_SYMBOL_GPL(dev_pm_qos_remove_notifier); diff --git a/trunk/drivers/base/power/runtime.c b/trunk/drivers/base/power/runtime.c index 7d9c1cb1c39a..3148b10dc2e5 100644 --- a/trunk/drivers/base/power/runtime.c +++ b/trunk/drivers/base/power/runtime.c @@ -509,6 +509,9 @@ static int rpm_resume(struct device *dev, int rpmflags) repeat: if (dev->power.runtime_error) retval = -EINVAL; + else if (dev->power.disable_depth == 1 && dev->power.is_suspended + && dev->power.runtime_status == RPM_ACTIVE) + retval = 1; else if (dev->power.disable_depth > 0) retval = -EACCES; if (retval) diff --git a/trunk/drivers/base/power/wakeup.c b/trunk/drivers/base/power/wakeup.c index e6ee5e80e546..cbb463b3a750 100644 --- a/trunk/drivers/base/power/wakeup.c +++ b/trunk/drivers/base/power/wakeup.c @@ -127,8 +127,6 @@ EXPORT_SYMBOL_GPL(wakeup_source_destroy); */ void wakeup_source_add(struct wakeup_source *ws) { - unsigned long flags; - if (WARN_ON(!ws)) return; @@ -137,9 +135,9 @@ void wakeup_source_add(struct wakeup_source *ws) ws->active = false; ws->last_time = ktime_get(); - spin_lock_irqsave(&events_lock, flags); + spin_lock_irq(&events_lock); list_add_rcu(&ws->entry, &wakeup_sources); - spin_unlock_irqrestore(&events_lock, flags); + spin_unlock_irq(&events_lock); } EXPORT_SYMBOL_GPL(wakeup_source_add); @@ -149,14 +147,12 @@ EXPORT_SYMBOL_GPL(wakeup_source_add); */ void wakeup_source_remove(struct wakeup_source *ws) { - unsigned long flags; - if (WARN_ON(!ws)) return; - spin_lock_irqsave(&events_lock, flags); + spin_lock_irq(&events_lock); list_del_rcu(&ws->entry); - spin_unlock_irqrestore(&events_lock, flags); + spin_unlock_irq(&events_lock); synchronize_rcu(); } EXPORT_SYMBOL_GPL(wakeup_source_remove); @@ -653,31 +649,6 @@ void pm_wakeup_event(struct device *dev, unsigned int msec) } EXPORT_SYMBOL_GPL(pm_wakeup_event); -static void print_active_wakeup_sources(void) -{ - struct wakeup_source *ws; - int active = 0; - struct wakeup_source *last_activity_ws = NULL; - - rcu_read_lock(); - list_for_each_entry_rcu(ws, &wakeup_sources, entry) { - if (ws->active) { - pr_info("active wakeup source: %s\n", ws->name); - active = 1; - } else if (!active && - (!last_activity_ws || - ktime_to_ns(ws->last_time) > - ktime_to_ns(last_activity_ws->last_time))) { - last_activity_ws = ws; - } - } - - if (!active && last_activity_ws) - pr_info("last active wakeup source: %s\n", - last_activity_ws->name); - rcu_read_unlock(); -} - /** * pm_wakeup_pending - Check if power transition in progress should be aborted. 
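Before the wakeup changes, the rpm_resume() hunk above deserves a note: with runtime PM disabled exactly once (the disable the PM core applies during system suspend), a device that is flagged is_suspended but whose status is still RPM_ACTIVE now gets a successful return of 1 instead of -EACCES. A runnable stand-alone sketch of just that decision, with plain ints standing in for the dev->power fields:

#include <stdio.h>

enum rpm_status { RPM_ACTIVE, RPM_SUSPENDED };

/* Mirrors the new early checks at the top of rpm_resume(). */
static int rpm_resume_entry(int runtime_error, int disable_depth,
			    int is_suspended, enum rpm_status status)
{
	if (runtime_error)
		return -22;	/* -EINVAL */
	if (disable_depth == 1 && is_suspended && status == RPM_ACTIVE)
		return 1;	/* already active during system suspend */
	if (disable_depth > 0)
		return -13;	/* -EACCES */
	return 0;		/* fall through to the normal resume path */
}

int main(void)
{
	/* Device left active across system suspend: reports success (1). */
	printf("%d\n", rpm_resume_entry(0, 1, 1, RPM_ACTIVE));
	return 0;
}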
* @@ -700,10 +671,6 @@ bool pm_wakeup_pending(void) events_check_enabled = !ret; } spin_unlock_irqrestore(&events_lock, flags); - - if (ret) - print_active_wakeup_sources(); - return ret; } @@ -756,16 +723,15 @@ bool pm_get_wakeup_count(unsigned int *count, bool block) bool pm_save_wakeup_count(unsigned int count) { unsigned int cnt, inpr; - unsigned long flags; events_check_enabled = false; - spin_lock_irqsave(&events_lock, flags); + spin_lock_irq(&events_lock); split_counters(&cnt, &inpr); if (cnt == count && inpr == 0) { saved_count = count; events_check_enabled = true; } - spin_unlock_irqrestore(&events_lock, flags); + spin_unlock_irq(&events_lock); return events_check_enabled; } diff --git a/trunk/drivers/clocksource/sh_cmt.c b/trunk/drivers/clocksource/sh_cmt.c index a5f7829f2799..98b06baafcc6 100644 --- a/trunk/drivers/clocksource/sh_cmt.c +++ b/trunk/drivers/clocksource/sh_cmt.c @@ -33,7 +33,6 @@ #include #include #include -#include struct sh_cmt_priv { void __iomem *mapbase; @@ -53,7 +52,6 @@ struct sh_cmt_priv { struct clock_event_device ced; struct clocksource cs; unsigned long total_cycles; - bool cs_enabled; }; static DEFINE_RAW_SPINLOCK(sh_cmt_lock); @@ -157,9 +155,6 @@ static int sh_cmt_enable(struct sh_cmt_priv *p, unsigned long *rate) { int k, ret; - pm_runtime_get_sync(&p->pdev->dev); - dev_pm_syscore_device(&p->pdev->dev, true); - /* enable clock */ ret = clk_enable(p->clk); if (ret) { @@ -226,9 +221,6 @@ static void sh_cmt_disable(struct sh_cmt_priv *p) /* stop clock */ clk_disable(p->clk); - - dev_pm_syscore_device(&p->pdev->dev, false); - pm_runtime_put(&p->pdev->dev); } /* private flags */ @@ -459,42 +451,22 @@ static int sh_cmt_clocksource_enable(struct clocksource *cs) int ret; struct sh_cmt_priv *p = cs_to_sh_cmt(cs); - WARN_ON(p->cs_enabled); - p->total_cycles = 0; ret = sh_cmt_start(p, FLAG_CLOCKSOURCE); - if (!ret) { + if (!ret) __clocksource_updatefreq_hz(cs, p->rate); - p->cs_enabled = true; - } return ret; } static void sh_cmt_clocksource_disable(struct clocksource *cs) { - struct sh_cmt_priv *p = cs_to_sh_cmt(cs); - - WARN_ON(!p->cs_enabled); - - sh_cmt_stop(p, FLAG_CLOCKSOURCE); - p->cs_enabled = false; -} - -static void sh_cmt_clocksource_suspend(struct clocksource *cs) -{ - struct sh_cmt_priv *p = cs_to_sh_cmt(cs); - - sh_cmt_stop(p, FLAG_CLOCKSOURCE); - pm_genpd_syscore_poweroff(&p->pdev->dev); + sh_cmt_stop(cs_to_sh_cmt(cs), FLAG_CLOCKSOURCE); } static void sh_cmt_clocksource_resume(struct clocksource *cs) { - struct sh_cmt_priv *p = cs_to_sh_cmt(cs); - - pm_genpd_syscore_poweron(&p->pdev->dev); - sh_cmt_start(p, FLAG_CLOCKSOURCE); + sh_cmt_start(cs_to_sh_cmt(cs), FLAG_CLOCKSOURCE); } static int sh_cmt_register_clocksource(struct sh_cmt_priv *p, @@ -507,7 +479,7 @@ static int sh_cmt_register_clocksource(struct sh_cmt_priv *p, cs->read = sh_cmt_clocksource_read; cs->enable = sh_cmt_clocksource_enable; cs->disable = sh_cmt_clocksource_disable; - cs->suspend = sh_cmt_clocksource_suspend; + cs->suspend = sh_cmt_clocksource_disable; cs->resume = sh_cmt_clocksource_resume; cs->mask = CLOCKSOURCE_MASK(sizeof(unsigned long) * 8); cs->flags = CLOCK_SOURCE_IS_CONTINUOUS; @@ -590,16 +562,6 @@ static int sh_cmt_clock_event_next(unsigned long delta, return 0; } -static void sh_cmt_clock_event_suspend(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweroff(&ced_to_sh_cmt(ced)->pdev->dev); -} - -static void sh_cmt_clock_event_resume(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweron(&ced_to_sh_cmt(ced)->pdev->dev); -} - static void 
sh_cmt_register_clockevent(struct sh_cmt_priv *p, char *name, unsigned long rating) { @@ -614,8 +576,6 @@ static void sh_cmt_register_clockevent(struct sh_cmt_priv *p, ced->cpumask = cpumask_of(0); ced->set_next_event = sh_cmt_clock_event_next; ced->set_mode = sh_cmt_clock_event_mode; - ced->suspend = sh_cmt_clock_event_suspend; - ced->resume = sh_cmt_clock_event_resume; dev_info(&p->pdev->dev, "used for clock events\n"); clockevents_register_device(ced); @@ -710,7 +670,6 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev) dev_err(&p->pdev->dev, "registration failed\n"); goto err1; } - p->cs_enabled = false; ret = setup_irq(irq, &p->irqaction); if (ret) { @@ -729,17 +688,14 @@ static int sh_cmt_setup(struct sh_cmt_priv *p, struct platform_device *pdev) static int __devinit sh_cmt_probe(struct platform_device *pdev) { struct sh_cmt_priv *p = platform_get_drvdata(pdev); - struct sh_timer_config *cfg = pdev->dev.platform_data; int ret; - if (!is_early_platform_device(pdev)) { - pm_runtime_set_active(&pdev->dev); - pm_runtime_enable(&pdev->dev); - } + if (!is_early_platform_device(pdev)) + pm_genpd_dev_always_on(&pdev->dev, true); if (p) { dev_info(&pdev->dev, "kept as earlytimer\n"); - goto out; + return 0; } p = kmalloc(sizeof(*p), GFP_KERNEL); @@ -752,19 +708,8 @@ static int __devinit sh_cmt_probe(struct platform_device *pdev) if (ret) { kfree(p); platform_set_drvdata(pdev, NULL); - pm_runtime_idle(&pdev->dev); - return ret; } - if (is_early_platform_device(pdev)) - return 0; - - out: - if (cfg->clockevent_rating || cfg->clocksource_rating) - pm_runtime_irq_safe(&pdev->dev); - else - pm_runtime_idle(&pdev->dev); - - return 0; + return ret; } static int __devexit sh_cmt_remove(struct platform_device *pdev) diff --git a/trunk/drivers/clocksource/sh_mtu2.c b/trunk/drivers/clocksource/sh_mtu2.c index c5eea858054a..d9b76ca64a61 100644 --- a/trunk/drivers/clocksource/sh_mtu2.c +++ b/trunk/drivers/clocksource/sh_mtu2.c @@ -32,7 +32,6 @@ #include #include #include -#include struct sh_mtu2_priv { void __iomem *mapbase; @@ -124,9 +123,6 @@ static int sh_mtu2_enable(struct sh_mtu2_priv *p) { int ret; - pm_runtime_get_sync(&p->pdev->dev); - dev_pm_syscore_device(&p->pdev->dev, true); - /* enable clock */ ret = clk_enable(p->clk); if (ret) { @@ -161,9 +157,6 @@ static void sh_mtu2_disable(struct sh_mtu2_priv *p) /* stop clock */ clk_disable(p->clk); - - dev_pm_syscore_device(&p->pdev->dev, false); - pm_runtime_put(&p->pdev->dev); } static irqreturn_t sh_mtu2_interrupt(int irq, void *dev_id) @@ -215,16 +208,6 @@ static void sh_mtu2_clock_event_mode(enum clock_event_mode mode, } } -static void sh_mtu2_clock_event_suspend(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweroff(&ced_to_sh_mtu2(ced)->pdev->dev); -} - -static void sh_mtu2_clock_event_resume(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweron(&ced_to_sh_mtu2(ced)->pdev->dev); -} - static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p, char *name, unsigned long rating) { @@ -238,8 +221,6 @@ static void sh_mtu2_register_clockevent(struct sh_mtu2_priv *p, ced->rating = rating; ced->cpumask = cpumask_of(0); ced->set_mode = sh_mtu2_clock_event_mode; - ced->suspend = sh_mtu2_clock_event_suspend; - ced->resume = sh_mtu2_clock_event_resume; dev_info(&p->pdev->dev, "used for clock events\n"); clockevents_register_device(ced); @@ -324,17 +305,14 @@ static int sh_mtu2_setup(struct sh_mtu2_priv *p, struct platform_device *pdev) static int __devinit sh_mtu2_probe(struct platform_device *pdev) { struct 
sh_mtu2_priv *p = platform_get_drvdata(pdev); - struct sh_timer_config *cfg = pdev->dev.platform_data; int ret; - if (!is_early_platform_device(pdev)) { - pm_runtime_set_active(&pdev->dev); - pm_runtime_enable(&pdev->dev); - } + if (!is_early_platform_device(pdev)) + pm_genpd_dev_always_on(&pdev->dev, true); if (p) { dev_info(&pdev->dev, "kept as earlytimer\n"); - goto out; + return 0; } p = kmalloc(sizeof(*p), GFP_KERNEL); @@ -347,19 +325,8 @@ static int __devinit sh_mtu2_probe(struct platform_device *pdev) if (ret) { kfree(p); platform_set_drvdata(pdev, NULL); - pm_runtime_idle(&pdev->dev); - return ret; } - if (is_early_platform_device(pdev)) - return 0; - - out: - if (cfg->clockevent_rating) - pm_runtime_irq_safe(&pdev->dev); - else - pm_runtime_idle(&pdev->dev); - - return 0; + return ret; } static int __devexit sh_mtu2_remove(struct platform_device *pdev) diff --git a/trunk/drivers/clocksource/sh_tmu.c b/trunk/drivers/clocksource/sh_tmu.c index 0cc4add88279..c1b51d49d106 100644 --- a/trunk/drivers/clocksource/sh_tmu.c +++ b/trunk/drivers/clocksource/sh_tmu.c @@ -33,7 +33,6 @@ #include #include #include -#include struct sh_tmu_priv { void __iomem *mapbase; @@ -44,8 +43,6 @@ struct sh_tmu_priv { unsigned long periodic; struct clock_event_device ced; struct clocksource cs; - bool cs_enabled; - unsigned int enable_count; }; static DEFINE_RAW_SPINLOCK(sh_tmu_lock); @@ -110,7 +107,7 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_priv *p, int start) raw_spin_unlock_irqrestore(&sh_tmu_lock, flags); } -static int __sh_tmu_enable(struct sh_tmu_priv *p) +static int sh_tmu_enable(struct sh_tmu_priv *p) { int ret; @@ -138,18 +135,7 @@ static int __sh_tmu_enable(struct sh_tmu_priv *p) return 0; } -static int sh_tmu_enable(struct sh_tmu_priv *p) -{ - if (p->enable_count++ > 0) - return 0; - - pm_runtime_get_sync(&p->pdev->dev); - dev_pm_syscore_device(&p->pdev->dev, true); - - return __sh_tmu_enable(p); -} - -static void __sh_tmu_disable(struct sh_tmu_priv *p) +static void sh_tmu_disable(struct sh_tmu_priv *p) { /* disable channel */ sh_tmu_start_stop_ch(p, 0); @@ -161,20 +147,6 @@ static void __sh_tmu_disable(struct sh_tmu_priv *p) clk_disable(p->clk); } -static void sh_tmu_disable(struct sh_tmu_priv *p) -{ - if (WARN_ON(p->enable_count == 0)) - return; - - if (--p->enable_count > 0) - return; - - __sh_tmu_disable(p); - - dev_pm_syscore_device(&p->pdev->dev, false); - pm_runtime_put(&p->pdev->dev); -} - static void sh_tmu_set_next(struct sh_tmu_priv *p, unsigned long delta, int periodic) { @@ -231,53 +203,15 @@ static int sh_tmu_clocksource_enable(struct clocksource *cs) struct sh_tmu_priv *p = cs_to_sh_tmu(cs); int ret; - if (WARN_ON(p->cs_enabled)) - return 0; - ret = sh_tmu_enable(p); - if (!ret) { + if (!ret) __clocksource_updatefreq_hz(cs, p->rate); - p->cs_enabled = true; - } - return ret; } static void sh_tmu_clocksource_disable(struct clocksource *cs) { - struct sh_tmu_priv *p = cs_to_sh_tmu(cs); - - if (WARN_ON(!p->cs_enabled)) - return; - - sh_tmu_disable(p); - p->cs_enabled = false; -} - -static void sh_tmu_clocksource_suspend(struct clocksource *cs) -{ - struct sh_tmu_priv *p = cs_to_sh_tmu(cs); - - if (!p->cs_enabled) - return; - - if (--p->enable_count == 0) { - __sh_tmu_disable(p); - pm_genpd_syscore_poweroff(&p->pdev->dev); - } -} - -static void sh_tmu_clocksource_resume(struct clocksource *cs) -{ - struct sh_tmu_priv *p = cs_to_sh_tmu(cs); - - if (!p->cs_enabled) - return; - - if (p->enable_count++ == 0) { - pm_genpd_syscore_poweron(&p->pdev->dev); - __sh_tmu_enable(p); - } + 
sh_tmu_disable(cs_to_sh_tmu(cs)); } static int sh_tmu_register_clocksource(struct sh_tmu_priv *p, @@ -290,8 +224,6 @@ static int sh_tmu_register_clocksource(struct sh_tmu_priv *p, cs->read = sh_tmu_clocksource_read; cs->enable = sh_tmu_clocksource_enable; cs->disable = sh_tmu_clocksource_disable; - cs->suspend = sh_tmu_clocksource_suspend; - cs->resume = sh_tmu_clocksource_resume; cs->mask = CLOCKSOURCE_MASK(32); cs->flags = CLOCK_SOURCE_IS_CONTINUOUS; @@ -369,16 +301,6 @@ static int sh_tmu_clock_event_next(unsigned long delta, return 0; } -static void sh_tmu_clock_event_suspend(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweroff(&ced_to_sh_tmu(ced)->pdev->dev); -} - -static void sh_tmu_clock_event_resume(struct clock_event_device *ced) -{ - pm_genpd_syscore_poweron(&ced_to_sh_tmu(ced)->pdev->dev); -} - static void sh_tmu_register_clockevent(struct sh_tmu_priv *p, char *name, unsigned long rating) { @@ -394,8 +316,6 @@ static void sh_tmu_register_clockevent(struct sh_tmu_priv *p, ced->cpumask = cpumask_of(0); ced->set_next_event = sh_tmu_clock_event_next; ced->set_mode = sh_tmu_clock_event_mode; - ced->suspend = sh_tmu_clock_event_suspend; - ced->resume = sh_tmu_clock_event_resume; dev_info(&p->pdev->dev, "used for clock events\n"); @@ -472,8 +392,6 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev) ret = PTR_ERR(p->clk); goto err1; } - p->cs_enabled = false; - p->enable_count = 0; return sh_tmu_register(p, (char *)dev_name(&p->pdev->dev), cfg->clockevent_rating, @@ -487,17 +405,14 @@ static int sh_tmu_setup(struct sh_tmu_priv *p, struct platform_device *pdev) static int __devinit sh_tmu_probe(struct platform_device *pdev) { struct sh_tmu_priv *p = platform_get_drvdata(pdev); - struct sh_timer_config *cfg = pdev->dev.platform_data; int ret; - if (!is_early_platform_device(pdev)) { - pm_runtime_set_active(&pdev->dev); - pm_runtime_enable(&pdev->dev); - } + if (!is_early_platform_device(pdev)) + pm_genpd_dev_always_on(&pdev->dev, true); if (p) { dev_info(&pdev->dev, "kept as earlytimer\n"); - goto out; + return 0; } p = kmalloc(sizeof(*p), GFP_KERNEL); @@ -510,19 +425,8 @@ static int __devinit sh_tmu_probe(struct platform_device *pdev) if (ret) { kfree(p); platform_set_drvdata(pdev, NULL); - pm_runtime_idle(&pdev->dev); - return ret; } - if (is_early_platform_device(pdev)) - return 0; - - out: - if (cfg->clockevent_rating || cfg->clocksource_rating) - pm_runtime_irq_safe(&pdev->dev); - else - pm_runtime_idle(&pdev->dev); - - return 0; + return ret; } static int __devexit sh_tmu_remove(struct platform_device *pdev) diff --git a/trunk/drivers/cpufreq/Kconfig b/trunk/drivers/cpufreq/Kconfig index ea512f47b789..e24a2a1b6666 100644 --- a/trunk/drivers/cpufreq/Kconfig +++ b/trunk/drivers/cpufreq/Kconfig @@ -179,17 +179,6 @@ config CPU_FREQ_GOV_CONSERVATIVE If in doubt, say N. -config GENERIC_CPUFREQ_CPU0 - bool "Generic CPU0 cpufreq driver" - depends on HAVE_CLK && REGULATOR && PM_OPP && OF - select CPU_FREQ_TABLE - help - This adds a generic cpufreq driver for CPU0 frequency management. - It supports both uniprocessor (UP) and symmetric multiprocessor (SMP) - systems which share clock and voltage across all CPUs. - - If in doubt, say N. 
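The GENERIC_CPUFREQ_CPU0 entry removed here belonged to the main consumer of the of_init_opp_table() helper deleted from opp.c earlier in the patch; that helper parsed the flat "operating-points" property as <kHz uV> pairs. A runnable userspace sketch of the same pairing rule, with hypothetical values rather than real device-tree data:

#include <stdio.h>

/* Hypothetical "operating-points" cells: <kHz uV> pairs. */
static const unsigned int cells[] = {
	792000, 1100000,
	396000,  950000,
};

int main(void)
{
	size_t nr = sizeof(cells) / sizeof(cells[0]);
	size_t i;

	if (nr % 2) {
		fprintf(stderr, "invalid OPP list\n"); /* mirrors the -EINVAL check */
		return 1;
	}
	for (i = 0; i < nr; i += 2)
		printf("%lu Hz at %u uV\n",
		       (unsigned long)cells[i] * 1000, cells[i + 1]);
	return 0;
}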
- menu "x86 CPU frequency scaling drivers" depends on X86 source "drivers/cpufreq/Kconfig.x86" diff --git a/trunk/drivers/cpufreq/Kconfig.x86 b/trunk/drivers/cpufreq/Kconfig.x86 index 934854ae5eb4..78ff7ee48951 100644 --- a/trunk/drivers/cpufreq/Kconfig.x86 +++ b/trunk/drivers/cpufreq/Kconfig.x86 @@ -23,8 +23,7 @@ config X86_ACPI_CPUFREQ help This driver adds a CPUFreq driver which utilizes the ACPI Processor Performance States. - This driver also supports Intel Enhanced Speedstep and newer - AMD CPUs. + This driver also supports Intel Enhanced Speedstep. To compile this driver as a module, choose M here: the module will be called acpi-cpufreq. @@ -33,18 +32,6 @@ config X86_ACPI_CPUFREQ If in doubt, say N. -config X86_ACPI_CPUFREQ_CPB - default y - bool "Legacy cpb sysfs knob support for AMD CPUs" - depends on X86_ACPI_CPUFREQ && CPU_SUP_AMD - help - The powernow-k8 driver used to provide a sysfs knob called "cpb" - to disable the Core Performance Boosting feature of AMD CPUs. This - file has now been superseeded by the more generic "boost" entry. - - By enabling this option the acpi_cpufreq driver provides the old - entry in addition to the new boost ones, for compatibility reasons. - config ELAN_CPUFREQ tristate "AMD Elan SC400 and SC410" select CPU_FREQ_TABLE @@ -108,8 +95,7 @@ config X86_POWERNOW_K8 select CPU_FREQ_TABLE depends on ACPI && ACPI_PROCESSOR help - This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors. - Support for K10 and newer processors is now in acpi-cpufreq. + This adds the CPUFreq driver for K8/K10 Opteron/Athlon64 processors. To compile this driver as a module, choose M here: the module will be called powernow-k8. diff --git a/trunk/drivers/cpufreq/Makefile b/trunk/drivers/cpufreq/Makefile index 1bc90e1306d8..9531fc2eda22 100644 --- a/trunk/drivers/cpufreq/Makefile +++ b/trunk/drivers/cpufreq/Makefile @@ -13,15 +13,13 @@ obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o # CPUfreq cross-arch helpers obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o -obj-$(CONFIG_GENERIC_CPUFREQ_CPU0) += cpufreq-cpu0.o - ################################################################################## # x86 drivers. # Link order matters. K8 is preferred to ACPI because of firmware bugs in early # K8 systems. ACPI is preferred to all other hardware-specific drivers. # speedstep-* is preferred over p4-clockmod. 
-obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o +obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o mperf.o obj-$(CONFIG_X86_ACPI_CPUFREQ) += acpi-cpufreq.o mperf.o obj-$(CONFIG_X86_PCC_CPUFREQ) += pcc-cpufreq.o obj-$(CONFIG_X86_POWERNOW_K6) += powernow-k6.o diff --git a/trunk/drivers/cpufreq/acpi-cpufreq.c b/trunk/drivers/cpufreq/acpi-cpufreq.c index 0d048f6a2b23..56c6c6b4eb4d 100644 --- a/trunk/drivers/cpufreq/acpi-cpufreq.c +++ b/trunk/drivers/cpufreq/acpi-cpufreq.c @@ -51,19 +51,13 @@ MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski"); MODULE_DESCRIPTION("ACPI Processor P-States Driver"); MODULE_LICENSE("GPL"); -#define PFX "acpi-cpufreq: " - enum { UNDEFINED_CAPABLE = 0, SYSTEM_INTEL_MSR_CAPABLE, - SYSTEM_AMD_MSR_CAPABLE, SYSTEM_IO_CAPABLE, }; #define INTEL_MSR_RANGE (0xffff) -#define AMD_MSR_RANGE (0x7) - -#define MSR_K7_HWCR_CPB_DIS (1ULL << 25) struct acpi_cpufreq_data { struct acpi_processor_performance *acpi_data; @@ -80,116 +74,6 @@ static struct acpi_processor_performance __percpu *acpi_perf_data; static struct cpufreq_driver acpi_cpufreq_driver; static unsigned int acpi_pstate_strict; -static bool boost_enabled, boost_supported; -static struct msr __percpu *msrs; - -static bool boost_state(unsigned int cpu) -{ - u32 lo, hi; - u64 msr; - - switch (boot_cpu_data.x86_vendor) { - case X86_VENDOR_INTEL: - rdmsr_on_cpu(cpu, MSR_IA32_MISC_ENABLE, &lo, &hi); - msr = lo | ((u64)hi << 32); - return !(msr & MSR_IA32_MISC_ENABLE_TURBO_DISABLE); - case X86_VENDOR_AMD: - rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi); - msr = lo | ((u64)hi << 32); - return !(msr & MSR_K7_HWCR_CPB_DIS); - } - return false; -} - -static void boost_set_msrs(bool enable, const struct cpumask *cpumask) -{ - u32 cpu; - u32 msr_addr; - u64 msr_mask; - - switch (boot_cpu_data.x86_vendor) { - case X86_VENDOR_INTEL: - msr_addr = MSR_IA32_MISC_ENABLE; - msr_mask = MSR_IA32_MISC_ENABLE_TURBO_DISABLE; - break; - case X86_VENDOR_AMD: - msr_addr = MSR_K7_HWCR; - msr_mask = MSR_K7_HWCR_CPB_DIS; - break; - default: - return; - } - - rdmsr_on_cpus(cpumask, msr_addr, msrs); - - for_each_cpu(cpu, cpumask) { - struct msr *reg = per_cpu_ptr(msrs, cpu); - if (enable) - reg->q &= ~msr_mask; - else - reg->q |= msr_mask; - } - - wrmsr_on_cpus(cpumask, msr_addr, msrs); -} - -static ssize_t _store_boost(const char *buf, size_t count) -{ - int ret; - unsigned long val = 0; - - if (!boost_supported) - return -EINVAL; - - ret = kstrtoul(buf, 10, &val); - if (ret || (val > 1)) - return -EINVAL; - - if ((val && boost_enabled) || (!val && !boost_enabled)) - return count; - - get_online_cpus(); - - boost_set_msrs(val, cpu_online_mask); - - put_online_cpus(); - - boost_enabled = val; - pr_debug("Core Boosting %sabled.\n", val ? 
"en" : "dis"); - - return count; -} - -static ssize_t store_global_boost(struct kobject *kobj, struct attribute *attr, - const char *buf, size_t count) -{ - return _store_boost(buf, count); -} - -static ssize_t show_global_boost(struct kobject *kobj, - struct attribute *attr, char *buf) -{ - return sprintf(buf, "%u\n", boost_enabled); -} - -static struct global_attr global_boost = __ATTR(boost, 0644, - show_global_boost, - store_global_boost); - -#ifdef CONFIG_X86_ACPI_CPUFREQ_CPB -static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf, - size_t count) -{ - return _store_boost(buf, count); -} - -static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf) -{ - return sprintf(buf, "%u\n", boost_enabled); -} - -static struct freq_attr cpb = __ATTR(cpb, 0644, show_cpb, store_cpb); -#endif static int check_est_cpu(unsigned int cpuid) { @@ -198,13 +82,6 @@ static int check_est_cpu(unsigned int cpuid) return cpu_has(cpu, X86_FEATURE_EST); } -static int check_amd_hwpstate_cpu(unsigned int cpuid) -{ - struct cpuinfo_x86 *cpu = &cpu_data(cpuid); - - return cpu_has(cpu, X86_FEATURE_HW_PSTATE); -} - static unsigned extract_io(u32 value, struct acpi_cpufreq_data *data) { struct acpi_processor_performance *perf; @@ -224,11 +101,7 @@ static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data) int i; struct acpi_processor_performance *perf; - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) - msr &= AMD_MSR_RANGE; - else - msr &= INTEL_MSR_RANGE; - + msr &= INTEL_MSR_RANGE; perf = data->acpi_data; for (i = 0; data->freq_table[i].frequency != CPUFREQ_TABLE_END; i++) { @@ -242,7 +115,6 @@ static unsigned extract_freq(u32 val, struct acpi_cpufreq_data *data) { switch (data->cpu_feature) { case SYSTEM_INTEL_MSR_CAPABLE: - case SYSTEM_AMD_MSR_CAPABLE: return extract_msr(val, data); case SYSTEM_IO_CAPABLE: return extract_io(val, data); @@ -278,7 +150,6 @@ static void do_drv_read(void *_cmd) switch (cmd->type) { case SYSTEM_INTEL_MSR_CAPABLE: - case SYSTEM_AMD_MSR_CAPABLE: rdmsr(cmd->addr.msr.reg, cmd->val, h); break; case SYSTEM_IO_CAPABLE: @@ -303,9 +174,6 @@ static void do_drv_write(void *_cmd) lo = (lo & ~INTEL_MSR_RANGE) | (cmd->val & INTEL_MSR_RANGE); wrmsr(cmd->addr.msr.reg, lo, hi); break; - case SYSTEM_AMD_MSR_CAPABLE: - wrmsr(cmd->addr.msr.reg, cmd->val, 0); - break; case SYSTEM_IO_CAPABLE: acpi_os_write_port((acpi_io_address)cmd->addr.io.port, cmd->val, @@ -349,10 +217,6 @@ static u32 get_cur_val(const struct cpumask *mask) cmd.type = SYSTEM_INTEL_MSR_CAPABLE; cmd.addr.msr.reg = MSR_IA32_PERF_STATUS; break; - case SYSTEM_AMD_MSR_CAPABLE: - cmd.type = SYSTEM_AMD_MSR_CAPABLE; - cmd.addr.msr.reg = MSR_AMD_PERF_STATUS; - break; case SYSTEM_IO_CAPABLE: cmd.type = SYSTEM_IO_CAPABLE; perf = per_cpu(acfreq_data, cpumask_first(mask))->acpi_data; @@ -462,11 +326,6 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy, cmd.addr.msr.reg = MSR_IA32_PERF_CTL; cmd.val = (u32) perf->states[next_perf_state].control; break; - case SYSTEM_AMD_MSR_CAPABLE: - cmd.type = SYSTEM_AMD_MSR_CAPABLE; - cmd.addr.msr.reg = MSR_AMD_PERF_CTL; - cmd.val = (u32) perf->states[next_perf_state].control; - break; case SYSTEM_IO_CAPABLE: cmd.type = SYSTEM_IO_CAPABLE; cmd.addr.io.port = perf->control_register.address; @@ -560,44 +419,6 @@ static void free_acpi_perf_data(void) free_percpu(acpi_perf_data); } -static int boost_notify(struct notifier_block *nb, unsigned long action, - void *hcpu) -{ - unsigned cpu = (long)hcpu; - const struct cpumask *cpumask; - - cpumask = get_cpu_mask(cpu); - - /* - * 
Clear the boost-disable bit on the CPU_DOWN path so that - * this cpu cannot block the remaining ones from boosting. On - * the CPU_UP path we simply keep the boost-disable flag in - * sync with the current global state. - */ - - switch (action) { - case CPU_UP_PREPARE: - case CPU_UP_PREPARE_FROZEN: - boost_set_msrs(boost_enabled, cpumask); - break; - - case CPU_DOWN_PREPARE: - case CPU_DOWN_PREPARE_FROZEN: - boost_set_msrs(1, cpumask); - break; - - default: - break; - } - - return NOTIFY_OK; -} - - -static struct notifier_block boost_nb = { - .notifier_call = boost_notify, -}; - /* * acpi_cpufreq_early_init - initialize ACPI P-States library * @@ -738,14 +559,6 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) policy->shared_type = CPUFREQ_SHARED_TYPE_ALL; cpumask_copy(policy->cpus, cpu_core_mask(cpu)); } - - if (check_amd_hwpstate_cpu(cpu) && !acpi_pstate_strict) { - cpumask_clear(policy->cpus); - cpumask_set_cpu(cpu, policy->cpus); - cpumask_copy(policy->related_cpus, cpu_sibling_mask(cpu)); - policy->shared_type = CPUFREQ_SHARED_TYPE_HW; - pr_info_once(PFX "overriding BIOS provided _PSD data\n"); - } #endif /* capability check */ @@ -767,16 +580,12 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) break; case ACPI_ADR_SPACE_FIXED_HARDWARE: pr_debug("HARDWARE addr space\n"); - if (check_est_cpu(cpu)) { - data->cpu_feature = SYSTEM_INTEL_MSR_CAPABLE; - break; - } - if (check_amd_hwpstate_cpu(cpu)) { - data->cpu_feature = SYSTEM_AMD_MSR_CAPABLE; - break; + if (!check_est_cpu(cpu)) { + result = -ENODEV; + goto err_unreg; } - result = -ENODEV; - goto err_unreg; + data->cpu_feature = SYSTEM_INTEL_MSR_CAPABLE; + break; default: pr_debug("Unknown addr space %d\n", (u32) (perf->control_register.space_id)); @@ -909,7 +718,6 @@ static int acpi_cpufreq_resume(struct cpufreq_policy *policy) static struct freq_attr *acpi_cpufreq_attr[] = { &cpufreq_freq_attr_scaling_available_freqs, - NULL, /* this is a placeholder for cpb, do not remove */ NULL, }; @@ -925,49 +733,6 @@ static struct cpufreq_driver acpi_cpufreq_driver = { .attr = acpi_cpufreq_attr, }; -static void __init acpi_cpufreq_boost_init(void) -{ - if (boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)) { - msrs = msrs_alloc(); - - if (!msrs) - return; - - boost_supported = true; - boost_enabled = boost_state(0); - - get_online_cpus(); - - /* Force all MSRs to the same value */ - boost_set_msrs(boost_enabled, cpu_online_mask); - - register_cpu_notifier(&boost_nb); - - put_online_cpus(); - } else - global_boost.attr.mode = 0444; - - /* We create the boost file in any case, though for systems without - * hardware support it will be read-only and hardwired to return 0. - */ - if (sysfs_create_file(cpufreq_global_kobject, &(global_boost.attr))) - pr_warn(PFX "could not register global boost sysfs file\n"); - else - pr_debug("registered global boost sysfs file\n"); -} - -static void __exit acpi_cpufreq_boost_exit(void) -{ - sysfs_remove_file(cpufreq_global_kobject, &(global_boost.attr)); - - if (msrs) { - unregister_cpu_notifier(&boost_nb); - - msrs_free(msrs); - msrs = NULL; - } -} - static int __init acpi_cpufreq_init(void) { int ret; @@ -981,32 +746,9 @@ static int __init acpi_cpufreq_init(void) if (ret) return ret; -#ifdef CONFIG_X86_ACPI_CPUFREQ_CPB - /* this is a sysfs file with a strange name and an even stranger - * semantic - per CPU instantiation, but system global effect. - * Lets enable it only on AMD CPUs for compatibility reasons and - * only if configured. 
This is considered legacy code, which - * will probably be removed at some point in the future. - */ - if (check_amd_hwpstate_cpu(0)) { - struct freq_attr **iter; - - pr_debug("adding sysfs entry for cpb\n"); - - for (iter = acpi_cpufreq_attr; *iter != NULL; iter++) - ; - - /* make sure there is a terminator behind it */ - if (iter[1] == NULL) - *iter = &cpb; - } -#endif - ret = cpufreq_register_driver(&acpi_cpufreq_driver); if (ret) free_acpi_perf_data(); - else - acpi_cpufreq_boost_init(); return ret; } @@ -1015,8 +757,6 @@ static void __exit acpi_cpufreq_exit(void) { pr_debug("acpi_cpufreq_exit\n"); - acpi_cpufreq_boost_exit(); - cpufreq_unregister_driver(&acpi_cpufreq_driver); free_acpi_perf_data(); diff --git a/trunk/drivers/cpufreq/cpufreq-cpu0.c b/trunk/drivers/cpufreq/cpufreq-cpu0.c deleted file mode 100644 index e9158278c71d..000000000000 --- a/trunk/drivers/cpufreq/cpufreq-cpu0.c +++ /dev/null @@ -1,269 +0,0 @@ -/* - * Copyright (C) 2012 Freescale Semiconductor, Inc. - * - * The OPP code in function cpu0_set_target() is reused from - * drivers/cpufreq/omap-cpufreq.c - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. - */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -static unsigned int transition_latency; -static unsigned int voltage_tolerance; /* in percentage */ - -static struct device *cpu_dev; -static struct clk *cpu_clk; -static struct regulator *cpu_reg; -static struct cpufreq_frequency_table *freq_table; - -static int cpu0_verify_speed(struct cpufreq_policy *policy) -{ - return cpufreq_frequency_table_verify(policy, freq_table); -} - -static unsigned int cpu0_get_speed(unsigned int cpu) -{ - return clk_get_rate(cpu_clk) / 1000; -} - -static int cpu0_set_target(struct cpufreq_policy *policy, - unsigned int target_freq, unsigned int relation) -{ - struct cpufreq_freqs freqs; - struct opp *opp; - unsigned long freq_Hz, volt = 0, volt_old = 0, tol = 0; - unsigned int index, cpu; - int ret; - - ret = cpufreq_frequency_table_target(policy, freq_table, target_freq, - relation, &index); - if (ret) { - pr_err("failed to match target freqency %d: %d\n", - target_freq, ret); - return ret; - } - - freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000); - if (freq_Hz < 0) - freq_Hz = freq_table[index].frequency * 1000; - freqs.new = freq_Hz / 1000; - freqs.old = clk_get_rate(cpu_clk) / 1000; - - if (freqs.old == freqs.new) - return 0; - - for_each_online_cpu(cpu) { - freqs.cpu = cpu; - cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); - } - - if (cpu_reg) { - opp = opp_find_freq_ceil(cpu_dev, &freq_Hz); - if (IS_ERR(opp)) { - pr_err("failed to find OPP for %ld\n", freq_Hz); - return PTR_ERR(opp); - } - volt = opp_get_voltage(opp); - tol = volt * voltage_tolerance / 100; - volt_old = regulator_get_voltage(cpu_reg); - } - - pr_debug("%u MHz, %ld mV --> %u MHz, %ld mV\n", - freqs.old / 1000, volt_old ? volt_old / 1000 : -1, - freqs.new / 1000, volt ? volt / 1000 : -1); - - /* scaling up? 
scale voltage before frequency */ - if (cpu_reg && freqs.new > freqs.old) { - ret = regulator_set_voltage_tol(cpu_reg, volt, tol); - if (ret) { - pr_err("failed to scale voltage up: %d\n", ret); - freqs.new = freqs.old; - return ret; - } - } - - ret = clk_set_rate(cpu_clk, freqs.new * 1000); - if (ret) { - pr_err("failed to set clock rate: %d\n", ret); - if (cpu_reg) - regulator_set_voltage_tol(cpu_reg, volt_old, tol); - return ret; - } - - /* scaling down? scale voltage after frequency */ - if (cpu_reg && freqs.new < freqs.old) { - ret = regulator_set_voltage_tol(cpu_reg, volt, tol); - if (ret) { - pr_err("failed to scale voltage down: %d\n", ret); - clk_set_rate(cpu_clk, freqs.old * 1000); - freqs.new = freqs.old; - return ret; - } - } - - for_each_online_cpu(cpu) { - freqs.cpu = cpu; - cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); - } - - return 0; -} - -static int cpu0_cpufreq_init(struct cpufreq_policy *policy) -{ - int ret; - - if (policy->cpu != 0) - return -EINVAL; - - ret = cpufreq_frequency_table_cpuinfo(policy, freq_table); - if (ret) { - pr_err("invalid frequency table: %d\n", ret); - return ret; - } - - policy->cpuinfo.transition_latency = transition_latency; - policy->cur = clk_get_rate(cpu_clk) / 1000; - - /* - * The driver only supports the SMP configuartion where all processors - * share the clock and voltage and clock. Use cpufreq affected_cpus - * interface to have all CPUs scaled together. - */ - policy->shared_type = CPUFREQ_SHARED_TYPE_ANY; - cpumask_setall(policy->cpus); - - cpufreq_frequency_table_get_attr(freq_table, policy->cpu); - - return 0; -} - -static int cpu0_cpufreq_exit(struct cpufreq_policy *policy) -{ - cpufreq_frequency_table_put_attr(policy->cpu); - - return 0; -} - -static struct freq_attr *cpu0_cpufreq_attr[] = { - &cpufreq_freq_attr_scaling_available_freqs, - NULL, -}; - -static struct cpufreq_driver cpu0_cpufreq_driver = { - .flags = CPUFREQ_STICKY, - .verify = cpu0_verify_speed, - .target = cpu0_set_target, - .get = cpu0_get_speed, - .init = cpu0_cpufreq_init, - .exit = cpu0_cpufreq_exit, - .name = "generic_cpu0", - .attr = cpu0_cpufreq_attr, -}; - -static int __devinit cpu0_cpufreq_driver_init(void) -{ - struct device_node *np; - int ret; - - np = of_find_node_by_path("/cpus/cpu@0"); - if (!np) { - pr_err("failed to find cpu0 node\n"); - return -ENOENT; - } - - cpu_dev = get_cpu_device(0); - if (!cpu_dev) { - pr_err("failed to get cpu0 device\n"); - ret = -ENODEV; - goto out_put_node; - } - - cpu_dev->of_node = np; - - cpu_clk = clk_get(cpu_dev, NULL); - if (IS_ERR(cpu_clk)) { - ret = PTR_ERR(cpu_clk); - pr_err("failed to get cpu0 clock: %d\n", ret); - goto out_put_node; - } - - cpu_reg = regulator_get(cpu_dev, "cpu0"); - if (IS_ERR(cpu_reg)) { - pr_warn("failed to get cpu0 regulator\n"); - cpu_reg = NULL; - } - - ret = of_init_opp_table(cpu_dev); - if (ret) { - pr_err("failed to init OPP table: %d\n", ret); - goto out_put_node; - } - - ret = opp_init_cpufreq_table(cpu_dev, &freq_table); - if (ret) { - pr_err("failed to init cpufreq table: %d\n", ret); - goto out_put_node; - } - - of_property_read_u32(np, "voltage-tolerance", &voltage_tolerance); - - if (of_property_read_u32(np, "clock-latency", &transition_latency)) - transition_latency = CPUFREQ_ETERNAL; - - if (cpu_reg) { - struct opp *opp; - unsigned long min_uV, max_uV; - int i; - - /* - * OPP is maintained in order of increasing frequency, and - * freq_table initialised from OPP is therefore sorted in the - * same order. 
- */ - for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++) - ; - opp = opp_find_freq_exact(cpu_dev, - freq_table[0].frequency * 1000, true); - min_uV = opp_get_voltage(opp); - opp = opp_find_freq_exact(cpu_dev, - freq_table[i-1].frequency * 1000, true); - max_uV = opp_get_voltage(opp); - ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV); - if (ret > 0) - transition_latency += ret * 1000; - } - - ret = cpufreq_register_driver(&cpu0_cpufreq_driver); - if (ret) { - pr_err("failed register driver: %d\n", ret); - goto out_free_table; - } - - of_node_put(np); - return 0; - -out_free_table: - opp_free_cpufreq_table(cpu_dev, &freq_table); -out_put_node: - of_node_put(np); - return ret; -} -late_initcall(cpu0_cpufreq_driver_init); - -MODULE_AUTHOR("Shawn Guo "); -MODULE_DESCRIPTION("Generic CPU0 cpufreq driver"); -MODULE_LICENSE("GPL"); diff --git a/trunk/drivers/cpufreq/cpufreq_conservative.c b/trunk/drivers/cpufreq/cpufreq_conservative.c index b75dc2c2f8d3..235a340e81f2 100644 --- a/trunk/drivers/cpufreq/cpufreq_conservative.c +++ b/trunk/drivers/cpufreq/cpufreq_conservative.c @@ -504,7 +504,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, j_dbs_info->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE]; } - this_dbs_info->cpu = cpu; this_dbs_info->down_skip = 0; this_dbs_info->requested_freq = policy->cur; @@ -584,7 +583,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, __cpufreq_driver_target( this_dbs_info->cur_policy, policy->min, CPUFREQ_RELATION_L); - dbs_check_cpu(this_dbs_info); mutex_unlock(&this_dbs_info->timer_mutex); break; diff --git a/trunk/drivers/cpufreq/cpufreq_ondemand.c b/trunk/drivers/cpufreq/cpufreq_ondemand.c index 9479fb33c30f..836e9b062e5e 100644 --- a/trunk/drivers/cpufreq/cpufreq_ondemand.c +++ b/trunk/drivers/cpufreq/cpufreq_ondemand.c @@ -761,7 +761,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, else if (policy->min > this_dbs_info->cur_policy->cur) __cpufreq_driver_target(this_dbs_info->cur_policy, policy->min, CPUFREQ_RELATION_L); - dbs_check_cpu(this_dbs_info); mutex_unlock(&this_dbs_info->timer_mutex); break; } diff --git a/trunk/drivers/cpufreq/longhaul.h b/trunk/drivers/cpufreq/longhaul.h index e2dc436099d1..cbf48fbca881 100644 --- a/trunk/drivers/cpufreq/longhaul.h +++ b/trunk/drivers/cpufreq/longhaul.h @@ -56,7 +56,7 @@ union msr_longhaul { /* * VIA C3 Samuel 1 & Samuel 2 (stepping 0) */ -static const int __cpuinitconst samuel1_mults[16] = { +static const int __cpuinitdata samuel1_mults[16] = { -1, /* 0000 -> RESERVED */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -75,7 +75,7 @@ static const int __cpuinitconst samuel1_mults[16] = { -1, /* 1111 -> RESERVED */ }; -static const int __cpuinitconst samuel1_eblcr[16] = { +static const int __cpuinitdata samuel1_eblcr[16] = { 50, /* 0000 -> RESERVED */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -97,7 +97,7 @@ static const int __cpuinitconst samuel1_eblcr[16] = { /* * VIA C3 Samuel2 Stepping 1->15 */ -static const int __cpuinitconst samuel2_eblcr[16] = { +static const int __cpuinitdata samuel2_eblcr[16] = { 50, /* 0000 -> 5.0x */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -119,7 +119,7 @@ static const int __cpuinitconst samuel2_eblcr[16] = { /* * VIA C3 Ezra */ -static const int __cpuinitconst ezra_mults[16] = { +static const int __cpuinitdata ezra_mults[16] = { 100, /* 0000 -> 10.0x */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -138,7 +138,7 @@ static const int __cpuinitconst ezra_mults[16] = { 120, /* 1111 -> 12.0x */ 
}; -static const int __cpuinitconst ezra_eblcr[16] = { +static const int __cpuinitdata ezra_eblcr[16] = { 50, /* 0000 -> 5.0x */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -160,7 +160,7 @@ static const int __cpuinitconst ezra_eblcr[16] = { /* * VIA C3 (Ezra-T) [C5M]. */ -static const int __cpuinitconst ezrat_mults[32] = { +static const int __cpuinitdata ezrat_mults[32] = { 100, /* 0000 -> 10.0x */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -196,7 +196,7 @@ static const int __cpuinitconst ezrat_mults[32] = { -1, /* 1111 -> RESERVED (12.0x) */ }; -static const int __cpuinitconst ezrat_eblcr[32] = { +static const int __cpuinitdata ezrat_eblcr[32] = { 50, /* 0000 -> 5.0x */ 30, /* 0001 -> 3.0x */ 40, /* 0010 -> 4.0x */ @@ -235,7 +235,7 @@ static const int __cpuinitconst ezrat_eblcr[32] = { /* * VIA C3 Nehemiah */ -static const int __cpuinitconst nehemiah_mults[32] = { +static const int __cpuinitdata nehemiah_mults[32] = { 100, /* 0000 -> 10.0x */ -1, /* 0001 -> 16.0x */ 40, /* 0010 -> 4.0x */ @@ -270,7 +270,7 @@ static const int __cpuinitconst nehemiah_mults[32] = { -1, /* 1111 -> 12.0x */ }; -static const int __cpuinitconst nehemiah_eblcr[32] = { +static const int __cpuinitdata nehemiah_eblcr[32] = { 50, /* 0000 -> 5.0x */ 160, /* 0001 -> 16.0x */ 40, /* 0010 -> 4.0x */ @@ -315,7 +315,7 @@ struct mV_pos { unsigned short pos; }; -static const struct mV_pos __cpuinitconst vrm85_mV[32] = { +static const struct mV_pos __cpuinitdata vrm85_mV[32] = { {1250, 8}, {1200, 6}, {1150, 4}, {1100, 2}, {1050, 0}, {1800, 30}, {1750, 28}, {1700, 26}, {1650, 24}, {1600, 22}, {1550, 20}, {1500, 18}, @@ -326,14 +326,14 @@ static const struct mV_pos __cpuinitconst vrm85_mV[32] = { {1475, 17}, {1425, 15}, {1375, 13}, {1325, 11} }; -static const unsigned char __cpuinitconst mV_vrm85[32] = { +static const unsigned char __cpuinitdata mV_vrm85[32] = { 0x04, 0x14, 0x03, 0x13, 0x02, 0x12, 0x01, 0x11, 0x00, 0x10, 0x0f, 0x1f, 0x0e, 0x1e, 0x0d, 0x1d, 0x0c, 0x1c, 0x0b, 0x1b, 0x0a, 0x1a, 0x09, 0x19, 0x08, 0x18, 0x07, 0x17, 0x06, 0x16, 0x05, 0x15 }; -static const struct mV_pos __cpuinitconst mobilevrm_mV[32] = { +static const struct mV_pos __cpuinitdata mobilevrm_mV[32] = { {1750, 31}, {1700, 30}, {1650, 29}, {1600, 28}, {1550, 27}, {1500, 26}, {1450, 25}, {1400, 24}, {1350, 23}, {1300, 22}, {1250, 21}, {1200, 20}, @@ -344,7 +344,7 @@ static const struct mV_pos __cpuinitconst mobilevrm_mV[32] = { {675, 3}, {650, 2}, {625, 1}, {600, 0} }; -static const unsigned char __cpuinitconst mV_mobilevrm[32] = { +static const unsigned char __cpuinitdata mV_mobilevrm[32] = { 0x1f, 0x1e, 0x1d, 0x1c, 0x1b, 0x1a, 0x19, 0x18, 0x17, 0x16, 0x15, 0x14, 0x13, 0x12, 0x11, 0x10, 0x0f, 0x0e, 0x0d, 0x0c, 0x0b, 0x0a, 0x09, 0x08, diff --git a/trunk/drivers/cpufreq/omap-cpufreq.c b/trunk/drivers/cpufreq/omap-cpufreq.c index 65f8e9a54975..b47034e650a5 100644 --- a/trunk/drivers/cpufreq/omap-cpufreq.c +++ b/trunk/drivers/cpufreq/omap-cpufreq.c @@ -40,6 +40,16 @@ /* OPP tolerance in percentage */ #define OPP_TOLERANCE 4 +#ifdef CONFIG_SMP +struct lpj_info { + unsigned long ref; + unsigned int freq; +}; + +static DEFINE_PER_CPU(struct lpj_info, lpj_ref); +static struct lpj_info global_lpj_ref; +#endif + static struct cpufreq_frequency_table *freq_table; static atomic_t freq_table_users = ATOMIC_INIT(0); static struct clk *mpu_clk; @@ -151,6 +161,31 @@ static int omap_target(struct cpufreq_policy *policy, } freqs.new = omap_getspeed(policy->cpu); +#ifdef CONFIG_SMP + /* + * Note that loops_per_jiffy is not updated on SMP systems in + * cpufreq 
driver. So, update the per-CPU loops_per_jiffy value
+	 * on frequency transition. We need to update all dependent CPUs.
+	 */
+	for_each_cpu(i, policy->cpus) {
+		struct lpj_info *lpj = &per_cpu(lpj_ref, i);
+		if (!lpj->freq) {
+			lpj->ref = per_cpu(cpu_data, i).loops_per_jiffy;
+			lpj->freq = freqs.old;
+		}
+
+		per_cpu(cpu_data, i).loops_per_jiffy =
+			cpufreq_scale(lpj->ref, lpj->freq, freqs.new);
+	}
+
+	/* And don't forget to adjust the global one */
+	if (!global_lpj_ref.freq) {
+		global_lpj_ref.ref = loops_per_jiffy;
+		global_lpj_ref.freq = freqs.old;
+	}
+	loops_per_jiffy = cpufreq_scale(global_lpj_ref.ref, global_lpj_ref.freq,
+					freqs.new);
+#endif
 
 done:
 	/* notifiers */
@@ -266,9 +301,9 @@ static int __init omap_cpufreq_init(void)
 	}
 
 	mpu_dev = omap_device_get_by_hwmod_name("mpu");
-	if (IS_ERR(mpu_dev)) {
+	if (!mpu_dev) {
 		pr_warning("%s: unable to get the mpu device\n", __func__);
-		return PTR_ERR(mpu_dev);
+		return -EINVAL;
 	}
 
 	mpu_reg = regulator_get(mpu_dev, "vcc");
diff --git a/trunk/drivers/cpufreq/powernow-k8.c b/trunk/drivers/cpufreq/powernow-k8.c
index 0b19faf002ee..c0e816468e30 100644
--- a/trunk/drivers/cpufreq/powernow-k8.c
+++ b/trunk/drivers/cpufreq/powernow-k8.c
@@ -49,12 +49,22 @@
 #define PFX "powernow-k8: "
 #define VERSION "version 2.20.00"
 #include "powernow-k8.h"
+#include "mperf.h"
 
 /* serialize freq changes */
 static DEFINE_MUTEX(fidvid_mutex);
 
 static DEFINE_PER_CPU(struct powernow_k8_data *, powernow_data);
 
+static int cpu_family = CPU_OPTERON;
+
+/* array to map SW pstate number to acpi state */
+static u32 ps_to_as[8];
+
+/* core performance boost */
+static bool cpb_capable, cpb_enabled;
+static struct msr __percpu *msrs;
+
 static struct cpufreq_driver cpufreq_amd64_driver;
 
 #ifndef CONFIG_SMP
@@ -76,6 +86,12 @@ static u32 find_khz_freq_from_fid(u32 fid)
 	return 1000 * find_freq_from_fid(fid);
 }
 
+static u32 find_khz_freq_from_pstate(struct cpufreq_frequency_table *data,
+				     u32 pstate)
+{
+	return data[ps_to_as[pstate]].frequency;
+}
+
 /* Return the vco fid for an input fid
  *
  * Each "low" fid has corresponding "high" fid, and you can get to "low" fids
@@ -98,6 +114,9 @@ static int pending_bit_stuck(void)
 {
 	u32 lo, hi;
 
+	if (cpu_family == CPU_HW_PSTATE)
+		return 0;
+
 	rdmsr(MSR_FIDVID_STATUS, lo, hi);
 	return lo & MSR_S_LO_CHANGE_PENDING ? 1 : 0;
 }
@@ -111,6 +130,20 @@ static int query_current_values_with_pending_wait(struct powernow_k8_data *data)
 	u32 lo, hi;
 	u32 i = 0;
 
+	if (cpu_family == CPU_HW_PSTATE) {
+		rdmsr(MSR_PSTATE_STATUS, lo, hi);
+		i = lo & HW_PSTATE_MASK;
+		data->currpstate = i;
+
+		/*
+		 * a workaround for family 11h erratum 311 might cause
+		 * an "out-of-range" Pstate if the core is in Pstate-0
+		 */
+		if ((boot_cpu_data.x86 == 0x11) && (i >= data->numps))
+			data->currpstate = HW_PSTATE_0;
+
+		return 0;
+	}
 	do {
 		if (i++ > 10000) {
 			pr_debug("detected change pending stuck\n");
@@ -267,6 +300,14 @@ static int decrease_vid_code_by_step(struct powernow_k8_data *data,
 	return 0;
 }
 
+/* Change hardware pstate by single MSR write */
+static int transition_pstate(struct powernow_k8_data *data, u32 pstate)
+{
+	wrmsr(MSR_PSTATE_CTRL, pstate, 0);
+	data->currpstate = pstate;
+	return 0;
+}
+
 /* Change Opteron/Athlon64 fid and vid, by the 3 phases.
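The hardware P-state path added here is deliberately thin: a transition is
a single write to MSR_PSTATE_CTRL, and the matching read-back is the
MSR_PSTATE_STATUS query added to query_current_values_with_pending_wait().
A condensed sketch of the round trip, using the MSR names defined in
powernow-k8.h:

	u32 lo, hi;

	wrmsr(MSR_PSTATE_CTRL, pstate, 0);	/* request the P-state */
	rdmsr(MSR_PSTATE_STATUS, lo, hi);	/* what the core settled on */
	if ((lo & HW_PSTATE_MASK) != pstate)
		pr_debug("core still settling into pstate %u\n", pstate);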
*/ static int transition_fid_vid(struct powernow_k8_data *data, u32 reqfid, u32 reqvid) @@ -483,6 +524,8 @@ static int core_voltage_post_transition(struct powernow_k8_data *data, static const struct x86_cpu_id powernow_k8_ids[] = { /* IO based frequency switching */ { X86_VENDOR_AMD, 0xf }, + /* MSR based frequency switching supported */ + X86_FEATURE_MATCH(X86_FEATURE_HW_PSTATE), {} }; MODULE_DEVICE_TABLE(x86cpu, powernow_k8_ids); @@ -518,8 +561,15 @@ static void check_supported_cpu(void *_rc) "Power state transitions not supported\n"); return; } - *rc = 0; + } else { /* must be a HW Pstate capable processor */ + cpuid(CPUID_FREQ_VOLT_CAPABILITIES, &eax, &ebx, &ecx, &edx); + if ((edx & USE_HW_PSTATE) == USE_HW_PSTATE) + cpu_family = CPU_HW_PSTATE; + else + return; } + + *rc = 0; } static int check_pst_table(struct powernow_k8_data *data, struct pst_s *pst, @@ -583,11 +633,18 @@ static void print_basics(struct powernow_k8_data *data) for (j = 0; j < data->numps; j++) { if (data->powernow_table[j].frequency != CPUFREQ_ENTRY_INVALID) { + if (cpu_family == CPU_HW_PSTATE) { + printk(KERN_INFO PFX + " %d : pstate %d (%d MHz)\n", j, + data->powernow_table[j].index, + data->powernow_table[j].frequency/1000); + } else { printk(KERN_INFO PFX "fid 0x%x (%d MHz), vid 0x%x\n", data->powernow_table[j].index & 0xff, data->powernow_table[j].frequency/1000, data->powernow_table[j].index >> 8); + } } } if (data->batps) @@ -595,6 +652,20 @@ static void print_basics(struct powernow_k8_data *data) data->batps); } +static u32 freq_from_fid_did(u32 fid, u32 did) +{ + u32 mhz = 0; + + if (boot_cpu_data.x86 == 0x10) + mhz = (100 * (fid + 0x10)) >> did; + else if (boot_cpu_data.x86 == 0x11) + mhz = (100 * (fid + 8)) >> did; + else + BUG(); + + return mhz * 1000; +} + static int fill_powernow_table(struct powernow_k8_data *data, struct pst_s *pst, u8 maxvid) { @@ -754,7 +825,7 @@ static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, { u64 control; - if (!data->acpi_data.state_count) + if (!data->acpi_data.state_count || (cpu_family == CPU_HW_PSTATE)) return; control = data->acpi_data.states[index].control; @@ -805,7 +876,10 @@ static int powernow_k8_cpu_init_acpi(struct powernow_k8_data *data) data->numps = data->acpi_data.state_count; powernow_k8_acpi_pst_values(data, 0); - ret_val = fill_powernow_table_fidvid(data, powernow_table); + if (cpu_family == CPU_HW_PSTATE) + ret_val = fill_powernow_table_pstate(data, powernow_table); + else + ret_val = fill_powernow_table_fidvid(data, powernow_table); if (ret_val) goto err_out_mem; @@ -842,6 +916,51 @@ static int powernow_k8_cpu_init_acpi(struct powernow_k8_data *data) return ret_val; } +static int fill_powernow_table_pstate(struct powernow_k8_data *data, + struct cpufreq_frequency_table *powernow_table) +{ + int i; + u32 hi = 0, lo = 0; + rdmsr(MSR_PSTATE_CUR_LIMIT, lo, hi); + data->max_hw_pstate = (lo & HW_PSTATE_MAX_MASK) >> HW_PSTATE_MAX_SHIFT; + + for (i = 0; i < data->acpi_data.state_count; i++) { + u32 index; + + index = data->acpi_data.states[i].control & HW_PSTATE_MASK; + if (index > data->max_hw_pstate) { + printk(KERN_ERR PFX "invalid pstate %d - " + "bad value %d.\n", i, index); + printk(KERN_ERR PFX "Please report to BIOS " + "manufacturer\n"); + invalidate_entry(powernow_table, i); + continue; + } + + ps_to_as[index] = i; + + /* Frequency may be rounded for these */ + if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10) + || boot_cpu_data.x86 == 0x11) { + + rdmsr(MSR_PSTATE_DEF_BASE + index, lo, hi); + if (!(hi & 
HW_PSTATE_VALID_MASK)) { + pr_debug("invalid pstate %d, ignoring\n", index); + invalidate_entry(powernow_table, i); + continue; + } + + powernow_table[i].frequency = + freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7); + } else + powernow_table[i].frequency = + data->acpi_data.states[i].core_frequency * 1000; + + powernow_table[i].index = index; + } + return 0; +} + static int fill_powernow_table_fidvid(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table) { @@ -918,7 +1037,15 @@ static int get_transition_latency(struct powernow_k8_data *data) max_latency = cur_latency; } if (max_latency == 0) { - pr_err(FW_WARN PFX "Invalid zero transition latency\n"); + /* + * Fam 11h and later may return 0 as transition latency. This + * is intended and means "very fast". While cpufreq core and + * governors currently can handle that gracefully, better set it + * to 1 to avoid problems in the future. + */ + if (boot_cpu_data.x86 < 0x11) + printk(KERN_ERR FW_WARN PFX "Invalid zero transition " + "latency\n"); max_latency = 1; } /* value in usecs, needs to be in nanoseconds */ @@ -978,6 +1105,40 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data, return res; } +/* Take a frequency, and issue the hardware pstate transition command */ +static int transition_frequency_pstate(struct powernow_k8_data *data, + unsigned int index) +{ + u32 pstate = 0; + int res, i; + struct cpufreq_freqs freqs; + + pr_debug("cpu %d transition to index %u\n", smp_processor_id(), index); + + /* get MSR index for hardware pstate transition */ + pstate = index & HW_PSTATE_MASK; + if (pstate > data->max_hw_pstate) + return -EINVAL; + + freqs.old = find_khz_freq_from_pstate(data->powernow_table, + data->currpstate); + freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate); + + for_each_cpu(i, data->available_cores) { + freqs.cpu = i; + cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE); + } + + res = transition_pstate(data, pstate); + freqs.new = find_khz_freq_from_pstate(data->powernow_table, pstate); + + for_each_cpu(i, data->available_cores) { + freqs.cpu = i; + cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); + } + return res; +} + /* Driver entry point to switch to the target frequency */ static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsigned relation) @@ -1019,15 +1180,18 @@ static int powernowk8_target(struct cpufreq_policy *pol, if (query_current_values_with_pending_wait(data)) goto err_out; - pr_debug("targ: curr fid 0x%x, vid 0x%x\n", - data->currfid, data->currvid); + if (cpu_family != CPU_HW_PSTATE) { + pr_debug("targ: curr fid 0x%x, vid 0x%x\n", + data->currfid, data->currvid); - if ((checkvid != data->currvid) || - (checkfid != data->currfid)) { - pr_info(PFX - "error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n", - checkfid, data->currfid, - checkvid, data->currvid); + if ((checkvid != data->currvid) || + (checkfid != data->currfid)) { + printk(KERN_INFO PFX + "error - out of sync, fix 0x%x 0x%x, " + "vid 0x%x 0x%x\n", + checkfid, data->currfid, + checkvid, data->currvid); + } } if (cpufreq_frequency_table_target(pol, data->powernow_table, @@ -1038,8 +1202,11 @@ static int powernowk8_target(struct cpufreq_policy *pol, powernow_k8_acpi_pst_values(data, newstate); - ret = transition_frequency_fidvid(data, newstate); - + if (cpu_family == CPU_HW_PSTATE) + ret = transition_frequency_pstate(data, + data->powernow_table[newstate].index); + else + ret = transition_frequency_fidvid(data, newstate); if (ret) { printk(KERN_ERR PFX 
"transition frequency failed\n"); ret = 1; @@ -1048,7 +1215,11 @@ static int powernowk8_target(struct cpufreq_policy *pol, } mutex_unlock(&fidvid_mutex); - pol->cur = find_khz_freq_from_fid(data->currfid); + if (cpu_family == CPU_HW_PSTATE) + pol->cur = find_khz_freq_from_pstate(data->powernow_table, + data->powernow_table[newstate].index); + else + pol->cur = find_khz_freq_from_fid(data->currfid); ret = 0; err_out: @@ -1088,23 +1259,22 @@ static void __cpuinit powernowk8_cpu_init_on_cpu(void *_init_on_cpu) return; } - fidvid_msr_init(); + if (cpu_family == CPU_OPTERON) + fidvid_msr_init(); init_on_cpu->rc = 0; } -static const char missing_pss_msg[] = - KERN_ERR - FW_BUG PFX "No compatible ACPI _PSS objects found.\n" - FW_BUG PFX "First, make sure Cool'N'Quiet is enabled in the BIOS.\n" - FW_BUG PFX "If that doesn't help, try upgrading your BIOS.\n"; - /* per CPU init entry point to the driver */ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) { + static const char ACPI_PSS_BIOS_BUG_MSG[] = + KERN_ERR FW_BUG PFX "No compatible ACPI _PSS objects found.\n" + FW_BUG PFX "Try again with latest BIOS.\n"; struct powernow_k8_data *data; struct init_on_cpu init_on_cpu; int rc; + struct cpuinfo_x86 *c = &cpu_data(pol->cpu); if (!cpu_online(pol->cpu)) return -ENODEV; @@ -1120,6 +1290,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) } data->cpu = pol->cpu; + data->currpstate = HW_PSTATE_INVALID; if (powernow_k8_cpu_init_acpi(data)) { /* @@ -1127,7 +1298,7 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) * an UP version, and is deprecated by AMD. */ if (num_online_cpus() != 1) { - printk_once(missing_pss_msg); + printk_once(ACPI_PSS_BIOS_BUG_MSG); goto err_out; } if (pol->cpu != 0) { @@ -1156,10 +1327,17 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) if (rc != 0) goto err_out_exit_acpi; - cpumask_copy(pol->cpus, cpu_core_mask(pol->cpu)); + if (cpu_family == CPU_HW_PSTATE) + cpumask_copy(pol->cpus, cpumask_of(pol->cpu)); + else + cpumask_copy(pol->cpus, cpu_core_mask(pol->cpu)); data->available_cores = pol->cpus; - pol->cur = find_khz_freq_from_fid(data->currfid); + if (cpu_family == CPU_HW_PSTATE) + pol->cur = find_khz_freq_from_pstate(data->powernow_table, + data->currpstate); + else + pol->cur = find_khz_freq_from_fid(data->currfid); pr_debug("policy current frequency %d kHz\n", pol->cur); /* min/max the cpu is capable of */ @@ -1171,10 +1349,18 @@ static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) return -EINVAL; } + /* Check for APERF/MPERF support in hardware */ + if (cpu_has(c, X86_FEATURE_APERFMPERF)) + cpufreq_amd64_driver.getavg = cpufreq_get_measured_perf; + cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu); - pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n", - data->currfid, data->currvid); + if (cpu_family == CPU_HW_PSTATE) + pr_debug("cpu_init done, current pstate 0x%x\n", + data->currpstate); + else + pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n", + data->currfid, data->currvid); per_cpu(powernow_data, pol->cpu) = data; @@ -1227,15 +1413,88 @@ static unsigned int powernowk8_get(unsigned int cpu) if (err) goto out; - khz = find_khz_freq_from_fid(data->currfid); + if (cpu_family == CPU_HW_PSTATE) + khz = find_khz_freq_from_pstate(data->powernow_table, + data->currpstate); + else + khz = find_khz_freq_from_fid(data->currfid); out: return khz; } +static void _cpb_toggle_msrs(bool t) +{ + int cpu; + + get_online_cpus(); + + 
rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + + for_each_cpu(cpu, cpu_online_mask) { + struct msr *reg = per_cpu_ptr(msrs, cpu); + if (t) + reg->l &= ~BIT(25); + else + reg->l |= BIT(25); + } + wrmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + + put_online_cpus(); +} + +/* + * Switch on/off core performance boosting. + * + * 0=disable + * 1=enable. + */ +static void cpb_toggle(bool t) +{ + if (!cpb_capable) + return; + + if (t && !cpb_enabled) { + cpb_enabled = true; + _cpb_toggle_msrs(t); + printk(KERN_INFO PFX "Core Boosting enabled.\n"); + } else if (!t && cpb_enabled) { + cpb_enabled = false; + _cpb_toggle_msrs(t); + printk(KERN_INFO PFX "Core Boosting disabled.\n"); + } +} + +static ssize_t store_cpb(struct cpufreq_policy *policy, const char *buf, + size_t count) +{ + int ret = -EINVAL; + unsigned long val = 0; + + ret = strict_strtoul(buf, 10, &val); + if (!ret && (val == 0 || val == 1) && cpb_capable) + cpb_toggle(val); + else + return -EINVAL; + + return count; +} + +static ssize_t show_cpb(struct cpufreq_policy *policy, char *buf) +{ + return sprintf(buf, "%u\n", cpb_enabled); +} + +#define define_one_rw(_name) \ +static struct freq_attr _name = \ +__ATTR(_name, 0644, show_##_name, store_##_name) + +define_one_rw(cpb); + static struct freq_attr *powernow_k8_attr[] = { &cpufreq_freq_attr_scaling_available_freqs, + &cpb, NULL, }; @@ -1251,18 +1510,53 @@ static struct cpufreq_driver cpufreq_amd64_driver = { .attr = powernow_k8_attr, }; +/* + * Clear the boost-disable flag on the CPU_DOWN path so that this cpu + * cannot block the remaining ones from boosting. On the CPU_UP path we + * simply keep the boost-disable flag in sync with the current global + * state. + */ +static int cpb_notify(struct notifier_block *nb, unsigned long action, + void *hcpu) +{ + unsigned cpu = (long)hcpu; + u32 lo, hi; + + switch (action) { + case CPU_UP_PREPARE: + case CPU_UP_PREPARE_FROZEN: + + if (!cpb_enabled) { + rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi); + lo |= BIT(25); + wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi); + } + break; + + case CPU_DOWN_PREPARE: + case CPU_DOWN_PREPARE_FROZEN: + rdmsr_on_cpu(cpu, MSR_K7_HWCR, &lo, &hi); + lo &= ~BIT(25); + wrmsr_on_cpu(cpu, MSR_K7_HWCR, lo, hi); + break; + + default: + break; + } + + return NOTIFY_OK; +} + +static struct notifier_block cpb_nb = { + .notifier_call = cpb_notify, +}; + /* driver entry point for init */ static int __cpuinit powernowk8_init(void) { - unsigned int i, supported_cpus = 0; + unsigned int i, supported_cpus = 0, cpu; int rv; - if (static_cpu_has(X86_FEATURE_HW_PSTATE)) { - pr_warn(PFX "this CPU is not supported anymore, using acpi-cpufreq instead.\n"); - request_module("acpi-cpufreq"); - return -ENODEV; - } - if (!x86_match_cpu(powernow_k8_ids)) return -ENODEV; @@ -1276,13 +1570,38 @@ static int __cpuinit powernowk8_init(void) if (supported_cpus != num_online_cpus()) return -ENODEV; - rv = cpufreq_register_driver(&cpufreq_amd64_driver); + printk(KERN_INFO PFX "Found %d %s (%d cpu cores) (" VERSION ")\n", + num_online_nodes(), boot_cpu_data.x86_model_id, supported_cpus); + + if (boot_cpu_has(X86_FEATURE_CPB)) { + + cpb_capable = true; + + msrs = msrs_alloc(); + if (!msrs) { + printk(KERN_ERR "%s: Error allocating msrs!\n", __func__); + return -ENOMEM; + } + + register_cpu_notifier(&cpb_nb); - if (!rv) - pr_info(PFX "Found %d %s (%d cpu cores) (" VERSION ")\n", - num_online_nodes(), boot_cpu_data.x86_model_id, - supported_cpus); + rdmsr_on_cpus(cpu_online_mask, MSR_K7_HWCR, msrs); + for_each_cpu(cpu, cpu_online_mask) { + struct msr 
*reg = per_cpu_ptr(msrs, cpu); + cpb_enabled |= !(!!(reg->l & BIT(25))); + } + + printk(KERN_INFO PFX "Core Performance Boosting: %s.\n", + (cpb_enabled ? "on" : "off")); + } + + rv = cpufreq_register_driver(&cpufreq_amd64_driver); + if (rv < 0 && boot_cpu_has(X86_FEATURE_CPB)) { + unregister_cpu_notifier(&cpb_nb); + msrs_free(msrs); + msrs = NULL; + } return rv; } @@ -1291,6 +1610,13 @@ static void __exit powernowk8_exit(void) { pr_debug("exit\n"); + if (boot_cpu_has(X86_FEATURE_CPB)) { + msrs_free(msrs); + msrs = NULL; + + unregister_cpu_notifier(&cpb_nb); + } + cpufreq_unregister_driver(&cpufreq_amd64_driver); } diff --git a/trunk/drivers/cpufreq/powernow-k8.h b/trunk/drivers/cpufreq/powernow-k8.h index 79329d4d5abe..3744d26cdc2b 100644 --- a/trunk/drivers/cpufreq/powernow-k8.h +++ b/trunk/drivers/cpufreq/powernow-k8.h @@ -5,11 +5,24 @@ * http://www.gnu.org/licenses/gpl.html */ +enum pstate { + HW_PSTATE_INVALID = 0xff, + HW_PSTATE_0 = 0, + HW_PSTATE_1 = 1, + HW_PSTATE_2 = 2, + HW_PSTATE_3 = 3, + HW_PSTATE_4 = 4, + HW_PSTATE_5 = 5, + HW_PSTATE_6 = 6, + HW_PSTATE_7 = 7, +}; + struct powernow_k8_data { unsigned int cpu; u32 numps; /* number of p-states */ u32 batps; /* number of p-states supported on battery */ + u32 max_hw_pstate; /* maximum legal hardware pstate */ /* these values are constant when the PSB is used to determine * vid/fid pairings, but are modified during the ->target() call @@ -24,6 +37,7 @@ struct powernow_k8_data { /* keep track of the current fid / vid or pstate */ u32 currvid; u32 currfid; + enum pstate currpstate; /* the powernow_table includes all frequency and vid/fid pairings: * fid are the lower 8 bits of the index, vid are the upper 8 bits. @@ -83,6 +97,23 @@ struct powernow_k8_data { #define MSR_S_HI_CURRENT_VID 0x0000003f #define MSR_C_HI_STP_GNT_BENIGN 0x00000001 + +/* Hardware Pstate _PSS and MSR definitions */ +#define USE_HW_PSTATE 0x00000080 +#define HW_PSTATE_MASK 0x00000007 +#define HW_PSTATE_VALID_MASK 0x80000000 +#define HW_PSTATE_MAX_MASK 0x000000f0 +#define HW_PSTATE_MAX_SHIFT 4 +#define MSR_PSTATE_DEF_BASE 0xc0010064 /* base of Pstate MSRs */ +#define MSR_PSTATE_STATUS 0xc0010063 /* Pstate Status MSR */ +#define MSR_PSTATE_CTRL 0xc0010062 /* Pstate control MSR */ +#define MSR_PSTATE_CUR_LIMIT 0xc0010061 /* pstate current limit MSR */ + +/* define the two driver architectures */ +#define CPU_OPTERON 0 +#define CPU_HW_PSTATE 1 + + /* * There are restrictions frequencies have to follow: * - only 1 entry in the low fid table ( <=1.4GHz ) @@ -187,4 +218,5 @@ static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid); static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index); +static int fill_powernow_table_pstate(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table); static int fill_powernow_table_fidvid(struct powernow_k8_data *data, struct cpufreq_frequency_table *powernow_table); diff --git a/trunk/drivers/cpuidle/driver.c b/trunk/drivers/cpuidle/driver.c index 87db3877fead..58bf3b1ac9c4 100644 --- a/trunk/drivers/cpuidle/driver.c +++ b/trunk/drivers/cpuidle/driver.c @@ -18,10 +18,9 @@ static struct cpuidle_driver *cpuidle_curr_driver; DEFINE_SPINLOCK(cpuidle_driver_lock); int cpuidle_driver_refcount; -static void set_power_states(struct cpuidle_driver *drv) +static void __cpuidle_register_driver(struct cpuidle_driver *drv) { int i; - /* * cpuidle driver should set the drv->power_specified bit * before registering if the driver provides @@ -36,10 +35,13 @@ static void 
set_power_states(struct cpuidle_driver *drv)
 	 * a power value of -1.  So we use -2, -3, etc, for other
 	 * c-states.
 	 */
-	for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++)
-		drv->states[i].power_usage = -1 - i;
+	if (!drv->power_specified) {
+		for (i = CPUIDLE_DRIVER_STATE_START; i < drv->state_count; i++)
+			drv->states[i].power_usage = -1 - i;
+	}
 }
+
 /**
  * cpuidle_register_driver - registers a driver
  * @drv: the driver
@@ -57,16 +59,13 @@ int cpuidle_register_driver(struct cpuidle_driver *drv)
 		spin_unlock(&cpuidle_driver_lock);
 		return -EBUSY;
 	}
-
-	if (!drv->power_specified)
-		set_power_states(drv);
-
+	__cpuidle_register_driver(drv);
 	cpuidle_curr_driver = drv;
-
 	spin_unlock(&cpuidle_driver_lock);
 
 	return 0;
 }
+
 EXPORT_SYMBOL_GPL(cpuidle_register_driver);
 
 /**
@@ -97,6 +96,7 @@ void cpuidle_unregister_driver(struct cpuidle_driver *drv)
 
 	spin_unlock(&cpuidle_driver_lock);
 }
+
 EXPORT_SYMBOL_GPL(cpuidle_unregister_driver);
 
 struct cpuidle_driver *cpuidle_driver_ref(void)
diff --git a/trunk/drivers/cpuidle/governors/ladder.c b/trunk/drivers/cpuidle/governors/ladder.c
index 9b784051ec12..b6a09ea859b1 100644
--- a/trunk/drivers/cpuidle/governors/ladder.c
+++ b/trunk/drivers/cpuidle/governors/ladder.c
@@ -88,8 +88,6 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 
 	/* consider promotion */
 	if (last_idx < drv->state_count - 1 &&
-	    !drv->states[last_idx + 1].disabled &&
-	    !dev->states_usage[last_idx + 1].disable &&
 	    last_residency > last_state->threshold.promotion_time &&
 	    drv->states[last_idx + 1].exit_latency <= latency_req) {
 		last_state->stats.promotion_count++;
@@ -102,9 +100,7 @@ static int ladder_select_state(struct cpuidle_driver *drv,
 
 	/* consider demotion */
 	if (last_idx > CPUIDLE_DRIVER_STATE_START &&
-	    (drv->states[last_idx].disabled ||
-	    dev->states_usage[last_idx].disable ||
-	    drv->states[last_idx].exit_latency > latency_req)) {
+	    drv->states[last_idx].exit_latency > latency_req) {
 		int i;
 
 		for (i = last_idx - 1; i > CPUIDLE_DRIVER_STATE_START; i--) {
diff --git a/trunk/drivers/xen/xen-acpi-processor.c b/trunk/drivers/xen/xen-acpi-processor.c
index 316df65163cf..b590ee067fcd 100644
--- a/trunk/drivers/xen/xen-acpi-processor.c
+++ b/trunk/drivers/xen/xen-acpi-processor.c
@@ -98,6 +98,7 @@ static int push_cxx_to_hypervisor(struct acpi_processor *_pr)
 
 		dst_cx->type = cx->type;
 		dst_cx->latency = cx->latency;
+		dst_cx->power = cx->power;
 
 		dst_cx->dpcnt = 0;
 		set_xen_guest_handle(dst_cx->dp, NULL);
diff --git a/trunk/include/acpi/processor.h b/trunk/include/acpi/processor.h
index 555d0337ad95..64ec644808bc 100644
--- a/trunk/include/acpi/processor.h
+++ b/trunk/include/acpi/processor.h
@@ -3,6 +3,7 @@
 
 #include <linux/kernel.h>
 #include <linux/cpu.h>
+#include <linux/cpuidle.h>
 #include <linux/thermal.h>
 #include <asm/acpi.h>
 
@@ -58,11 +59,13 @@ struct acpi_processor_cx {
 	u8 entry_method;
 	u8 index;
 	u32 latency;
+	u32 power;
 	u8 bm_sts_skip;
 	char desc[ACPI_CX_DESC_LEN];
 };
 
 struct acpi_processor_power {
+	struct cpuidle_device dev;
 	struct acpi_processor_cx *state;
 	unsigned long bm_check_timestamp;
 	u32 default_state;
@@ -322,10 +325,12 @@ extern void acpi_processor_reevaluate_tstate(struct acpi_processor *pr,
 extern const struct file_operations acpi_processor_throttling_fops;
 extern void acpi_processor_throttling_init(void);
 /* in processor_idle.c */
-int acpi_processor_power_init(struct acpi_processor *pr);
-int acpi_processor_power_exit(struct acpi_processor *pr);
+int acpi_processor_power_init(struct acpi_processor *pr,
+			      struct acpi_device *device);
 int acpi_processor_cst_has_changed(struct acpi_processor *pr);
 int acpi_processor_hotplug(struct
acpi_processor *pr); +int acpi_processor_power_exit(struct acpi_processor *pr, + struct acpi_device *device); int acpi_processor_suspend(struct device *dev); int acpi_processor_resume(struct device *dev); extern struct cpuidle_driver acpi_idle_driver; diff --git a/trunk/include/linux/clockchips.h b/trunk/include/linux/clockchips.h index 8a7096fcb01e..acba894374a1 100644 --- a/trunk/include/linux/clockchips.h +++ b/trunk/include/linux/clockchips.h @@ -97,8 +97,6 @@ struct clock_event_device { void (*broadcast)(const struct cpumask *mask); void (*set_mode)(enum clock_event_mode mode, struct clock_event_device *); - void (*suspend)(struct clock_event_device *); - void (*resume)(struct clock_event_device *); unsigned long min_delta_ticks; unsigned long max_delta_ticks; @@ -158,9 +156,6 @@ clockevents_calc_mult_shift(struct clock_event_device *ce, u32 freq, u32 minsec) freq, minsec); } -extern void clockevents_suspend(void); -extern void clockevents_resume(void); - #ifdef CONFIG_GENERIC_CLOCKEVENTS extern void clockevents_notify(unsigned long reason, void *arg); #else @@ -169,9 +164,6 @@ extern void clockevents_notify(unsigned long reason, void *arg); #else /* CONFIG_GENERIC_CLOCKEVENTS_BUILD */ -static inline void clockevents_suspend(void) {} -static inline void clockevents_resume(void) {} - #define clockevents_notify(reason, arg) do { } while (0) #endif diff --git a/trunk/include/linux/device.h b/trunk/include/linux/device.h index 86529e642d6c..52a5f15a2223 100644 --- a/trunk/include/linux/device.h +++ b/trunk/include/linux/device.h @@ -772,13 +772,6 @@ static inline void pm_suspend_ignore_children(struct device *dev, bool enable) dev->power.ignore_children = enable; } -static inline void dev_pm_syscore_device(struct device *dev, bool val) -{ -#ifdef CONFIG_PM_SLEEP - dev->power.syscore = val; -#endif -} - static inline void device_lock(struct device *dev) { mutex_lock(&dev->mutex); diff --git a/trunk/include/linux/opp.h b/trunk/include/linux/opp.h index 214e0ebcb84d..2a4e5faee904 100644 --- a/trunk/include/linux/opp.h +++ b/trunk/include/linux/opp.h @@ -48,14 +48,6 @@ int opp_disable(struct device *dev, unsigned long freq); struct srcu_notifier_head *opp_get_notifier(struct device *dev); -#ifdef CONFIG_OF -int of_init_opp_table(struct device *dev); -#else -static inline int of_init_opp_table(struct device *dev) -{ - return -EINVAL; -} -#endif /* CONFIG_OF */ #else static inline unsigned long opp_get_voltage(struct opp *opp) { diff --git a/trunk/include/linux/pm.h b/trunk/include/linux/pm.h index 44d1f2307dbc..f067e60a3832 100644 --- a/trunk/include/linux/pm.h +++ b/trunk/include/linux/pm.h @@ -510,14 +510,12 @@ struct dev_pm_info { bool is_prepared:1; /* Owned by the PM core */ bool is_suspended:1; /* Ditto */ bool ignore_children:1; - bool early_init:1; /* Owned by the PM core */ spinlock_t lock; #ifdef CONFIG_PM_SLEEP struct list_head entry; struct completion completion; struct wakeup_source *wakeup; bool wakeup_path:1; - bool syscore:1; #else unsigned int should_wakeup:1; #endif diff --git a/trunk/include/linux/pm_domain.h b/trunk/include/linux/pm_domain.h index 7c1d252b20c0..a7d6172922d4 100644 --- a/trunk/include/linux/pm_domain.h +++ b/trunk/include/linux/pm_domain.h @@ -114,6 +114,7 @@ struct generic_pm_domain_data { struct mutex lock; unsigned int refcount; bool need_restore; + bool always_on; }; #ifdef CONFIG_PM_GENERIC_DOMAINS @@ -138,32 +139,36 @@ extern int __pm_genpd_of_add_device(struct device_node *genpd_node, struct device *dev, struct gpd_timing_data *td); -extern int 
__pm_genpd_name_add_device(const char *domain_name, - struct device *dev, - struct gpd_timing_data *td); +static inline int pm_genpd_add_device(struct generic_pm_domain *genpd, + struct device *dev) +{ + return __pm_genpd_add_device(genpd, dev, NULL); +} + +static inline int pm_genpd_of_add_device(struct device_node *genpd_node, + struct device *dev) +{ + return __pm_genpd_of_add_device(genpd_node, dev, NULL); +} extern int pm_genpd_remove_device(struct generic_pm_domain *genpd, struct device *dev); +extern void pm_genpd_dev_always_on(struct device *dev, bool val); extern void pm_genpd_dev_need_restore(struct device *dev, bool val); extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, struct generic_pm_domain *new_subdomain); -extern int pm_genpd_add_subdomain_names(const char *master_name, - const char *subdomain_name); extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, struct generic_pm_domain *target); extern int pm_genpd_add_callbacks(struct device *dev, struct gpd_dev_ops *ops, struct gpd_timing_data *td); extern int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td); -extern int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state); -extern int pm_genpd_name_attach_cpuidle(const char *name, int state); -extern int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd); -extern int pm_genpd_name_detach_cpuidle(const char *name); +extern int genpd_attach_cpuidle(struct generic_pm_domain *genpd, int state); +extern int genpd_detach_cpuidle(struct generic_pm_domain *genpd); extern void pm_genpd_init(struct generic_pm_domain *genpd, struct dev_power_governor *gov, bool is_off); extern int pm_genpd_poweron(struct generic_pm_domain *genpd); -extern int pm_genpd_name_poweron(const char *domain_name); extern bool default_stop_ok(struct device *dev); @@ -184,15 +189,8 @@ static inline int __pm_genpd_add_device(struct generic_pm_domain *genpd, { return -ENOSYS; } -static inline int __pm_genpd_of_add_device(struct device_node *genpd_node, - struct device *dev, - struct gpd_timing_data *td) -{ - return -ENOSYS; -} -static inline int __pm_genpd_name_add_device(const char *domain_name, - struct device *dev, - struct gpd_timing_data *td) +static inline int pm_genpd_add_device(struct generic_pm_domain *genpd, + struct device *dev) { return -ENOSYS; } @@ -201,17 +199,13 @@ static inline int pm_genpd_remove_device(struct generic_pm_domain *genpd, { return -ENOSYS; } +static inline void pm_genpd_dev_always_on(struct device *dev, bool val) {} static inline void pm_genpd_dev_need_restore(struct device *dev, bool val) {} static inline int pm_genpd_add_subdomain(struct generic_pm_domain *genpd, struct generic_pm_domain *new_sd) { return -ENOSYS; } -static inline int pm_genpd_add_subdomain_names(const char *master_name, - const char *subdomain_name) -{ - return -ENOSYS; -} static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, struct generic_pm_domain *target) { @@ -227,19 +221,11 @@ static inline int __pm_genpd_remove_callbacks(struct device *dev, bool clear_td) { return -ENOSYS; } -static inline int pm_genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st) +static inline int genpd_attach_cpuidle(struct generic_pm_domain *genpd, int st) { return -ENOSYS; } -static inline int pm_genpd_name_attach_cpuidle(const char *name, int state) -{ - return -ENOSYS; -} -static inline int pm_genpd_detach_cpuidle(struct generic_pm_domain *genpd) -{ - return -ENOSYS; -} -static inline int pm_genpd_name_detach_cpuidle(const char *name) 
+static inline int genpd_detach_cpuidle(struct generic_pm_domain *genpd) { return -ENOSYS; } @@ -251,10 +237,6 @@ static inline int pm_genpd_poweron(struct generic_pm_domain *genpd) { return -ENOSYS; } -static inline int pm_genpd_name_poweron(const char *domain_name) -{ - return -ENOSYS; -} static inline bool default_stop_ok(struct device *dev) { return false; @@ -263,24 +245,6 @@ static inline bool default_stop_ok(struct device *dev) #define pm_domain_always_on_gov NULL #endif -static inline int pm_genpd_add_device(struct generic_pm_domain *genpd, - struct device *dev) -{ - return __pm_genpd_add_device(genpd, dev, NULL); -} - -static inline int pm_genpd_of_add_device(struct device_node *genpd_node, - struct device *dev) -{ - return __pm_genpd_of_add_device(genpd_node, dev, NULL); -} - -static inline int pm_genpd_name_add_device(const char *domain_name, - struct device *dev) -{ - return __pm_genpd_name_add_device(domain_name, dev, NULL); -} - static inline int pm_genpd_remove_callbacks(struct device *dev) { return __pm_genpd_remove_callbacks(dev, true); @@ -294,20 +258,4 @@ static inline void genpd_queue_power_off_work(struct generic_pm_domain *gpd) {} static inline void pm_genpd_poweroff_unused(void) {} #endif -#ifdef CONFIG_PM_GENERIC_DOMAINS_SLEEP -extern void pm_genpd_syscore_switch(struct device *dev, bool suspend); -#else -static inline void pm_genpd_syscore_switch(struct device *dev, bool suspend) {} -#endif - -static inline void pm_genpd_syscore_poweroff(struct device *dev) -{ - pm_genpd_syscore_switch(dev, true); -} - -static inline void pm_genpd_syscore_poweron(struct device *dev) -{ - pm_genpd_syscore_switch(dev, false); -} - #endif /* _LINUX_PM_DOMAIN_H */ diff --git a/trunk/kernel/power/Kconfig b/trunk/kernel/power/Kconfig index 5dfdc9ea180b..a70518c9d82f 100644 --- a/trunk/kernel/power/Kconfig +++ b/trunk/kernel/power/Kconfig @@ -263,10 +263,6 @@ config PM_GENERIC_DOMAINS bool depends on PM -config PM_GENERIC_DOMAINS_SLEEP - def_bool y - depends on PM_SLEEP && PM_GENERIC_DOMAINS - config PM_GENERIC_DOMAINS_RUNTIME def_bool y depends on PM_RUNTIME && PM_GENERIC_DOMAINS diff --git a/trunk/kernel/power/poweroff.c b/trunk/kernel/power/poweroff.c index 68197a4e8fc9..d52359374e85 100644 --- a/trunk/kernel/power/poweroff.c +++ b/trunk/kernel/power/poweroff.c @@ -37,7 +37,7 @@ static struct sysrq_key_op sysrq_poweroff_op = { .enable_mask = SYSRQ_ENABLE_BOOT, }; -static int __init pm_sysrq_init(void) +static int pm_sysrq_init(void) { register_sysrq_key('o', &sysrq_poweroff_op); return 0; diff --git a/trunk/kernel/power/process.c b/trunk/kernel/power/process.c index 87da817f9e13..19db29f67558 100644 --- a/trunk/kernel/power/process.c +++ b/trunk/kernel/power/process.c @@ -79,7 +79,7 @@ static int try_to_freeze_tasks(bool user_only) /* * We need to retry, but first give the freezing tasks some - * time to enter the refrigerator. + * time to enter the regrigerator. 
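The hunk above sits inside the retry loop of try_to_freeze_tasks(), which
re-checks the freezer state every 10 ms until all tasks are frozen or the
timeout expires. A minimal sketch of that polling pattern, with the
hypothetical all_frozen() standing in for the real to-do count and
end_time for the precomputed deadline:

	while (!all_frozen()) {
		if (time_after(jiffies, end_time))
			return -EBUSY;	/* some tasks never froze */
		msleep(10);	/* give tasks time to enter the refrigerator */
	}
	return 0;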
*/ msleep(10); } diff --git a/trunk/kernel/power/qos.c b/trunk/kernel/power/qos.c index 846bd42c7ed1..6a031e684026 100644 --- a/trunk/kernel/power/qos.c +++ b/trunk/kernel/power/qos.c @@ -139,7 +139,6 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c) default: /* runtime check for not using enum */ BUG(); - return PM_QOS_DEFAULT_VALUE; } } diff --git a/trunk/kernel/time/clockevents.c b/trunk/kernel/time/clockevents.c index 30b6de0d977c..7e1ce012a851 100644 --- a/trunk/kernel/time/clockevents.c +++ b/trunk/kernel/time/clockevents.c @@ -397,30 +397,6 @@ void clockevents_exchange_device(struct clock_event_device *old, local_irq_restore(flags); } -/** - * clockevents_suspend - suspend clock devices - */ -void clockevents_suspend(void) -{ - struct clock_event_device *dev; - - list_for_each_entry_reverse(dev, &clockevent_devices, list) - if (dev->suspend) - dev->suspend(dev); -} - -/** - * clockevents_resume - resume clock devices - */ -void clockevents_resume(void) -{ - struct clock_event_device *dev; - - list_for_each_entry(dev, &clockevent_devices, list) - if (dev->resume) - dev->resume(dev); -} - #ifdef CONFIG_GENERIC_CLOCKEVENTS /** * clockevents_notify - notification about relevant events diff --git a/trunk/kernel/time/timekeeping.c b/trunk/kernel/time/timekeeping.c index 312a675cb240..34e5eac81424 100644 --- a/trunk/kernel/time/timekeeping.c +++ b/trunk/kernel/time/timekeeping.c @@ -773,7 +773,6 @@ static void timekeeping_resume(void) read_persistent_clock(&ts); - clockevents_resume(); clocksource_resume(); write_seqlock_irqsave(&tk->lock, flags); @@ -833,7 +832,6 @@ static int timekeeping_suspend(void) clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL); clocksource_suspend(); - clockevents_suspend(); return 0; }
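For context, the two helpers removed above were called in a fixed order
around the clocksource hooks: on suspend the clock event devices were
quiesced after the clocksource, and on resume they were restored before
it, mirroring each other. A condensed sketch of that pre-revert ordering,
not the kernel's literal code:

	static int timekeeping_suspend_sketch(void)
	{
		clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);
		clocksource_suspend();
		clockevents_suspend();	/* per-device ->suspend() hooks */
		return 0;
	}

	static void timekeeping_resume_sketch(void)
	{
		clockevents_resume();	/* per-device ->resume() hooks */
		clocksource_resume();
	}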