From 27f850880192c19a26d89dab19e2bddcf276032f Mon Sep 17 00:00:00 2001
From: Kajetan Puchalski
Date: Thu, 5 Jan 2023 14:51:58 +0000
Subject: [PATCH 01/13] cpuidle: teo: Optionally skip polling states in teo_find_shallower_state()

Add a no_poll flag to teo_find_shallower_state() that will let the
function optionally not consider polling states.

This allows the caller to guard against the function inadvertently
resulting in TEO putting the CPU in a polling state when that behaviour
is undesirable.

Signed-off-by: Kajetan Puchalski
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/governors/teo.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
index d9262db79cae5..e2864474a98d6 100644
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -258,15 +258,17 @@ static s64 teo_middle_of_bin(int idx, struct cpuidle_driver *drv)
  * @dev: Target CPU.
  * @state_idx: Index of the capping idle state.
  * @duration_ns: Idle duration value to match.
+ * @no_poll: Don't consider polling states.
  */
 static int teo_find_shallower_state(struct cpuidle_driver *drv,
                                     struct cpuidle_device *dev, int state_idx,
-                                    s64 duration_ns)
+                                    s64 duration_ns, bool no_poll)
 {
         int i;

         for (i = state_idx - 1; i >= 0; i--) {
-                if (dev->states_usage[i].disable)
+                if (dev->states_usage[i].disable ||
+                    (no_poll && drv->states[i].flags & CPUIDLE_FLAG_POLLING))
                         continue;

                 state_idx = i;
@@ -469,7 +471,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
          */
         if (idx > idx0 &&
             drv->states[idx].target_residency_ns > delta_tick)
-                idx = teo_find_shallower_state(drv, dev, idx, delta_tick);
+                idx = teo_find_shallower_state(drv, dev, idx, delta_tick, false);
 }

 return idx;

From 9ce0f7c4bc64d820b02a1c53f7e8dba9539f942b Mon Sep 17 00:00:00 2001
From: Kajetan Puchalski
Date: Thu, 5 Jan 2023 14:51:59 +0000
Subject: [PATCH 02/13] cpuidle: teo: Introduce util-awareness
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Modern interactive systems, such as recent Android phones, tend to have
power efficient shallow idle states. Selecting deeper idle states on a
device while a latency-sensitive workload is running can adversely
impact performance due to increased latency. Additionally, if the CPU
wakes up from a deeper sleep before its target residency, as is often
the case, it results in a waste of energy on top of that.

At the moment, none of the available idle governors take any scheduling
information into account. They also tend to overestimate the idle
duration quite often, which causes them to select excessively deep idle
states, thus leading to increased wakeup latency and lower performance
with no power saving. For 'menu' while web browsing on Android, for
instance, those types of wakeups ('too deep') account for over 24% of
all wakeups.

At the same time, on some platforms idle state 0 can be power efficient
enough to warrant wanting to prefer it over idle state 1. This is
because the power usage of the two states can be so close that
sufficient amounts of too deep state 1 sleeps can completely offset the
state 1 power saving to the point where it would've been more power
efficient to just use state 0 instead. This is, of course, for systems
where state 0 is not a polling state, such as arm-based devices.

Sleeps that happened in state 0 while they could have used state 1
('too shallow') only save less power than they otherwise could have.
Too deep sleeps, on the other hand, harm performance and nullify the
potential power saving from using state 1 in the first place. Taking
this into account, it is clear that on balance it is preferable for an
idle governor to have more too shallow sleeps instead of more too deep
sleeps on those kinds of platforms.

This patch specifically tunes TEO to prefer shallower idle states in
order to reduce wakeup latency and achieve better performance. To this
end, before selecting the next idle state it uses the avg_util signal
of a CPU's runqueue in order to determine to what extent the CPU is
being utilized. This util value is then compared to a threshold defined
as a percentage of the CPU's capacity (capacity >> 6, i.e. ~1.5% in the
current implementation). If the util is above the threshold, the index
of the idle state selected by TEO metrics will be reduced by 1, thus
selecting a shallower state. If the util is below the threshold, the
governor defaults to the TEO metrics mechanism to try to select the
deepest available idle state based on the closest timer event and its
own correctness.

The main goal of this is to reduce latency and increase performance for
some workloads. Under some workloads it will result in an increase in
power usage (Geekbench 5) while for other workloads it will also result
in a decrease in power usage compared to TEO (PCMark Web, Jankbench,
Speedometer). It can drastically decrease latency and provide
performance benefits for certain types of workloads that are sensitive
to latency.
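To illustrate the mechanism (this snippet is not part of the patch; it
only restates the check described above using the same helpers the
patch relies on): with the shift of 6, a CPU with capacity 1024 counts
as utilized once its runqueue util exceeds 1024 >> 6 = 16.

        /* Illustrative sketch only, not the governor code itself. */
        unsigned long util_threshold = arch_scale_cpu_capacity(cpu) >> 6;
        bool cpu_is_utilized = sched_cpu_util(cpu) > util_threshold;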
Example test results:

1. GB5 (better score, latency & more power usage)

| metric                                | menu           | teo               | teo-util-aware    |
| ------------------------------------- | -------------- | ----------------- | ----------------- |
| gmean score                           | 2826.5 (0.0%)  | 2764.8 (-2.18%)   | 2865 (1.36%)      |
| gmean power usage [mW]                | 2551.4 (0.0%)  | 2606.8 (2.17%)    | 2722.3 (6.7%)     |
| gmean too deep %                      | 14.99%         | 9.65%             | 4.02%             |
| gmean too shallow %                   | 2.5%           | 5.96%             | 14.59%            |
| gmean task wakeup latency (asynctask) | 78.16μs (0.0%) | 61.60μs (-21.19%) | 54.45μs (-30.34%) |

2. Jankbench (better score, latency & less power usage)

| metric                                | menu           | teo               | teo-util-aware    |
| ------------------------------------- | -------------- | ----------------- | ----------------- |
| gmean frame duration                  | 13.9 (0.0%)    | 14.7 (6.0%)       | 12.6 (-9.0%)      |
| gmean jank percentage                 | 1.5 (0.0%)     | 2.1 (36.99%)      | 1.3 (-17.37%)     |
| gmean power usage [mW]                | 144.6 (0.0%)   | 136.9 (-5.27%)    | 121.3 (-16.08%)   |
| gmean too deep %                      | 26.00%         | 11.00%            | 2.54%             |
| gmean too shallow %                   | 4.74%          | 11.89%            | 21.93%            |
| gmean wakeup latency (RenderThread)   | 139.5μs (0.0%) | 116.5μs (-16.49%) | 91.11μs (-34.7%)  |
| gmean wakeup latency (surfaceflinger) | 124.0μs (0.0%) | 151.9μs (22.47%)  | 87.65μs (-29.33%) |

Signed-off-by: Kajetan Puchalski
[ rjw: Comment edits and white space adjustments ]
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/governors/teo.c | 94 ++++++++++++++++++++++++++++++++-
 1 file changed, 93 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
index e2864474a98d6..987fc5f3997dc 100644
--- a/drivers/cpuidle/governors/teo.c
+++ b/drivers/cpuidle/governors/teo.c
@@ -2,8 +2,13 @@
 /*
  * Timer events oriented CPU idle governor
  *
+ * TEO governor:
  * Copyright (C) 2018 - 2021 Intel Corporation
  * Author: Rafael J. Wysocki
+ *
+ * Util-awareness mechanism:
+ * Copyright (C) 2022 Arm Ltd.
+ * Author: Kajetan Puchalski
  */

 /**
@@ -99,14 +104,55 @@
  * select the given idle state instead of the candidate one.
  *
  * 3. By default, select the candidate state.
+ *
+ * Util-awareness mechanism:
+ *
+ * The idea behind the util-awareness extension is that there are two distinct
+ * scenarios for the CPU which should result in two different approaches to idle
+ * state selection - utilized and not utilized.
+ *
+ * In this case, 'utilized' means that the average runqueue util of the CPU is
+ * above a certain threshold.
+ *
+ * When the CPU is utilized while going into idle, more likely than not it will
+ * be woken up to do more work soon and so a shallower idle state should be
+ * selected to minimise latency and maximise performance. When the CPU is not
+ * being utilized, the usual metrics-based approach to selecting the deepest
+ * available idle state should be preferred to take advantage of the power
+ * saving.
+ *
+ * In order to achieve this, the governor uses a utilization threshold.
+ * The threshold is computed per-CPU as a percentage of the CPU's capacity
+ * by bit shifting the capacity value. Based on testing, the shift of 6 (~1.56%)
+ * seems to be getting the best results.
+ *
+ * Before selecting the next idle state, the governor compares the current CPU
+ * util to the precomputed util threshold. If it's below, it defaults to the
+ * TEO metrics mechanism. If it's above, the closest shallower idle state will
+ * be selected instead, as long as is not a polling state.
  */

 #include <linux/cpuidle.h>
 #include <linux/jiffies.h>
 #include <linux/kernel.h>
+#include <linux/sched.h>
 #include <linux/sched/clock.h>
+#include <linux/sched/topology.h>
 #include <linux/tick.h>

+/*
+ * The number of bits to shift the CPU's capacity by in order to determine
+ * the utilized threshold.
+ *
+ * 6 was chosen based on testing as the number that achieved the best balance
+ * of power and performance on average.
+ *
+ * The resulting threshold is high enough to not be triggered by background
+ * noise and low enough to react quickly when activity starts to ramp up.
+ */
+#define UTIL_THRESHOLD_SHIFT 6
+
+
 /*
  * The PULSE value is added to metrics when they grow and the DECAY_SHIFT value
  * is used for decreasing metrics on a regular basis.
@@ -137,9 +183,11 @@ struct teo_bin {
  * @time_span_ns: Time between idle state selection and post-wakeup update.
  * @sleep_length_ns: Time till the closest timer event (at the selection time).
  * @state_bins: Idle state data bins for this CPU.
- * @total: Grand total of the "intercepts" and "hits" mertics for all bins.
+ * @total: Grand total of the "intercepts" and "hits" metrics for all bins.
  * @next_recent_idx: Index of the next @recent_idx entry to update.
  * @recent_idx: Indices of bins corresponding to recent "intercepts".
+ * @util_threshold: Threshold above which the CPU is considered utilized
+ * @utilized: Whether the last sleep on the CPU happened while utilized
  */
 struct teo_cpu {
         s64 time_span_ns;
@@ -148,10 +196,29 @@ struct teo_cpu {
         unsigned int total;
         int next_recent_idx;
         int recent_idx[NR_RECENT];
+        unsigned long util_threshold;
+        bool utilized;
 };

 static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);

+/**
+ * teo_cpu_is_utilized - Check if the CPU's util is above the threshold
+ * @cpu: Target CPU
+ * @cpu_data: Governor CPU data for the target CPU
+ */
+#ifdef CONFIG_SMP
+static bool teo_cpu_is_utilized(int cpu, struct teo_cpu *cpu_data)
+{
+        return sched_cpu_util(cpu) > cpu_data->util_threshold;
+}
+#else
+static bool teo_cpu_is_utilized(int cpu, struct teo_cpu *cpu_data)
+{
+        return false;
+}
+#endif
+
 /**
  * teo_update - Update CPU metrics after wakeup.
 * @drv: cpuidle driver containing state data.
@@ -323,6 +390,22 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                 goto end;
         }

+        cpu_data->utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
+        /*
+         * If the CPU is being utilized over the threshold and there are only 2
+         * states to choose from, the metrics need not be considered, so choose
+         * the shallowest non-polling state and exit.
+         */
+        if (drv->state_count < 3 && cpu_data->utilized) {
+                for (i = 0; i < drv->state_count; ++i) {
+                        if (!dev->states_usage[i].disable &&
+                            !(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
+                                idx = i;
+                                goto end;
+                        }
+                }
+        }
+
         /*
          * Find the deepest idle state whose target residency does not exceed
          * the current sleep length and the deepest idle state not deeper than
@@ -454,6 +537,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
         if (idx > constraint_idx)
                 idx = constraint_idx;

+        /*
+         * If the CPU is being utilized over the threshold, choose a shallower
+         * non-polling state to improve latency
+         */
+        if (cpu_data->utilized)
+                idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
+
 end:
         /*
          * Don't stop the tick if the selected state is a polling one or if the
@@ -510,9 +600,11 @@ static int teo_enable_device(struct cpuidle_driver *drv,
                              struct cpuidle_device *dev)
 {
         struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+        unsigned long max_capacity = arch_scale_cpu_capacity(dev->cpu);
         int i;

         memset(cpu_data, 0, sizeof(*cpu_data));
+        cpu_data->util_threshold = max_capacity >> UTIL_THRESHOLD_SHIFT;

         for (i = 0; i < NR_RECENT; i++)
                 cpu_data->recent_idx[i] = -1;

From 4edc13ae891ab368a3026cde1dbf263331bc6e77 Mon Sep 17 00:00:00 2001
From: Li RongQing
Date: Fri, 9 Dec 2022 17:25:23 +0800
Subject: [PATCH 03/13] cpuidle-haltpoll: select haltpoll governor

The haltpoll cpuidle driver should select the haltpoll governor, so as
to ensure that they work together.

Signed-off-by: Li RongQing
[ rjw: Changelog edits ]
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index ff71dd662880d..cac5997dca505 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -74,6 +74,7 @@ endmenu
 config HALTPOLL_CPUIDLE
         tristate "Halt poll cpuidle driver"
         depends on X86 && KVM_GUEST
+        select CPU_IDLE_GOV_HALTPOLL
         default y
         help
           This option enables halt poll cpuidle driver, which allows to poll

From 450316dc4f41e857c928dfbcc495c3810d4b1928 Mon Sep 17 00:00:00 2001
From: Richard Fitzgerald
Date: Tue, 13 Dec 2022 15:54:48 +0000
Subject: [PATCH 04/13] PM: runtime: Document that force_suspend() is incompatible with SMART_SUSPEND

pm_runtime_force_suspend() cannot be used with DPM_FLAG_SMART_SUSPEND,
so note this in the kerneldoc.

If DPM_FLAG_SMART_SUSPEND is set and the PM core cannot skip system
resume, it will call pm_runtime_active() on the driver. This can lead
to an inconsistent state where:

  pm_runtime_force_suspend() called ->runtime_suspend but
  device_resume_noirq() called pm_runtime_set_active()

This leaves the driver actually suspended but marked as active.
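For illustration (not part of the patch): the typical pattern this
warning is aimed at is a driver that routes its system sleep callbacks
through the force suspend/resume helpers; such a driver should not also
set DPM_FLAG_SMART_SUSPEND. The foo_*() callbacks below are made-up
placeholders.

        /* Sketch only: system sleep routed through the force helpers. */
        static const struct dev_pm_ops foo_pm_ops = {
                SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
                SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                        pm_runtime_force_resume)
                /*
                 * Do not combine the above with
                 * dev_pm_set_driver_flags(dev, DPM_FLAG_SMART_SUSPEND);
                 */
        };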
Signed-off-by: Richard Fitzgerald
Signed-off-by: Rafael J. Wysocki
---
 drivers/base/power/runtime.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 50e726b6c2cfa..b29be7d4d7d0e 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1864,6 +1864,10 @@ static bool pm_runtime_need_not_resume(struct device *dev)
  * sure the device is put into low power state and it should only be used during
  * system-wide PM transitions to sleep states. It assumes that the analogous
  * pm_runtime_force_resume() will be used to resume the device.
+ *
+ * Do not use with DPM_FLAG_SMART_SUSPEND as this can lead to an inconsistent
+ * state where this function has called the ->runtime_suspend callback but the
+ * PM core marks the driver as runtime active.
  */
 int pm_runtime_force_suspend(struct device *dev)
 {

From 6b37dfcb39f6b7d2e5070181d878a4c373b24bb0 Mon Sep 17 00:00:00 2001
From: Randy Dunlap
Date: Mon, 2 Jan 2023 19:28:40 -0800
Subject: [PATCH 05/13] PM: hibernate: swap: don't use /** for non-kernel-doc comments

kernel-doc complains about multiple occurrences of "/**" being used for
something that is not a kernel-doc comment, so change all of these to
just use "/*" comment style.

The warning message for all of these is:

  FILE:LINE: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst

kernel/power/swap.c:585: warning: ... Structure used for CRC32.
kernel/power/swap.c:600: warning: ... * CRC32 update function that runs in its own thread.
kernel/power/swap.c:627: warning: ... * Structure used for LZO data compression.
kernel/power/swap.c:644: warning: ... * Compression function that runs in its own thread.
kernel/power/swap.c:952: warning: ... * The following functions allow us to read data using a swap map
kernel/power/swap.c:1111: warning: ... * Structure used for LZO data decompression.
kernel/power/swap.c:1127: warning: ... * Decompression function that runs in its own thread.

Also correct one spello/typo.

Signed-off-by: Randy Dunlap
Signed-off-by: Rafael J. Wysocki
---
 kernel/power/swap.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 277434b6c0bfd..36a1df48280c3 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -581,7 +581,7 @@ static int save_image(struct swap_map_handle *handle,
         return ret;
 }

-/**
+/*
  * Structure used for CRC32.
  */
 struct crc_data {
@@ -596,7 +596,7 @@ struct crc_data {
         unsigned char *unc[LZO_THREADS];          /* uncompressed data */
 };

-/**
+/*
  * CRC32 update function that runs in its own thread.
  */
 static int crc32_threadfn(void *data)
@@ -623,7 +623,7 @@ static int crc32_threadfn(void *data)
         }
         return 0;
 }
-/**
+/*
  * Structure used for LZO data compression.
  */
 struct cmp_data {
@@ -640,7 +640,7 @@ struct cmp_data {
         unsigned char wrk[LZO1X_1_MEM_COMPRESS];  /* compression workspace */
 };

-/**
+/*
  * Compression function that runs in its own thread.
  */
 static int lzo_compress_threadfn(void *data)
@@ -948,9 +948,9 @@ int swsusp_write(unsigned int flags)
         return error;
 }

-/**
+/*
  * The following functions allow us to read data using a swap map
- * in a file-alike way
+ * in a file-like way.
  */

 static void release_swap_reader(struct swap_map_handle *handle)
@@ -1107,7 +1107,7 @@ static int load_image(struct swap_map_handle *handle,
         return ret;
 }

-/**
+/*
  * Structure used for LZO data decompression.
  */
 struct dec_data {
@@ -1123,7 +1123,7 @@ struct dec_data {
         unsigned char cmp[LZO_CMP_SIZE];          /* compressed buffer */
 };

-/**
+/*
  * Decompression function that runs in its own thread.
  */
 static int lzo_decompress_threadfn(void *data)

From 716ff71ae234fd3ef1286aac8d8a19a0ed2d509d Mon Sep 17 00:00:00 2001
From: Li RongQing
Date: Fri, 6 Jan 2023 12:03:42 +0800
Subject: [PATCH 06/13] cpuidle-haltpoll: Replace default_idle() with arch_cpu_idle()

When a KVM guest has MWAIT, mwait_idle() is used as the default idle
function. However, the cpuidle-haltpoll driver calls default_idle()
from default_enter_idle() directly and that one uses HLT instead of
MWAIT, which may affect performance adversely, because MWAIT is
preferred to HLT as explained by the changelog of commit aebef63cf7ff
("x86: Remove vendor checks from prefer_mwait_c1_over_halt").

Make default_enter_idle() call arch_cpu_idle(), which can use MWAIT,
instead of default_idle() to address this issue.

Suggested-by: Thomas Gleixner
Suggested-by: Rafael J. Wysocki
Signed-off-by: Li RongQing
[ rjw: Changelog rewrite ]
Signed-off-by: Rafael J. Wysocki
---
 arch/x86/kernel/process.c          | 1 +
 drivers/cpuidle/cpuidle-haltpoll.c | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 40d156a31676e..00a831d56c500 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -721,6 +721,7 @@ void arch_cpu_idle(void)
 {
         x86_idle();
 }
+EXPORT_SYMBOL_GPL(arch_cpu_idle);

 /*
  * We use this if we don't have any better idle routine..
diff --git a/drivers/cpuidle/cpuidle-haltpoll.c b/drivers/cpuidle/cpuidle-haltpoll.c
index 3a39a7f48b771..e66df22f96955 100644
--- a/drivers/cpuidle/cpuidle-haltpoll.c
+++ b/drivers/cpuidle/cpuidle-haltpoll.c
@@ -32,7 +32,7 @@ static int default_enter_idle(struct cpuidle_device *dev,
                 local_irq_enable();
                 return index;
         }
-        default_idle();
+        arch_cpu_idle();
         return index;
 }

From 52e0452b413d885d5ab7e3ae85287d67f5a286b2 Mon Sep 17 00:00:00 2001
From: "Paul E. McKenney"
Date: Thu, 12 Jan 2023 16:11:28 -0800
Subject: [PATCH 07/13] PM: sleep: Remove "select SRCU"

Now that the SRCU Kconfig option is unconditionally selected, there is
no longer any point in selecting it. Therefore, remove the "select SRCU"
Kconfig statements.

Signed-off-by: Paul E. McKenney
Reviewed-by: John Ogness
Signed-off-by: Rafael J. Wysocki
---
 kernel/power/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index 60a1d3051cc79..4b31629c5be4b 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -118,7 +118,6 @@ config PM_SLEEP
         def_bool y
         depends on SUSPEND || HIBERNATE_CALLBACKS
         select PM
-        select SRCU

 config PM_SLEEP_SMP
         def_bool y

From 74528edfbc664f9d2c927c4e5a44f1285598ed0f Mon Sep 17 00:00:00 2001
From: Artem Bityutskiy
Date: Fri, 20 Jan 2023 11:15:28 +0200
Subject: [PATCH 08/13] intel_idle: add Emerald Rapids Xeon support

Emerald Rapids (EMR) is the next Intel Xeon processor after Sapphire
Rapids (SPR). EMR C-states are the same as SPR C-states, and we expect
that EMR C-state characteristics (latency and target residency) will be
the same as in SPR.

Therefore, add EMR support by using SPR C-states table.

Signed-off-by: Artem Bityutskiy
Signed-off-by: Rafael J. Wysocki
---
 drivers/idle/intel_idle.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index cfeb24d40d378..bb3d10099ba44 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -1430,6 +1430,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
         X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L,         &idle_cpu_adl_l),
         X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N,         &idle_cpu_adl_n),
         X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X,    &idle_cpu_spr),
+        X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X,     &idle_cpu_spr),
         X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL,        &idle_cpu_knl),
         X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM,        &idle_cpu_knl),
         X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT,       &idle_cpu_bxt),
@@ -1862,6 +1863,7 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
                 skx_idle_state_table_update();
                 break;
         case INTEL_FAM6_SAPPHIRERAPIDS_X:
+        case INTEL_FAM6_EMERALDRAPIDS_X:
                 spr_idle_state_table_update();
                 break;
         case INTEL_FAM6_ALDERLAKE:

From 7787943a3a8ade6594a68db28c166adbb1d3708c Mon Sep 17 00:00:00 2001
From: Arnd Bergmann
Date: Mon, 6 Feb 2023 20:33:06 +0100
Subject: [PATCH 09/13] cpuidle: add ARCH_SUSPEND_POSSIBLE dependencies

Some ARMv4 processors don't support suspend, which leads to a build
failure with the tegra and qualcomm cpuidle drivers:

  WARNING: unmet direct dependencies detected for ARM_CPU_SUSPEND
    Depends on [n]: ARCH_SUSPEND_POSSIBLE [=n]
    Selected by [y]:
    - ARM_TEGRA_CPUIDLE [=y] && CPU_IDLE [=y] && (ARM [=y] || ARM64) && (ARCH_TEGRA [=n] || COMPILE_TEST [=y]) && !ARM64 && MMU [=y]

  arch/arm/kernel/sleep.o: in function `__cpu_suspend':
  (.text+0x68): undefined reference to `cpu_sa110_suspend_size'
  (.text+0x68): undefined reference to `cpu_fa526_suspend_size'

Add an explicit dependency to make randconfig builds avoid this
combination.

Fixes: faae6c9f2e68 ("cpuidle: tegra: Enable compile testing")
Fixes: a871be6b8eee ("cpuidle: Convert Qualcomm SPM driver to a generic CPUidle driver")
Link: https://lore.kernel.org/all/20211013160125.772873-1-arnd@kernel.org/
Cc: All applicable
Reviewed-by: Dmitry Osipenko
Signed-off-by: Arnd Bergmann
Acked-by: Thierry Reding
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/Kconfig.arm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
index 747aa537389b9..f0714a32921e6 100644
--- a/drivers/cpuidle/Kconfig.arm
+++ b/drivers/cpuidle/Kconfig.arm
@@ -102,6 +102,7 @@ config ARM_MVEBU_V7_CPUIDLE
 config ARM_TEGRA_CPUIDLE
         bool "CPU Idle Driver for NVIDIA Tegra SoCs"
         depends on (ARCH_TEGRA || COMPILE_TEST) && !ARM64 && MMU
+        depends on ARCH_SUSPEND_POSSIBLE
         select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP
         select ARM_CPU_SUSPEND
         help
@@ -110,6 +111,7 @@
 config ARM_QCOM_SPM_CPUIDLE
         bool "CPU Idle Driver for Qualcomm Subsystem Power Manager (SPM)"
         depends on (ARCH_QCOM || COMPILE_TEST) && !ARM64 && MMU
+        depends on ARCH_SUSPEND_POSSIBLE
         select ARM_CPU_SUSPEND
         select CPU_IDLE_MULTIPLE_DRIVERS
         select DT_IDLE_STATES

From e898b07deb3ce801b21113979a05f4b3166c7cb3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20Wei=C3=9Fschuh?=
Date: Tue, 7 Feb 2023 19:55:19 +0000
Subject: [PATCH 10/13] cpuidle: sysfs: make kobj_type structures constant
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.") the
driver core allows the usage of const struct kobj_type.
Take advantage of this to constify the structure definitions to prevent
modification at runtime.

Signed-off-by: Thomas Weißschuh
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/sysfs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
index 2b496a53cbca1..48948b1717494 100644
--- a/drivers/cpuidle/sysfs.c
+++ b/drivers/cpuidle/sysfs.c
@@ -200,7 +200,7 @@ static void cpuidle_sysfs_release(struct kobject *kobj)
         complete(&kdev->kobj_unregister);
 }

-static struct kobj_type ktype_cpuidle = {
+static const struct kobj_type ktype_cpuidle = {
         .sysfs_ops = &cpuidle_sysfs_ops,
         .release = cpuidle_sysfs_release,
 };
@@ -447,7 +447,7 @@ static void cpuidle_state_sysfs_release(struct kobject *kobj)
         complete(&state_obj->kobj_unregister);
 }

-static struct kobj_type ktype_state_cpuidle = {
+static const struct kobj_type ktype_state_cpuidle = {
         .sysfs_ops = &cpuidle_state_sysfs_ops,
         .default_groups = cpuidle_state_default_groups,
         .release = cpuidle_state_sysfs_release,
@@ -594,7 +594,7 @@ static struct attribute *cpuidle_driver_default_attrs[] = {
 };
 ATTRIBUTE_GROUPS(cpuidle_driver_default);

-static struct kobj_type ktype_driver_cpuidle = {
+static const struct kobj_type ktype_driver_cpuidle = {
         .sysfs_ops = &cpuidle_driver_sysfs_ops,
         .default_groups = cpuidle_driver_default_groups,
         .release = cpuidle_driver_sysfs_release,

From 41204a607679ccca7eabff9f2871b969d6ef2ce3 Mon Sep 17 00:00:00 2001
From: "Rafael J. Wysocki"
Date: Tue, 7 Feb 2023 20:59:59 +0100
Subject: [PATCH 11/13] cpuidle: driver: Update microsecond values of state parameters as needed

If the cpuidle driver provides the target residency and exit latency in
nanoseconds, the corresponding values in microseconds need to be set to
reflect the provided numbers in order for the sysfs interface to show
them correctly, so make __cpuidle_driver_init() do that.

Signed-off-by: Rafael J. Wysocki
Tested-by: Artem Bityutskiy
---
 drivers/cpuidle/driver.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c
index f70aa17e2a8e0..d9cda7f6ccb98 100644
--- a/drivers/cpuidle/driver.c
+++ b/drivers/cpuidle/driver.c
@@ -183,11 +183,15 @@ static void __cpuidle_driver_init(struct cpuidle_driver *drv)
                         s->target_residency_ns = s->target_residency * NSEC_PER_USEC;
                 else if (s->target_residency_ns < 0)
                         s->target_residency_ns = 0;
+                else
+                        s->target_residency = div_u64(s->target_residency_ns, NSEC_PER_USEC);

                 if (s->exit_latency > 0)
                         s->exit_latency_ns = s->exit_latency * NSEC_PER_USEC;
                 else if (s->exit_latency_ns < 0)
                         s->exit_latency_ns = 0;
+                else
+                        s->exit_latency = div_u64(s->exit_latency_ns, NSEC_PER_USEC);
         }
 }
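For illustration (not part of the patch): a driver can now describe a
state using only the nanosecond fields, and the microsecond values
reported through sysfs are derived from them by __cpuidle_driver_init().
The state below is a made-up example.

        /* Sketch only: a hypothetical state defined in nanoseconds. */
        static struct cpuidle_state example_state = {
                .name                   = "EX1",       /* made-up name */
                .desc                   = "example state",
                .exit_latency_ns        = 2 * NSEC_PER_USEC,
                .target_residency_ns    = 20 * NSEC_PER_USEC,
        };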
From f9901f64536c39f699e6ed228c5e64e7e7ce8708 Mon Sep 17 00:00:00 2001
From: Krzysztof Kozlowski
Date: Wed, 25 Jan 2023 12:34:18 +0100
Subject: [PATCH 12/13] cpuidle: psci: Do not suspend topology CPUs on PREEMPT_RT

The runtime Power Management of CPU topology is not compatible with
PREEMPT_RT:

1. Core cpuidle path disables IRQs.
2. Core cpuidle calls cpuidle-psci.
3. cpuidle-psci in __psci_enter_domain_idle_state() calls
   pm_runtime_put_sync_suspend() and pm_runtime_get_sync() which use
   spinlocks (which are sleeping on PREEMPT_RT).

Deep sleep modes are not a priority of Realtime kernels because the
latencies might become unpredictable. On the other hand the PSCI CPU
idle power domain is a parent of other devices and power domain
controllers, thus it cannot be simply skipped (e.g. on Qualcomm SM8250).

Disable the idle callbacks in cpuidle-psci and mark the domain as always
on. This is a trade-off between making PREEMPT_RT work and still having
a proper power domain hierarchy in the system.

Signed-off-by: Krzysztof Kozlowski
Reviewed-by: Ulf Hansson
Tested-by: Adrien Thierry
Signed-off-by: Rafael J. Wysocki
---
 drivers/cpuidle/Kconfig.arm           | 8 ++++++++
 drivers/cpuidle/cpuidle-psci-domain.c | 7 +++++--
 drivers/cpuidle/cpuidle-psci.c        | 3 +++
 3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/cpuidle/Kconfig.arm b/drivers/cpuidle/Kconfig.arm
index f0714a32921e6..a1ee475d180da 100644
--- a/drivers/cpuidle/Kconfig.arm
+++ b/drivers/cpuidle/Kconfig.arm
@@ -24,6 +24,14 @@ config ARM_PSCI_CPUIDLE
           It provides an idle driver that is capable of detecting and
           managing idle states through the PSCI firmware interface.

+          The driver has limitations when used with PREEMPT_RT:
+          - If the idle states are described with the non-hierarchical layout,
+            all idle states are still available.
+
+          - If the idle states are described with the hierarchical layout,
+            only the idle states defined per CPU are available, but not the ones
+            being shared among a group of CPUs (aka cluster idle states).
+
 config ARM_PSCI_CPUIDLE_DOMAIN
         bool "PSCI CPU idle Domain"
         depends on ARM_PSCI_CPUIDLE
diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
index c80cf9ddabd8a..6ad2954948a5a 100644
--- a/drivers/cpuidle/cpuidle-psci-domain.c
+++ b/drivers/cpuidle/cpuidle-psci-domain.c
@@ -64,8 +64,11 @@ static int psci_pd_init(struct device_node *np, bool use_osi)

         pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;

-        /* Allow power off when OSI has been successfully enabled. */
-        if (use_osi)
+        /*
+         * Allow power off when OSI has been successfully enabled.
+         * PREEMPT_RT is not yet ready to enter domain idle states.
+         */
+        if (use_osi && !IS_ENABLED(CONFIG_PREEMPT_RT))
                 pd->power_off = psci_pd_power_off;
         else
                 pd->flags |= GENPD_FLAG_ALWAYS_ON;
diff --git a/drivers/cpuidle/cpuidle-psci.c b/drivers/cpuidle/cpuidle-psci.c
index 57bc3e3ae3912..68467a325dcb1 100644
--- a/drivers/cpuidle/cpuidle-psci.c
+++ b/drivers/cpuidle/cpuidle-psci.c
@@ -231,6 +231,9 @@ static int psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
         if (!psci_has_osi_support())
                 return 0;

+        if (IS_ENABLED(CONFIG_PREEMPT_RT))
+                return 0;
+
         data->dev = psci_dt_attach_cpu(cpu);
         if (IS_ERR_OR_NULL(data->dev))
                 return PTR_ERR_OR_ZERO(data->dev);

From 41a337b40e983db4f0e1602308109f2b93687a06 Mon Sep 17 00:00:00 2001
From: Richard Fitzgerald
Date: Mon, 13 Feb 2023 16:50:05 +0000
Subject: [PATCH 13/13] PM: Add EXPORT macros for exporting PM functions

Add a pair of macros for exporting functions only if CONFIG_PM is
enabled. The naming follows the style of the standard EXPORT_SYMBOL_*()
macros that they replace.

Sometimes a module wants to export PM functions directly to other
drivers, not a complete struct dev_pm_ops. A typical example is where a
core library exports the generic (shared) implementation and calling
code wraps one or more of these in custom code.

Signed-off-by: Richard Fitzgerald
Signed-off-by: Rafael J. Wysocki
---
 include/linux/pm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/pm.h b/include/linux/pm.h
index 93cd34f00822c..035d9649eba48 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -379,9 +379,13 @@ const struct dev_pm_ops name = { \
         const struct dev_pm_ops name; \
         __EXPORT_SYMBOL(name, sec, ns); \
         const struct dev_pm_ops name
+#define EXPORT_PM_FN_GPL(name)          EXPORT_SYMBOL_GPL(name)
+#define EXPORT_PM_FN_NS_GPL(name, ns)   EXPORT_SYMBOL_NS_GPL(name, ns)
 #else
 #define _EXPORT_DEV_PM_OPS(name, sec, ns) \
         static __maybe_unused const struct dev_pm_ops __static_##name
+#define EXPORT_PM_FN_GPL(name)
+#define EXPORT_PM_FN_NS_GPL(name, ns)
 #endif

 #define EXPORT_DEV_PM_OPS(name) _EXPORT_DEV_PM_OPS(name, "", "")
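For illustration (not part of the patch), the kind of usage the
changelog describes could look like this; the mylib_/mydrv_ names are
made up.

        /* Core library: export a shared PM helper only when CONFIG_PM is set. */
        int mylib_generic_suspend(struct device *dev)
        {
                /* common power-down sequence shared by several drivers */
                return 0;
        }
        EXPORT_PM_FN_GPL(mylib_generic_suspend);

        /* A driver wraps the shared helper with its own device-specific work. */
        static int mydrv_suspend(struct device *dev)
        {
                /* device-specific preparation would go here */
                return mylib_generic_suspend(dev);
        }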