timekeeping: Let timekeeping_cycles_to_ns() handle both under and overflow

For the case !CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE, forego overflow
protection in the range (mask >> 1) < delta <= mask, and interpret it
always as an inconsistency between CPU clock values. That allows
slightly neater code, and it is on a slow path so has no effect on
performance.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240325064023.2997-19-adrian.hunter@intel.com
Adrian Hunter authored and Thomas Gleixner committed Apr 8, 2024
1 parent fcf190c commit 135225a
Showing 1 changed file with 13 additions and 18 deletions.
kernel/time/timekeeping.c
@@ -266,17 +266,14 @@ static inline u64 timekeeping_debug_get_ns(const struct tk_read_base *tkr)
 	 * Try to catch underflows by checking if we are seeing small
 	 * mask-relative negative values.
 	 */
-	if (unlikely((~delta & mask) < (mask >> 3))) {
+	if (unlikely((~delta & mask) < (mask >> 3)))
 		tk->underflow_seen = 1;
-		now = last;
-	}
 
-	/* Cap delta value to the max_cycles values to avoid mult overflows */
-	if (unlikely(delta > max)) {
+	/* Check for multiplication overflows */
+	if (unlikely(delta > max))
 		tk->overflow_seen = 1;
-		now = last + max;
-	}
 
+	/* timekeeping_cycles_to_ns() handles both under and overflow */
 	return timekeeping_cycles_to_ns(tkr, now);
 }
 #else
@@ -375,19 +372,17 @@ static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 cycles)
 	u64 mask = tkr->mask, delta = (cycles - tkr->cycle_last) & mask;
 
 	/*
-	 * This detects the case where the delta overflows the multiplication
-	 * with tkr->mult.
+	 * This detects both negative motion and the case where the delta
+	 * overflows the multiplication with tkr->mult.
 	 */
 	if (unlikely(delta > tkr->clock->max_cycles)) {
-		if (IS_ENABLED(CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE)) {
-			/*
-			 * Handle clocksource inconsistency between CPUs to prevent
-			 * time from going backwards by checking for the MSB of the
-			 * mask being set in the delta.
-			 */
-			if (unlikely(delta & ~(mask >> 1)))
-				return tkr->xtime_nsec >> tkr->shift;
-		}
+		/*
+		 * Handle clocksource inconsistency between CPUs to prevent
+		 * time from going backwards by checking for the MSB of the
+		 * mask being set in the delta.
+		 */
+		if (delta & ~(mask >> 1))
+			return tkr->xtime_nsec >> tkr->shift;
 
 		return delta_to_ns_safe(tkr, delta);
 	}
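
For reference, here is a minimal userspace sketch of the range handling that results from this change. It is not the kernel code: MASK, MAX_CYCLES, MULT, SHIFT and cycles_to_ns_sketch() are invented example parameters and names, and the 128-bit multiply only stands in for the kernel's delta_to_ns_safe() helper.

/*
 * Hypothetical standalone sketch -- not the kernel implementation.
 * MASK, MAX_CYCLES, MULT and SHIFT are invented example parameters;
 * the kernel takes them from struct tk_read_base / struct clocksource.
 */
#include <stdint.h>
#include <stdio.h>

#define MASK		0x00FFFFFFFFFFFFFFULL	/* 56-bit clocksource mask */
#define MAX_CYCLES	(1ULL << 40)		/* example mult-overflow limit */
#define MULT		100ULL
#define SHIFT		8

static uint64_t cycles_to_ns_sketch(uint64_t cycles, uint64_t cycle_last,
				    uint64_t xtime_nsec)
{
	uint64_t delta = (cycles - cycle_last) & MASK;

	if (delta > MAX_CYCLES) {
		/*
		 * MSB of the mask set in delta: interpret it as negative
		 * motion (clock inconsistency between CPUs) and keep time
		 * where it is instead of moving it backwards.
		 */
		if (delta & ~(MASK >> 1))
			return xtime_nsec >> SHIFT;

		/*
		 * Large but plausible forward delta: overflow-safe multiply,
		 * standing in for the kernel's delta_to_ns_safe().
		 */
		return (uint64_t)(((__uint128_t)delta * MULT + xtime_nsec) >> SHIFT);
	}

	return (delta * MULT + xtime_nsec) >> SHIFT;
}

int main(void)
{
	/* Reading 100 after cycle_last 200 wraps to a huge delta: clamped. */
	printf("%llu\n", (unsigned long long)cycles_to_ns_sketch(100, 200, 256));
	/* Normal forward motion: (100 * 100 + 256) >> 8 = 40. */
	printf("%llu\n", (unsigned long long)cycles_to_ns_sketch(300, 200, 256));
	return 0;
}

Either way, a delta with the MSB of the mask set is no longer capped to max_cycles; it is treated as a clocksource inconsistency, independent of CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE.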
