clockevents: Shutdown and unregister current clockevents at CPUHP_AP_TICK_DYING

The way the clockevent devices are finally stopped while a CPU goes offline
is currently chaotic. The sequence of events, in order, is the following
(a rough sketch in call form follows the list):

1) tick_sched_timer_dying() stops the tick and the underlying clockevent,
  but only in the oneshot case. The periodic tick and its related
  clockevent still run.

2) tick_broadcast_offline() detaches and stops the per-cpu oneshot
  broadcast device and appends it to the released list.

3) Some individual clockevent drivers stop the clockevent themselves
  (a second time if the tick is oneshot)

4) Once the CPU is dead, a control CPU remotely detaches and stops
  (a third time in oneshot mode) the CPU's clockevent and adds it to the
  released list.

5) The devices on the released list, i.e. the broadcast device released
   in step 2) and the remotely detached clockevent from step 4), are
   unregistered.
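
As a rough sketch, in call form, the current ordering looks like this
(illustrative only; the real call sites and hotplug states are more involved):

	/* On the dying CPU: */
	tick_sched_timer_dying(cpu);	/* 1) stop the tick and its clockevent (oneshot only)  */
	tick_broadcast_offline(cpu);	/* 2) detach/stop the per-cpu oneshot broadcast device */
	/* 3) some clockevent drivers additionally shut their device down themselves */

	/* Later, from a control CPU, once the CPU is dead: */
	tick_cleanup_dead_cpu(cpu);	/* 4) detach/stop the CPU's clockevent once more       */
					/* 5) unregister the devices on the released list      */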

These scattered operations can be consolidated if the current clockevent is
detached and stopped at the generic layer, entirely from the dying CPU
(a sketch follows the list):

a) Stop the tick
b) Stop/detach the underlying per-cpu oneshot broadcast clockevent
c) Stop/detach the underlying clockevent
d) Release / unregister the clockevents from b) and c)
e) Release / unregister the remaining clockevents belonging to the dying
   CPU. This part can also be performed by the dying CPU itself.
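
Sketched in C, the consolidated dying-CPU path then amounts to the following,
a simplified rendering of the tick_offline_cpu() rework in the
kernel/time/clockevents.c hunk below (step e, dropping the remaining unused
per-cpu devices, is omitted for brevity):

	void tick_offline_cpu(unsigned int cpu)
	{
		struct clock_event_device *dev, *tmp;

		raw_spin_lock(&clockevents_lock);

		tick_broadcast_offline(cpu);	/* b) stop/detach the per-cpu oneshot broadcast */
		tick_shutdown(cpu);		/* c) stop/detach the current clockevent        */

		/* d) unregister the clockevents released by b) and c) */
		list_for_each_entry_safe(dev, tmp, &clockevents_released, list)
			list_del(&dev->list);

		raw_spin_unlock(&clockevents_lock);
	}

Step a), stopping the tick itself, still happens earlier, via
tick_sched_timer_dying() in the CPUHP_AP_TICK_DYING teardown.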

This way the drivers and the tick layer don't need to care about
clockevent operations during CPU hotplug teardown. It also unifies the tick
behaviour on offline CPUs between oneshot and periodic modes, avoiding
offline ticks altogether for sanity.

Adopt the simplification.

[ tglx: Remove the WARN_ON() in clockevents_register_device() as that
  	is called from an upcoming CPU before the CPU is marked online ]

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20241029125451.54574-3-frederic@kernel.org
Frederic Weisbecker authored and Thomas Gleixner committed Oct 31, 2024
1 parent 17a8945 commit 3b1596a
Showing 4 changed files with 12 additions and 25 deletions.
2 changes: 0 additions & 2 deletions include/linux/tick.h
@@ -20,12 +20,10 @@ extern void __init tick_init(void);
 extern void tick_suspend_local(void);
 /* Should be core only, but XEN resume magic and ARM BL switcher require it */
 extern void tick_resume_local(void);
-extern void tick_cleanup_dead_cpu(int cpu);
 #else /* CONFIG_GENERIC_CLOCKEVENTS */
 static inline void tick_init(void) { }
 static inline void tick_suspend_local(void) { }
 static inline void tick_resume_local(void) { }
-static inline void tick_cleanup_dead_cpu(int cpu) { }
 #endif /* !CONFIG_GENERIC_CLOCKEVENTS */
 
 #if defined(CONFIG_GENERIC_CLOCKEVENTS) && defined(CONFIG_HOTPLUG_CPU)

2 changes: 0 additions & 2 deletions kernel/cpu.c
@@ -1338,8 +1338,6 @@ static int takedown_cpu(unsigned int cpu)
 
 	cpuhp_bp_sync_dead(cpu);
 
-	tick_cleanup_dead_cpu(cpu);
-
 	/*
 	 * Callbacks must be re-integrated right away to the RCU state machine.
 	 * Otherwise an RCU callback could block a further teardown function

30 changes: 11 additions & 19 deletions kernel/time/clockevents.c
@@ -618,39 +618,30 @@ void clockevents_resume(void)
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-# ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 /**
- * tick_offline_cpu - Take CPU out of the broadcast mechanism
+ * tick_offline_cpu - Shutdown all clock events related
+ *                    to this CPU and take it out of the
+ *                    broadcast mechanism.
  * @cpu:	The outgoing CPU
  *
- * Called on the outgoing CPU after it took itself offline.
+ * Called by the dying CPU during teardown.
  */
 void tick_offline_cpu(unsigned int cpu)
 {
-	raw_spin_lock(&clockevents_lock);
-	tick_broadcast_offline(cpu);
-	raw_spin_unlock(&clockevents_lock);
-}
-# endif
-
-/**
- * tick_cleanup_dead_cpu - Cleanup the tick and clockevents of a dead cpu
- * @cpu:	The dead CPU
- */
-void tick_cleanup_dead_cpu(int cpu)
-{
 	struct clock_event_device *dev, *tmp;
-	unsigned long flags;
 
-	raw_spin_lock_irqsave(&clockevents_lock, flags);
+	raw_spin_lock(&clockevents_lock);
 
+	tick_broadcast_offline(cpu);
 	tick_shutdown(cpu);
+
 	/*
 	 * Unregister the clock event devices which were
-	 * released from the users in the notify chain.
+	 * released above.
 	 */
 	list_for_each_entry_safe(dev, tmp, &clockevents_released, list)
 		list_del(&dev->list);
+
 	/*
 	 * Now check whether the CPU has left unused per cpu devices
 	 */
@@ -662,7 +653,8 @@ void tick_cleanup_dead_cpu(int cpu)
 			list_del(&dev->list);
 		}
 	}
-	raw_spin_unlock_irqrestore(&clockevents_lock, flags);
+
+	raw_spin_unlock(&clockevents_lock);
 }
 #endif
 
3 changes: 1 addition & 2 deletions kernel/time/tick-internal.h
@@ -25,6 +25,7 @@ extern int tick_do_timer_cpu __read_mostly;
 extern void tick_setup_periodic(struct clock_event_device *dev, int broadcast);
 extern void tick_handle_periodic(struct clock_event_device *dev);
 extern void tick_check_new_device(struct clock_event_device *dev);
+extern void tick_offline_cpu(unsigned int cpu);
 extern void tick_shutdown(unsigned int cpu);
 extern void tick_suspend(void);
 extern void tick_resume(void);
@@ -142,10 +143,8 @@ static inline bool tick_broadcast_oneshot_available(void) { return tick_oneshot_
 #endif /* !(BROADCAST && ONESHOT) */
 
 #if defined(CONFIG_GENERIC_CLOCKEVENTS_BROADCAST) && defined(CONFIG_HOTPLUG_CPU)
-extern void tick_offline_cpu(unsigned int cpu);
 extern void tick_broadcast_offline(unsigned int cpu);
 #else
-static inline void tick_offline_cpu(unsigned int cpu) { }
 static inline void tick_broadcast_offline(unsigned int cpu) { }
 #endif
 
