sched_ext: Relocate check_hotplug_seq() call in scx_ops_enable()
check_hotplug_seq() is used to detect CPU hotplug events that occur while
the BPF scheduler is being loaded, so that initialization can be retried if
a hotplug event takes place before the CPU hotplug callbacks are online.

As such, the best place to call it is in the same cpus_read_lock() section
that enables the CPU hotplug ops. Currently, it is called in the next
cpus_read_lock() block in scx_ops_enable(). The side effect of this
placement is a small window in which hotplug sequence detection can trigger
unnecessarily, which isn't critical.

Move the check_hotplug_seq() invocation to the same cpus_read_lock() block as
the hotplug operation enablement to close the window and get the invocation
out of the way for planned locking updates.
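
To make the mechanism concrete, here is a minimal sketch of how a
hotplug-sequence check of this kind works: the loader records the hotplug
sequence number it saw at init time in ops->hotplug_seq, and the kernel
compares it against the current global count once the hotplug ops are
enabled. The identifiers (scx_hotplug_seq, scx_ops_error(),
SCX_EXIT_UNREG_KERN) follow kernel/sched/ext.c, but the body is a
simplified illustration rather than the verbatim upstream code.

static void check_hotplug_seq(const struct sched_ext_ops *ops)
{
        unsigned long long global_hotplug_seq;

        /* A zero hotplug_seq means the loader didn't record one; skip the check. */
        if (!ops->hotplug_seq)
                return;

        /*
         * A mismatch means a CPU came up or went down after the scheduler was
         * initialized but before the hotplug callbacks were online. Abort the
         * load and let the loader retry.
         */
        global_hotplug_seq = atomic_long_read(&scx_hotplug_seq);
        if (ops->hotplug_seq != global_hotplug_seq)
                scx_ops_error(SCX_EXIT_UNREG_KERN,
                              "cpu hotplug has occurred since scheduler was initialized, retry load");
}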

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: David Vernet <void@manifault.com>
Tejun Heo committed Sep 27, 2024
1 parent 6f34d8d commit 1bbcfe6
Showing 1 changed file with 1 addition and 2 deletions.
kernel/sched/ext.c (1 addition, 2 deletions):

@@ -5050,6 +5050,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
                 if (((void (**)(void))ops)[i])
                         static_branch_enable_cpuslocked(&scx_has_op[i]);
 
+        check_hotplug_seq(ops);
         cpus_read_unlock();
 
         ret = validate_ops(ops);
@@ -5098,8 +5099,6 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
         cpus_read_lock();
         scx_cgroup_lock();
 
-        check_hotplug_seq(ops);
-
         for (i = SCX_OPI_NORMAL_BEGIN; i < SCX_OPI_NORMAL_END; i++)
                 if (((void (**)(void))ops)[i])
                         static_branch_enable_cpuslocked(&scx_has_op[i]);
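
For context, the resulting shape of the first cpus_read_lock() section in
scx_ops_enable() is roughly as follows. This is a simplified sketch: error
handling and the other op-enable loops are elided, and the
SCX_OPI_CPU_HOTPLUG_BEGIN/END bounds are assumed from the surrounding code
in ext.c rather than shown in this diff.

        cpus_read_lock();

        /* Enable the CPU hotplug ops; hotplug events are observable from here on. */
        for (i = SCX_OPI_CPU_HOTPLUG_BEGIN; i < SCX_OPI_CPU_HOTPLUG_END; i++)
                if (((void (**)(void))ops)[i])
                        static_branch_enable_cpuslocked(&scx_has_op[i]);

        /*
         * Still inside the same cpus_read_lock() section: catch any hotplug
         * event that raced with loading before the lock is dropped.
         */
        check_hotplug_seq(ops);
        cpus_read_unlock();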
