perf/core: Explain perf_sched_mutex
To clarify why atomic_inc_return(&perf_sched_events) is not sufficient and
a mutex is needed to order static branch enabling vs the atomic counter
increment, this adds a comment with a short explanation.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170829140103.6563-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Alexander Shishkin authored and Ingo Molnar committed Sep 29, 2017
1 parent 4c4de7d commit 5bce9db
Showing 1 changed file with 5 additions and 0 deletions: kernel/events/core.c
@@ -9394,6 +9394,11 @@ static void account_event(struct perf_event *event)
 		inc = true;
 
 	if (inc) {
+		/*
+		 * We need the mutex here because static_branch_enable()
+		 * must complete *before* the perf_sched_count increment
+		 * becomes visible.
+		 */
 		if (atomic_inc_not_zero(&perf_sched_count))
 			goto enabled;
 
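For context, the comment added above documents the fast path of a two-step pattern whose slow path sits just below this hunk in account_event(). The following is a simplified sketch of that pattern, reconstructed rather than copied from the diff, so details may differ slightly from the exact upstream code:

	if (inc) {
		/* Fast path: the static key is already enabled. */
		if (atomic_inc_not_zero(&perf_sched_count))
			goto enabled;

		/*
		 * Slow path: the mutex guarantees that
		 * static_branch_enable() has fully completed before the
		 * first non-zero perf_sched_count value becomes visible
		 * to the atomic_inc_not_zero() fast path above.
		 */
		mutex_lock(&perf_sched_mutex);
		if (!atomic_read(&perf_sched_count))
			static_branch_enable(&perf_sched_events);
		atomic_inc(&perf_sched_count);
		mutex_unlock(&perf_sched_mutex);
	}
enabled:

With a bare atomic_inc_return() and no mutex, a concurrent account_event() could observe the non-zero count, take the fast path and return while the static branch was still being enabled, so the perf scheduling hooks might not yet fire for its events.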
