perf: Specialize perf_event_exit_task()
The perf_remove_from_context() usage in __perf_event_exit_task() is
different from the other usages in that this site has already
detached and scheduled out the task context.

This will stand in the way of stronger assertions checking the (task)
context scheduling invariants, so open-code the removal instead: take the
context lock, assert that the context is inactive, and detach and delete
the event directly. (A simplified sketch of the invariant follows the
diff below.)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra authored and Ingo Molnar committed Jan 21, 2016
1 parent 39a4364 · commit 32132a3
Showing 1 changed file with 11 additions and 7 deletions.

kernel/events/core.c (11 additions, 7 deletions)
@@ -8726,7 +8726,13 @@ __perf_event_exit_task(struct perf_event *child_event,
 	 * Do destroy all inherited groups, we don't care about those
 	 * and being thorough is better.
 	 */
-	perf_remove_from_context(child_event, !!child_event->parent);
+	raw_spin_lock_irq(&child_ctx->lock);
+	WARN_ON_ONCE(child_ctx->is_active);
+
+	if (!!child_event->parent)
+		perf_group_detach(child_event);
+	list_del_event(child_event, child_ctx);
+	raw_spin_unlock_irq(&child_ctx->lock);
 
 	/*
 	 * It can happen that the parent exits first, and has events
@@ -8746,17 +8752,15 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 {
 	struct perf_event *child_event, *next;
 	struct perf_event_context *child_ctx, *clone_ctx = NULL;
-	unsigned long flags;
 
 	if (likely(!child->perf_event_ctxp[ctxn]))
 		return;
 
-	local_irq_save(flags);
+	local_irq_disable();
+	WARN_ON_ONCE(child != current);
 	/*
 	 * We can't reschedule here because interrupts are disabled,
-	 * and either child is current or it is a task that can't be
-	 * scheduled, so we are now safe from rescheduling changing
-	 * our context.
+	 * and child must be current.
 	 */
 	child_ctx = rcu_dereference_raw(child->perf_event_ctxp[ctxn]);
 
@@ -8776,7 +8780,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 */
 	clone_ctx = unclone_ctx(child_ctx);
 	update_context_time(child_ctx);
-	raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
+	raw_spin_unlock_irq(&child_ctx->lock);
 
 	if (clone_ctx)
 		put_ctx(clone_ctx);
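
For illustration, here is a minimal userspace sketch of the invariant the specialization relies on: once an exiting task's context has been detached and scheduled out, it cannot become active again, so events can be unlinked under the context lock alone, and the exit path may assert !is_active where the generic perf_remove_from_context() path cannot. Everything below (toy_ctx, toy_remove_generic, toy_remove_on_exit) is a hypothetical stand-in, not kernel code.

/*
 * Toy model of the exit-time removal invariant.  All names are
 * illustrative stand-ins, not the kernel's types or API.
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_ctx {
	pthread_mutex_t lock;	/* stands in for ctx->lock */
	bool is_active;		/* stands in for ctx->is_active */
	int nr_events;		/* stands in for the event lists */
};

/*
 * Generic removal (sketch): an active context may be in use on another
 * CPU, so the real code has to arrange for the removal to run against
 * the owning CPU before touching the event lists.
 */
static void toy_remove_generic(struct toy_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	if (ctx->is_active)
		printf("active context: would defer to the owning CPU\n");
	ctx->nr_events--;
	pthread_mutex_unlock(&ctx->lock);
}

/*
 * Exit-time removal (sketch): the exiting task has already detached and
 * scheduled out its context, so the context cannot become active again;
 * assert that and unlink under the lock alone, which is what the
 * open-coded __perf_event_exit_task() path now does.
 */
static void toy_remove_on_exit(struct toy_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	assert(!ctx->is_active);	/* analogue of WARN_ON_ONCE(child_ctx->is_active) */
	ctx->nr_events--;
	pthread_mutex_unlock(&ctx->lock);
}

int main(void)
{
	struct toy_ctx ctx = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.is_active = false,	/* already scheduled out, as at task exit */
		.nr_events = 2,
	};

	toy_remove_on_exit(&ctx);
	toy_remove_generic(&ctx);
	printf("events left: %d\n", ctx.nr_events);
	return 0;
}

Building with gcc -pthread is enough to run the sketch; the point is only that the assertion in toy_remove_on_exit() is the userspace analogue of the WARN_ON_ONCE() the patch adds to the specialized exit path.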
