perf_counter: Fix swcounter context invariance
perf_swcounter_is_counting() takes a lock, which means we cannot
use swcounters from NMI context or while holding that particular
lock; this is unintended.

The patch below removes the lock. This opens up a race window, but
no worse than the one the swcounters already experience due to the
RCU traversal of the context in perf_swcounter_ctx_event().

This also fixes the hard lockups seen while opening a lockdep
tracepoint counter.
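
As an illustration of the failure mode, here is a minimal userspace
sketch; it is not kernel code, and the POSIX spinlock and signal
handler merely stand in for ctx->lock and an NMI. An NMI interrupts
even IRQ-disabled regions, so spin_lock_irqsave() offers no protection
against it: if the NMI fires while the lock is held and then tries to
take that same lock, nothing can ever release it:

	#include <pthread.h>
	#include <signal.h>
	#include <stdio.h>

	static pthread_spinlock_t ctx_lock;	/* stands in for ctx->lock */

	static void nmi_like_handler(int sig)
	{
		(void)sig;
		/* Spins forever: the interrupted code still holds ctx_lock. */
		pthread_spin_lock(&ctx_lock);
		pthread_spin_unlock(&ctx_lock);
	}

	int main(void)
	{
		pthread_spin_init(&ctx_lock, PTHREAD_PROCESS_PRIVATE);
		signal(SIGALRM, nmi_like_handler);

		pthread_spin_lock(&ctx_lock);	/* "ctx->lock" is held */
		raise(SIGALRM);			/* "NMI" arrives: hard lockup */
		pthread_spin_unlock(&ctx_lock);	/* never reached */

		puts("unreachable");
		return 0;
	}

Removing the lock from the check sidesteps this entirely: NMI context
can then call perf_swcounter_is_counting() safely, because the check
only reads state.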

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: stephane eranian <eranian@googlemail.com>
Cc: Corey J Ashford <cjashfor@us.ibm.com>
LKML-Reference: <1250149915.10001.66.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra authored and Ingo Molnar committed Aug 13, 2009
commit bcfc260 (parent: 8fd101f)
Showing 1 changed file with 18 additions and 26 deletions.
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -3444,40 +3444,32 @@ static void perf_swcounter_add(struct perf_counter *counter, u64 nr,
 
 static int perf_swcounter_is_counting(struct perf_counter *counter)
 {
-	struct perf_counter_context *ctx;
-	unsigned long flags;
-	int count;
-
+	/*
+	 * The counter is active, we're good!
+	 */
 	if (counter->state == PERF_COUNTER_STATE_ACTIVE)
 		return 1;
 
+	/*
+	 * The counter is off/error, not counting.
+	 */
 	if (counter->state != PERF_COUNTER_STATE_INACTIVE)
 		return 0;
 
 	/*
-	 * If the counter is inactive, it could be just because
-	 * its task is scheduled out, or because it's in a group
-	 * which could not go on the PMU.  We want to count in
-	 * the first case but not the second.  If the context is
-	 * currently active then an inactive software counter must
-	 * be the second case.  If it's not currently active then
-	 * we need to know whether the counter was active when the
-	 * context was last active, which we can determine by
-	 * comparing counter->tstamp_stopped with ctx->time.
-	 *
-	 * We are within an RCU read-side critical section,
-	 * which protects the existence of *ctx.
+	 * The counter is inactive, if the context is active
+	 * we're part of a group that didn't make it on the 'pmu',
+	 * not counting.
 	 */
-	ctx = counter->ctx;
-	spin_lock_irqsave(&ctx->lock, flags);
-	count = 1;
-	/* Re-check state now we have the lock */
-	if (counter->state < PERF_COUNTER_STATE_INACTIVE ||
-	    counter->ctx->is_active ||
-	    counter->tstamp_stopped < ctx->time)
-		count = 0;
-	spin_unlock_irqrestore(&ctx->lock, flags);
-	return count;
+	if (counter->ctx->is_active)
+		return 0;
+
+	/*
+	 * We're inactive and the context is too, this means the
+	 * task is scheduled out, we're counting events that happen
+	 * to us, like migration events.
+	 */
+	return 1;
 }
 
 static int perf_swcounter_match(struct perf_counter *counter,
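
The lockless check that results from this patch tolerates stale reads
by design. The userspace sketch below mirrors its decision tree with
C11 atomics; all names here (STATE_*, ctx_active, is_counting) are
made up for illustration, and the real kernel state constants differ.
Every load may race with a state transition, but a stale value only
miscounts a single event at the boundary, which is the same staleness
the RCU walk of the context already allows:

	#include <stdatomic.h>
	#include <stdio.h>

	enum { STATE_OFF, STATE_INACTIVE, STATE_ACTIVE };

	struct counter {
		atomic_int state;	/* one of the STATE_* values above */
		atomic_int ctx_active;	/* is the owning context scheduled in? */
	};

	/* Lockless version of the check: every read may be stale. */
	static int is_counting(struct counter *c)
	{
		int state = atomic_load(&c->state);

		if (state == STATE_ACTIVE)
			return 1;	/* on the PMU, definitely counting */
		if (state != STATE_INACTIVE)
			return 0;	/* off/error, definitely not counting */

		/* Inactive while its context runs: lost the PMU to its group. */
		if (atomic_load(&c->ctx_active))
			return 0;

		/* Context inactive too: task is scheduled out, still counting. */
		return 1;
	}

	int main(void)
	{
		struct counter c = { STATE_INACTIVE, 0 };

		printf("task scheduled out: %d\n", is_counting(&c));	/* 1 */
		atomic_store(&c.ctx_active, 1);
		printf("group lost the pmu: %d\n", is_counting(&c));	/* 0 */
		return 0;
	}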
