perf, x86: Don't reset the LBR as frequently
If we reset the LBR on each first counter, simple counter rotation, which
first deschedules all counters and then reschedules the new ones, will
lead to an LBR reset even though we're still in the same task context.

Reduce this by not flushing on the first counter, but flushing only when
the task context changes.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: paulus@samba.org
Cc: eranian@google.com
Cc: robert.richter@amd.com
Cc: fweisbec@gmail.com
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra authored and Ingo Molnar committed Mar 10, 2010
1 parent ad0e6cf commit b83a46e
 arch/x86/kernel/cpu/perf_event_intel_lbr.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
@@ -72,12 +72,11 @@ static void intel_pmu_lbr_enable(struct perf_event *event)
 	WARN_ON_ONCE(cpuc->enabled);
 
 	/*
-	 * Reset the LBR stack if this is the first LBR user or
-	 * we changed task context so as to avoid data leaks.
+	 * Reset the LBR stack if we changed task context to
+	 * avoid data leaks.
 	 */
 
-	if (!cpuc->lbr_users ||
-	    (event->ctx->task && cpuc->lbr_context != event->ctx)) {
+	if (event->ctx->task && cpuc->lbr_context != event->ctx) {
 		intel_pmu_lbr_reset();
 		cpuc->lbr_context = event->ctx;
 	}
@@ -93,7 +92,7 @@ static void intel_pmu_lbr_disable(struct perf_event *event)
 		return;
 
 	cpuc->lbr_users--;
-	BUG_ON(cpuc->lbr_users < 0);
+	WARN_ON_ONCE(cpuc->lbr_users < 0);
 
 	if (cpuc->enabled && !cpuc->lbr_users)
 		__intel_pmu_lbr_disable();
