perf_counter, x86: speed up the scheduling fast-path
We have to set up the LVT entry only at counter init time, not at
every switch-in time.

There's friction between NMI and non-NMI use here - we'll probably
remove the per-counter configurability of it - but until then, don't
slow things down ...

[ Impact: micro-optimization ]

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar committed May 18, 2009
1 parent c0daaf3 commit b68f1d2
5 changes: 2 additions & 3 deletions arch/x86/kernel/cpu/perf_counter.c
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -285,6 +285,7 @@ static int __hw_perf_counter_init(struct perf_counter *counter)
 			return -EACCES;
 		hwc->nmi = 1;
 	}
+	perf_counters_lapic_init(hwc->nmi);
 
 	if (!hwc->irq_period)
 		hwc->irq_period = x86_pmu.max_period;
@@ -603,8 +604,6 @@ static int x86_pmu_enable(struct perf_counter *counter)
 		hwc->counter_base = x86_pmu.perfctr;
 	}
 
-	perf_counters_lapic_init(hwc->nmi);
-
 	x86_pmu.disable(hwc, idx);
 
 	cpuc->counters[idx] = counter;
@@ -1054,7 +1053,7 @@ void __init init_hw_perf_counters(void)
 
 	pr_info("... counter mask: %016Lx\n", perf_counter_mask);
 
-	perf_counters_lapic_init(0);
+	perf_counters_lapic_init(1);
 	register_die_notifier(&perf_counter_nmi_notifier);
 }

