x86, perfcounters: read out MSR_CORE_PERF_GLOBAL_STATUS with counters disabled

Impact: make perfcounter NMI and IRQ sequence more robust

Make __smp_perf_counter_interrupt() a bit more conservative: first disable
all counters, then read out the status. Most invocations occur because there
are real events, so there is no performance impact.

The code flow also gets a bit simpler this way.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar committed Dec 8, 2008
1 parent 241771e commit 87b9cf4
Showing 1 changed file with 5 additions and 7 deletions.
arch/x86/kernel/cpu/perf_counter.c (12 changes: 5 additions & 7 deletions)
@@ -383,18 +383,16 @@ static void __smp_perf_counter_interrupt(struct pt_regs *regs, int nmi)
 	struct cpu_hw_counters *cpuc;
 	u64 ack, status;
 
-	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);
-	if (!status) {
-		ack_APIC_irq();
-		return;
-	}
-
 	/* Disable counters globally */
 	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0, 0);
 	ack_APIC_irq();
 
 	cpuc = &per_cpu(cpu_hw_counters, cpu);
 
+	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);
+	if (!status)
+		goto out;
+
 again:
 	ack = status;
 	for_each_bit(bit, (unsigned long *) &status, nr_hw_counters) {
@@ -440,7 +438,7 @@ static void __smp_perf_counter_interrupt(struct pt_regs *regs, int nmi)
 	rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, status);
 	if (status)
 		goto again;
-
+out:
 	/*
 	 * Do not reenable when global enable is off:
 	 */
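
For readers without the surrounding kernel source at hand, below is a minimal, compilable user-space sketch of the sequence this patch establishes: disable everything, ack the interrupt, and only then read and drain the status register. The mock_* helpers are hypothetical stand-ins for rdmsrl()/wrmsr()/ack_APIC_irq() that merely simulate a status word; this illustrates the control flow only and is not the kernel code itself.

/*
 * Illustrative user-space mock of the reordered interrupt sequence.
 * The mock_* helpers are hypothetical stand-ins, not kernel APIs.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t mock_status = 0x5;	/* pretend counters 0 and 2 overflowed */

static uint64_t mock_read_status(void)	{ return mock_status; }
static void mock_clear(uint64_t ack)	{ mock_status &= ~ack; }
static void mock_disable_all(void)	{ printf("disable all counters\n"); }
static void mock_ack_irq(void)		{ printf("ack interrupt\n"); }

static void mock_interrupt(void)
{
	uint64_t ack, status;

	/* New ordering: disable and ack unconditionally first ... */
	mock_disable_all();
	mock_ack_irq();

	/* ... then read the status with the counters already stopped. */
	status = mock_read_status();
	if (!status)
		goto out;
again:
	ack = status;
	for (int bit = 0; bit < 64; bit++) {
		if (status & (1ULL << bit))
			printf("handle overflow of counter %d\n", bit);
	}
	mock_clear(ack);

	/* New events may have arrived meanwhile; repeat until clear. */
	status = mock_read_status();
	if (status)
		goto again;
out:
	printf("re-enable counters (unless globally disabled)\n");
}

int main(void)
{
	mock_interrupt();
	return 0;
}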
