drivers/perf: arm-pmu: Handle per-interrupt affinity mask
On a big.LITTLE system, PMUs can be wired to CPUs using per-CPU
interrupts (PPIs). In this case, it is important to make sure that
the enable/disable operations happen on the right set of CPUs.

So instead of relying on the interrupt-affinity property, we can
use the actual per-CPU affinity that the DT exposes as part of the
interrupt specifier. The DT binding is also updated to reflect the
fact that the interrupt-affinity property shouldn't be used in
that case.

Acked-by: Rob Herring <robh@kernel.org>
Tested-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Marc Zyngier authored and Catalin Marinas committed Jul 8, 2016
1 parent 90f777b commit 19a469a
Showing 2 changed files with 25 additions and 6 deletions.
Documentation/devicetree/bindings/arm/pmu.txt (3 additions, 1 deletion)

@@ -39,7 +39,9 @@ Optional properties:
                    When using a PPI, specifies a list of phandles to CPU
                    nodes corresponding to the set of CPUs which have
                    a PMU of this type signalling the PPI listed in the
-                   interrupts property.
+                   interrupts property, unless this is already specified
+                   by the PPI interrupt specifier itself (in which case
+                   the interrupt-affinity property shouldn't be present).
 
                    This property should be present when there is more than
                    a single SPI.
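
For context, the per-PPI affinity this binding change relies on comes from
partitioned PPIs, where an extra cell in the interrupt specifier points at a
partition node describing a set of CPUs (see the arm,gic-v3 binding's
ppi-partitions node). A minimal, hypothetical .dts sketch — the PPI number,
node names, and CPU phandles are illustrative, not taken from this commit,
and the GIC_PPI/IRQ_TYPE_LEVEL_LOW macros come from the usual
dt-bindings/interrupt-controller/arm-gic.h header:

    gic: interrupt-controller@2c010000 {
            compatible = "arm,gic-v3";
            #interrupt-cells = <4>;
            interrupt-controller;
            /* ... */
            ppi-partitions {
                    part_little: interrupt-partition-0 {
                            affinity = <&cpu0 &cpu1>;
                    };
            };
    };

    pmu {
            compatible = "arm,cortex-a53-pmu";
            /* The 4th cell carries the affinity, so no
             * interrupt-affinity property is needed here. */
            interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW &part_little>;
    };

With such a specifier, the driver can recover the supported CPUs straight
from the interrupt, as the arm_pmu.c hunks below show.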
drivers/perf/arm_pmu.c (22 additions, 5 deletions)

@@ -603,7 +603,8 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)
 
 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
-		on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_disable_percpu_irq, &irq, 1);
 		free_percpu_irq(irq, &hw_events->percpu_pmu);
 	} else {
 		for (i = 0; i < irqs; ++i) {
@@ -645,7 +646,9 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 				irq);
 			return err;
 		}
-		on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
+
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_enable_percpu_irq, &irq, 1);
 	} else {
 		for (i = 0; i < irqs; ++i) {
 			int cpu = i;
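
The driver-side fix in the two hunks above is simply to restrict the IRQ
enable/disable cross-calls to the PMU's own CPUs. A self-contained sketch of
the pattern, assuming kernel context — my_pmu_enable_irq() and its caller are
illustrative names, not from the patch:

    #include <linux/cpumask.h>
    #include <linux/interrupt.h>
    #include <linux/perf/arm_pmu.h>
    #include <linux/smp.h>

    /* Runs on each targeted CPU and enables the per-CPU IRQ locally,
     * mirroring what cpu_pmu_enable_percpu_irq() does in the driver. */
    static void my_pmu_enable_irq(void *data)
    {
    	int irq = *(int *)data;

    	enable_percpu_irq(irq, IRQ_TYPE_NONE);
    }

    static void my_pmu_enable_on_supported(struct arm_pmu *pmu, int irq)
    {
    	/*
    	 * on_each_cpu() would cross-call every online CPU, including
    	 * ones served by a different PMU instance; on_each_cpu_mask()
    	 * limits the call to the CPUs this PMU actually covers.
    	 */
    	on_each_cpu_mask(&pmu->supported_cpus, my_pmu_enable_irq, &irq, 1);
    }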
@@ -961,9 +964,23 @@ static int of_pmu_irq_cfg(struct arm_pmu *pmu)
 		i++;
 	} while (1);
 
-	/* If we didn't manage to parse anything, claim to support all CPUs */
-	if (cpumask_weight(&pmu->supported_cpus) == 0)
-		cpumask_setall(&pmu->supported_cpus);
+	/* If we didn't manage to parse anything, try the interrupt affinity */
+	if (cpumask_weight(&pmu->supported_cpus) == 0) {
+		if (!using_spi) {
+			/* If using PPIs, check the affinity of the partition */
+			int ret, irq;
+
+			irq = platform_get_irq(pdev, 0);
+			ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
+			if (ret) {
+				kfree(irqs);
+				return ret;
+			}
+		} else {
+			/* Otherwise default to all CPUs */
+			cpumask_setall(&pmu->supported_cpus);
+		}
+	}
 
 	/* If we matched up the IRQ affinities, use them to route the SPIs */
 	if (using_spi && i == pdev->num_resources)
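
The fallback in the hunk above leans on irq_get_percpu_devid_partition(),
which copies the percpu IRQ's affinity into the given mask — for a
partitioned PPI that is the partition's CPUs, while a plain PPI reports every
possible CPU. A hedged standalone sketch of that lookup (pmu_mask_from_ppi()
is an illustrative name, and the irq < 0 check is extra hardening the patch
itself doesn't need at this point):

    #include <linux/cpumask.h>
    #include <linux/irq.h>
    #include <linux/platform_device.h>

    /* Fill @mask with the CPUs the device's first (per-CPU) interrupt
     * is wired to; fails if the IRQ is missing or not percpu-devid. */
    static int pmu_mask_from_ppi(struct platform_device *pdev,
    			     struct cpumask *mask)
    {
    	int irq = platform_get_irq(pdev, 0);

    	if (irq < 0)
    		return irq;

    	return irq_get_percpu_devid_partition(irq, mask);
    }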
