From 6cc0359cb665b07f6b89ca258a5942e0806e5e2f Mon Sep 17 00:00:00 2001
From: Yury Norov
Date: Sun, 30 Apr 2023 10:18:05 -0700
Subject: [PATCH] sched/topology: add for_each_numa_{,online}_cpu() macro

for_each_cpu() is widely used in the kernel, and it's beneficial to
create a NUMA-aware version of the macro to improve on node locality.

The recently added for_each_numa_hop_mask() works, but switching the
existing codebase to it is not an easy process.

The new for_each_numa_cpu() is designed to be similar to
for_each_cpu(). It allows converting existing code to a NUMA-aware
version as simply as adding a hop iterator variable and passing it
inside the new macro. for_each_numa_cpu() takes care of the rest.

At the moment, we have two users of NUMA-aware enumerators. One is
Mellanox's in-tree driver, and another is Intel's in-review driver:

https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@intel.com/

Both real-life examples follow the same pattern:

	for_each_numa_hop_mask(cpus, prev, node) {
		for_each_cpu_andnot(cpu, cpus, prev) {
			if (cnt++ == max_num)
				goto out;
			do_something(cpu);
		}
		prev = cpus;
	}

With the new macro, it would look like this:

	for_each_numa_online_cpu(cpu, hop, node) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}

A straight conversion of the existing for_each_cpu() codebase to the
NUMA-aware version with for_each_numa_hop_mask() is difficult because
the latter doesn't take a user-provided cpu mask, and eventually ends
up with an open-coded double loop. With for_each_numa_cpu() it
shouldn't be a brainteaser.
Consider the NUMA-ignorant example:

	cpumask_t cpus = get_mask();
	int cnt = 0, cpu;

	for_each_cpu(cpu, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}

Converting it to the NUMA-aware version is as simple as:

	cpumask_t cpus = get_mask();
	int node = get_node();
	int cnt = 0, hop, cpu;

	rcu_read_lock();
	for_each_numa_cpu(cpu, hop, node, cpus) {
		if (cnt++ == max_num)
			break;
		do_something(cpu);
	}
	rcu_read_unlock();

The latter is a bit more verbose, but it avoids open-coding that
annoying double loop. Another advantage is that it works with a 'hop'
parameter that has the clear meaning of NUMA distance, and doesn't
force people unfamiliar with the enumerator internals to bother with
the current and previous masks machinery.

Signed-off-by: Yury Norov
---
 include/linux/topology.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index da92fea385858..7d878f5f35cfb 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -291,4 +291,28 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
 	     !IS_ERR_OR_NULL(mask);					\
 	     __hops++)
 
+/**
+ * for_each_numa_cpu - iterate over cpus in increasing order taking into account
+ *		       NUMA distances from a given node
+ * @cpu: the (optionally unsigned) integer iterator
+ * @hop: the iterator variable for hops, i.e. proximity order to @node
+ * @node: the NUMA node to start the search from
+ * @mask: the cpumask pointer
+ *
+ * When considered as a replacement for for_each_cpu(), the following should be
+ * taken into consideration:
+ * - Only accessible (i.e. online) CPUs are enumerated.
+ * - The CPUs enumeration may not be a monotonically increasing sequence.
+ *
+ * rcu_read_lock() must be held.
+ */
+#define for_each_numa_cpu(cpu, hop, node, mask)				\
+	for ((cpu) = 0, (hop) = 0;					\
+	     (cpu) = sched_numa_find_next_cpu((mask), (cpu), (node), &(hop)),\
+	     (cpu) < nr_cpu_ids;					\
+	     (cpu)++)
+
+#define for_each_numa_online_cpu(cpu, hop, node)			\
+	for_each_numa_cpu(cpu, hop, node, cpu_online_mask)
+
 #endif /* _LINUX_TOPOLOGY_H */