workqueue: Use cpu_possible_mask instead of cpu_active_mask to break affinity

The scheduler no longer breaks affinity for workqueue workers, so the
workqueue code must emulate what the scheduler used to do when it broke
affinity for us: change the worker's cpumask to cpu_possible_mask.

Other CPUs may also come online later while the worker is still running
with pending work items.  As before, the worker should be allowed to use
those newly onlined CPUs and process the work items as soon as possible.
Using cpu_active_mask here cannot achieve that; cpu_possible_mask can.

Fixes: 0624973 ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210111152638.2417-4-jiangshanlai@gmail.com
Lai Jiangshan authored and Peter Zijlstra committed Jan 22, 2021
1 parent 36c6e17 commit 547a77d
kernel/workqueue.c: 1 addition, 1 deletion
@@ -4920,7 +4920,7 @@ static void unbind_workers(int cpu)
 		raw_spin_unlock_irq(&pool->lock);
 
 		for_each_pool_worker(worker, pool)
-			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
 		mutex_unlock(&wq_pool_attach_mutex);

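For context, here is a minimal annotated sketch of how the affinity-breaking
path in unbind_workers() reads after this change.  Only the lines visible in
the hunk above come from the commit; the surrounding pool iteration, locking
structure, and flag updates are assumptions sketched for illustration, not a
verbatim copy of kernel/workqueue.c.

/* Illustrative sketch, not verbatim kernel source. */
static void unbind_workers(int cpu)
{
	struct worker_pool *pool;
	struct worker *worker;

	for_each_cpu_worker_pool(pool, cpu) {	/* assumed: walk the CPU's pools */
		mutex_lock(&wq_pool_attach_mutex);
		raw_spin_lock_irq(&pool->lock);

		/* assumed: mark workers WORKER_UNBOUND, pool POOL_DISASSOCIATED */

		raw_spin_unlock_irq(&pool->lock);

		/*
		 * The scheduler no longer rebinds these tasks when the CPU
		 * goes down, so emulate what it used to do: widen the cpumask
		 * to cpu_possible_mask.  cpu_active_mask would pin the workers
		 * to the CPUs active right now and exclude CPUs that come
		 * online later while work items are still pending.
		 */
		for_each_pool_worker(worker, pool)
			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
							  cpu_possible_mask) < 0);

		mutex_unlock(&wq_pool_attach_mutex);
	}
}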
