workqueue: fix race condition in schedule_on_each_cpu()
Commit 65a6446 ("HWPOISON: Allow schedule_on_each_cpu() from keventd"), which allows schedule_on_each_cpu() to be called from keventd, added a race condition: schedule_on_each_cpu() may race with cpu hotplug and end up executing the function twice on the same cpu.

Fix it by moving direct execution into the section protected by get/put_online_cpus().  While at it, update the code so that direct execution is done after works have been scheduled for all the other cpus, and drop the unnecessary cpu != orig test from the flush loop.
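
For reference, the function that results from this patch reads roughly as follows. This is a sketch reconstructed from the diff below: the local declarations, the alloc_percpu() call, and the closing brace sit outside the displayed hunk and are filled in here as assumptions; the comments are editorial.

int schedule_on_each_cpu(work_func_t func)
{
	int cpu;
	int orig = -1;			/* cpu that will run func directly, if any */
	struct work_struct *works;

	works = alloc_percpu(struct work_struct);	/* assumed; above the hunk */
	if (!works)
		return -ENOMEM;

	get_online_cpus();		/* direct execution is now inside hotplug protection */

	/*
	 * When running in keventd don't schedule a work item on
	 * itself. Can just call directly because the work queue is
	 * already bound. This also is faster.
	 */
	if (current_is_keventd())
		orig = raw_smp_processor_id();

	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		if (cpu != orig)
			schedule_work_on(cpu, work);
	}

	/* direct execution runs after works are queued on all the other cpus */
	if (orig >= 0)
		func(per_cpu_ptr(works, orig));

	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(works, cpu));

	put_online_cpus();
	free_percpu(works);
	return 0;
}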

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tejun Heo authored and Linus Torvalds committed Nov 18, 2009
1 parent e131933 commit 9398180
Showing 1 changed file with 13 additions and 15 deletions.
28 changes: 13 additions & 15 deletions kernel/workqueue.c
@@ -692,31 +692,29 @@ int schedule_on_each_cpu(work_func_t func)
 	if (!works)
 		return -ENOMEM;
 
+	get_online_cpus();
+
 	/*
-	 * when running in keventd don't schedule a work item on itself.
-	 * Can just call directly because the work queue is already bound.
-	 * This also is faster.
-	 * Make this a generic parameter for other workqueues?
+	 * When running in keventd don't schedule a work item on
+	 * itself. Can just call directly because the work queue is
+	 * already bound. This also is faster.
 	 */
-	if (current_is_keventd()) {
+	if (current_is_keventd())
 		orig = raw_smp_processor_id();
-		INIT_WORK(per_cpu_ptr(works, orig), func);
-		func(per_cpu_ptr(works, orig));
-	}
 
-	get_online_cpus();
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);
 
-		if (cpu == orig)
-			continue;
 		INIT_WORK(work, func);
-		schedule_work_on(cpu, work);
-	}
-	for_each_online_cpu(cpu) {
 		if (cpu != orig)
-			flush_work(per_cpu_ptr(works, cpu));
+			schedule_work_on(cpu, work);
 	}
+	if (orig >= 0)
+		func(per_cpu_ptr(works, orig));
+
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(works, cpu));
+
 	put_online_cpus();
 	free_percpu(works);
 	return 0;
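
As a usage note, schedule_on_each_cpu() takes a work_func_t, runs it once on each online cpu, and returns 0 on success or -ENOMEM if the per-cpu work items cannot be allocated. A minimal caller might look like the sketch below; report_cpu and report_all_cpus are hypothetical names, not part of this patch.

#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

/* Hypothetical work function: logs the cpu it executes on. */
static void report_cpu(struct work_struct *unused)
{
	printk(KERN_INFO "schedule_on_each_cpu ran on cpu %d\n",
	       raw_smp_processor_id());
}

/* Must be called from process context: flush_work() may sleep. */
static int report_all_cpus(void)
{
	return schedule_on_each_cpu(report_cpu);
}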