Commit bee8dc7

---
r: 117006
b: refs/heads/master
c: 5f86515
h: refs/heads/master
v: v3
Lai Jiangshan authored and Ingo Molnar committed Oct 21, 2008
1 parent 9b449c7 commit bee8dc7
Showing 2 changed files with 11 additions and 10 deletions.
[refs] (2 changes: 1 addition & 1 deletion)
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: 8cf7d362c0dc2cfda2146d184eedc32a530c8020
+refs/heads/master: 5f86515158ca86182c1dbecd546f1848121ba135
trunk/kernel/rcupdate.c (19 changes: 10 additions & 9 deletions)
@@ -119,18 +119,19 @@ static void _rcu_barrier(enum rcu_barrier type)
 	/* Take cpucontrol mutex to protect against CPU hotplug */
 	mutex_lock(&rcu_barrier_mutex);
 	init_completion(&rcu_barrier_completion);
-	atomic_set(&rcu_barrier_cpu_count, 0);
 	/*
-	 * The queueing of callbacks in all CPUs must be atomic with
-	 * respect to RCU, otherwise one CPU may queue a callback,
-	 * wait for a grace period, decrement barrier count and call
-	 * complete(), while other CPUs have not yet queued anything.
-	 * So, we need to make sure that grace periods cannot complete
-	 * until all the callbacks are queued.
+	 * Initialize rcu_barrier_cpu_count to 1, then invoke
+	 * rcu_barrier_func() on each CPU, so that each CPU also has
+	 * incremented rcu_barrier_cpu_count. Only then is it safe to
+	 * decrement rcu_barrier_cpu_count -- otherwise the first CPU
+	 * might complete its grace period before all of the other CPUs
+	 * did their increment, causing this function to return too
+	 * early.
 	 */
-	rcu_read_lock();
+	atomic_set(&rcu_barrier_cpu_count, 1);
 	on_each_cpu(rcu_barrier_func, (void *)type, 1);
-	rcu_read_unlock();
+	if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+		complete(&rcu_barrier_completion);
 	wait_for_completion(&rcu_barrier_completion);
 	mutex_unlock(&rcu_barrier_mutex);
 }
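
The new comment in the patch describes a counting pattern: the barrier count starts at 1 (the caller's own reference), each CPU takes its reference before its callback can possibly fire, and only then does the caller drop the initial reference. Below is a minimal user-space sketch of that same pattern, not kernel code: the pthread/C11-atomics names (pending, complete_if_last, worker, NWORKERS) are made up for illustration, standing in for rcu_barrier_cpu_count, complete(), the queued callbacks, and the set of CPUs.

/*
 * Sketch of the "start the count at 1, drop the initial reference last"
 * pattern from the patch, in plain C11 + pthreads. If the count started
 * at 0, an early worker could decrement it back to 0 and signal
 * completion before the other workers were even registered.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NWORKERS 4

static atomic_int pending;                 /* plays the role of rcu_barrier_cpu_count */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond = PTHREAD_COND_INITIALIZER;
static int done;                           /* plays the role of rcu_barrier_completion */

static void complete_if_last(void)
{
	/* The last decrement signals the waiter, like complete() in the patch. */
	if (atomic_fetch_sub(&pending, 1) == 1) {
		pthread_mutex_lock(&done_lock);
		done = 1;
		pthread_cond_signal(&done_cond);
		pthread_mutex_unlock(&done_lock);
	}
}

static void *worker(void *arg)
{
	(void)arg;
	/* ... per-thread work goes here (stands in for the queued RCU callback) ... */
	complete_if_last();
	return NULL;
}

int main(void)
{
	pthread_t tid[NWORKERS];

	/* Start at 1: the caller's own reference, as in the patch. */
	atomic_store(&pending, 1);

	for (int i = 0; i < NWORKERS; i++) {
		/*
		 * Take the worker's reference before it can possibly finish,
		 * mirroring rcu_barrier_func() incrementing the count before
		 * queueing its callback.
		 */
		atomic_fetch_add(&pending, 1);
		pthread_create(&tid[i], NULL, worker, NULL);
	}

	/*
	 * Drop the initial reference; if every worker already finished,
	 * this is the decrement that signals completion.
	 */
	complete_if_last();

	/* Equivalent of wait_for_completion(). */
	pthread_mutex_lock(&done_lock);
	while (!done)
		pthread_cond_wait(&done_cond, &done_lock);
	pthread_mutex_unlock(&done_lock);

	for (int i = 0; i < NWORKERS; i++)
		pthread_join(tid[i], NULL);

	printf("all workers finished\n");
	return 0;
}

The extra initial reference is what makes the decrement-and-test safe: no sequence of worker decrements can reach zero until the dispatcher has finished handing out references and dropped its own.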
