
Commit

---
r: 338839
b: refs/heads/master
c: aac1cda
h: refs/heads/master
i:
  338837: aabe423
  338835: 2fea834
  338831: a0d0faa
v: v3
Paul E. McKenney committed Nov 16, 2012
1 parent 4127865 commit 31ee53b
Showing 22 changed files with 741 additions and 478 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: af71befa282ddf51c09509978abe1e83de8fe7eb
refs/heads/master: aac1cda34b84a9411d6b8d18c3658f094c834911
2 changes: 1 addition & 1 deletion trunk/Documentation/RCU/RTFP.txt
@@ -186,7 +186,7 @@ Bibtex Entries

@article{Kung80
,author="H. T. Kung and Q. Lehman"
,title="Concurrent Maintenance of Binary Search Trees"
,title="Concurrent Manipulation of Binary Search Trees"
,Year="1980"
,Month="September"
,journal="ACM Transactions on Database Systems"
17 changes: 8 additions & 9 deletions trunk/Documentation/RCU/checklist.txt
@@ -271,15 +271,14 @@ over a rather long period of time, but improvements are always welcome!
The same cautions apply to call_rcu_bh() and call_rcu_sched().

9. All RCU list-traversal primitives, which include
rcu_dereference(), list_for_each_entry_rcu(),
list_for_each_continue_rcu(), and list_for_each_safe_rcu(),
must be either within an RCU read-side critical section or
must be protected by appropriate update-side locks. RCU
read-side critical sections are delimited by rcu_read_lock()
and rcu_read_unlock(), or by similar primitives such as
rcu_read_lock_bh() and rcu_read_unlock_bh(), in which case
the matching rcu_dereference() primitive must be used in order
to keep lockdep happy, in this case, rcu_dereference_bh().
rcu_dereference(), list_for_each_entry_rcu(), and
list_for_each_safe_rcu(), must be either within an RCU read-side
critical section or must be protected by appropriate update-side
locks. RCU read-side critical sections are delimited by
rcu_read_lock() and rcu_read_unlock(), or by similar primitives
such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
case the matching rcu_dereference() primitive must be used in
order to keep lockdep happy, in this case, rcu_dereference_bh().
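
As an illustration of the rule above, here is a minimal reader-side sketch; struct foo, foo_list, foo_find(), foo_update(), and foo_lock are hypothetical names used only for this example and are not part of the patch:

    #include <linux/rculist.h>
    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>

    struct foo {
        struct list_head list;
        int key;
        int data;
    };

    static LIST_HEAD(foo_list);            /* RCU-protected list */
    static DEFINE_SPINLOCK(foo_lock);      /* update-side lock */

    /* Reader: the traversal sits inside an RCU read-side critical section. */
    static int foo_find(int key)
    {
        struct foo *p;
        int ret = -1;

        rcu_read_lock();
        list_for_each_entry_rcu(p, &foo_list, list) {
            if (p->key == key) {
                ret = p->data;
                break;
            }
        }
        rcu_read_unlock();
        return ret;
    }

    /* Updater: holding the update-side lock also makes traversal legal. */
    static void foo_update(int key, int data)
    {
        struct foo *p;

        spin_lock(&foo_lock);
        list_for_each_entry(p, &foo_list, list) {
            if (p->key == key)
                p->data = data;
        }
        spin_unlock(&foo_lock);
    }

If the reader used rcu_read_lock_bh() instead of rcu_read_lock(), the matching dereference flavor would be rcu_dereference_bh(), as the checklist text notes.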

The reason that it is permissible to use RCU list-traversal
primitives when the update-side lock is held is that doing so
2 changes: 1 addition & 1 deletion trunk/Documentation/RCU/listRCU.txt
@@ -205,7 +205,7 @@ RCU ("read-copy update") its name. The RCU code is as follows:
audit_copy_rule(&ne->rule, &e->rule);
ne->rule.action = newaction;
ne->rule.file_count = newfield_count;
list_replace_rcu(e, ne);
list_replace_rcu(&e->list, &ne->list);
call_rcu(&e->rcu, audit_free_rule);
return 0;
}
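
For context on the corrected call: list_replace_rcu() takes pointers to the list_head fields embedded in the old and new elements, not pointers to the enclosing structures, which is what the fixed line above now passes. A rough sketch of the structure layout assumed by the listRCU.txt example (the audit_rule fields here are simplified placeholders, not the real audit code):

    #include <linux/list.h>
    #include <linux/rcupdate.h>

    struct audit_rule {                /* placeholder fields for this sketch */
        int action;
        int file_count;
    };

    struct audit_entry {
        struct list_head  list;        /* what &e->list / &ne->list point at */
        struct rcu_head   rcu;         /* used by call_rcu(&e->rcu, ...)     */
        struct audit_rule rule;
    };

    /* Declared in include/linux/rculist.h:
     *     void list_replace_rcu(struct list_head *old, struct list_head *new);
     */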
61 changes: 59 additions & 2 deletions trunk/Documentation/RCU/rcuref.txt
@@ -20,7 +20,7 @@ release_referenced() delete()
{                                       {
    ...                                     write_lock(&list_lock);
    atomic_dec(&el->rc, relfunc)            ...
    ...                                     delete_element
    ...                                     remove_element
}                                           write_unlock(&list_lock);
                                            ...
                                            if (atomic_dec_and_test(&el->rc))
@@ -52,7 +52,7 @@ release_referenced() delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       delete_element
        call_rcu(&el->head, el_free);       remove_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
@@ -64,3 +64,60 @@ Sometimes, a reference to the element needs to be obtained in the
update (write) stream. In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock. One might instead
use atomic_inc() in such cases.
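
A sketch of that update-side case, reusing the el->rc, list_lock, and search_for_element names from the pseudo-code above (search_and_add_reference() and the element type are invented for illustration). Because list_lock is held, the element cannot be removed concurrently, so its reference count cannot already be zero and no FAIL path is needed:

    static struct element *search_and_add_reference(int key)
    {
        struct element *el;

        spin_lock(&list_lock);
        el = search_for_element(key);   /* ordinary locked lookup */
        if (el)
            atomic_inc(&el->rc);        /* lock pins el, so rc > 0 here;
                                           atomic_inc_not_zero() not needed */
        spin_unlock(&list_lock);
        return el;
    }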

It is not always convenient to deal with "FAIL" in the
search_and_reference() code path. In such cases, the
atomic_dec_and_test() may be moved from delete() to el_free()
as follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    spin_lock(&list_lock);                  ...

    add_element                             rcu_read_unlock();
    ...                                 }
    spin_unlock(&list_lock);            4.
}                                       delete()
3.                                      {
release_referenced()                        spin_lock(&list_lock);
{                                           ...
    ...                                     remove_element
    if (atomic_dec_and_test(&el->rc))       spin_unlock(&list_lock);
        kfree(el);                          ...
    ...                                     call_rcu(&el->head, el_free);
}                                           ...
5.                                      }
void el_free(struct rcu_head *rhp)
{
    release_referenced();
}

The key point is that the initial reference added by add() is not removed
until after a grace period has elapsed following removal. This means that
search_and_reference() cannot find this element, which means that the value
of el->rc cannot increase. Thus, once it reaches zero, there are no
readers that can or ever will be able to reference the element. The
element can therefore safely be freed. This in turn guarantees that if
any reader finds the element, that reader may safely acquire a reference
without checking the value of the reference counter.

In cases where delete() can sleep, synchronize_rcu() can be called from
delete(), so that el_free() can be subsumed into delete as follows:

4.
delete()
{
    spin_lock(&list_lock);
    ...
    remove_element
    spin_unlock(&list_lock);
    ...
    synchronize_rcu();
    if (atomic_dec_and_test(&el->rc))
        kfree(el);
    ...
}
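
For step 5 above, el_free() must first recover the enclosing element from the rcu_head it is handed before dropping the reference that add() installed. The pseudo-code has el_free() simply call release_referenced(); expanding that call and using container_of() gives roughly the following sketch (the element type is illustrative; the head and rc field names follow the examples above):

    static void el_free(struct rcu_head *rhp)
    {
        struct element *el = container_of(rhp, struct element, head);

        /* Drop the initial reference taken by add().  Readers that still
         * hold their own references keep the element alive until they
         * release them via release_referenced(). */
        if (atomic_dec_and_test(&el->rc))
            kfree(el);
    }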