rcu: Support reclaim for head-less object
Update the kvfree_call_rcu() function with head-less support.
This allows RCU to reclaim objects without an embedded rcu_head.

tree-RCU:
We introduce two chains of arrays, one to store SLAB-backed pointers
and one to store vmalloc pointers. Storing an object in either of
these arrays does not require embedding an rcu_head within the
object.
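
As a rough illustration of the idea only (not the actual bookkeeping
in kernel/rcu/tree.c; the names bulk_node, BULK_ENTRIES and bulk_push
below are made up, and the code is a self-contained user-space
sketch), each chain is a linked list of blocks holding bare pointers:

/*
 * Illustrative sketch -- hypothetical names, user-space code.
 * A chain of blocks that store bare pointers, so the pointed-to
 * objects do not need an embedded rcu_head.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define BULK_ENTRIES 500		/* roughly what fits into one page */

struct bulk_node {
	size_t nr_records;		/* pointers stored in this block */
	struct bulk_node *next;		/* older blocks in the chain */
	void *records[BULK_ENTRIES];	/* pointers queued for reclaim */
};

/* Append @ptr to the chain at *@head; false if no block can be allocated. */
static bool bulk_push(struct bulk_node **head, void *ptr)
{
	struct bulk_node *node = *head;

	if (!node || node->nr_records == BULK_ENTRIES) {
		/* The kernel grabs a page via __get_free_page(GFP_NOWAIT). */
		node = malloc(sizeof(*node));
		if (!node)
			return false;	/* caller falls back to the emergency path */
		node->nr_records = 0;
		node->next = *head;
		*head = node;
	}

	node->records[node->nr_records++] = ptr;
	return true;
}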

Maintaining the arrays may become impossible due to high memory
pressure. For such cases there is an emergency path: objects that do
have an rcu_head embedded are simply queued on a backup rcu_head
list, and that list is drained later. For the head-less variant,
since the current context can sleep, the following emergency measures
are applied (see the sketch after this list):
   a) Synchronously wait until a grace period has elapsed.
   b) Call kvfree().
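
In shorthand, the emergency path that the kernel/rcu/tree.c hunk
below implements looks roughly like this (a simplified sketch only;
the helper name is made up, and locking, per-CPU access and the
debug hooks are omitted):

/* Sketch of the emergency path when no array slot could be obtained. */
static void emergency_path(struct kfree_rcu_cpu *krcp, struct rcu_head *head,
			   rcu_callback_t func, void *ptr)
{
	if (head) {
		/* Object carries an rcu_head: queue it on the backup list. */
		head->func = func;
		head->next = krcp->head;
		krcp->head = head;
		return;
	}

	/* Head-less object: the caller may sleep, so reclaim inline. */
	synchronize_rcu();	/* a) wait for a full grace period */
	kvfree(ptr);		/* b) free the object directly */
}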

tiny-RCU:
For double-argument calls there is no change in behavior. For a
single-argument call, kvfree() is invoked directly on the current
stack after a synchronize_rcu() call. Note that for tiny-RCU a call
to synchronize_rcu() is itself a quiescent state, so it does
essentially nothing.
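
For callers, the intended usage is sketched below (assuming the
single-argument kvfree_rcu() wrapper that the rest of this series
layers on top of kvfree_call_rcu(); struct foo, struct bar and the
field names are only for illustration):

struct foo {
	int data;
	/* no rcu_head needed: reclaimed via the head-less path */
};

struct bar {
	struct rcu_head rcu;
	int data;
};

static void example(struct foo *f, struct bar *b)
{
	/*
	 * Single-argument (head-less) form: valid only in sleepable
	 * context, because under memory pressure it may fall back to
	 * synchronize_rcu() followed by kvfree().
	 */
	kvfree_rcu(f);

	/* Double-argument form: usable from any context, behavior unchanged. */
	kvfree_rcu(b, rcu);
}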

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Co-developed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Uladzislau Rezki (Sony) authored and Paul E. McKenney committed Jun 29, 2020
1 parent ce4dce1 commit 3042f83
Showing 2 changed files with 60 additions and 3 deletions.
include/linux/rcutiny.h (18 changes: 17 additions & 1 deletion)
@@ -34,9 +34,25 @@ static inline void synchronize_rcu_expedited(void)
 	synchronize_rcu();
 }
 
+/*
+ * Add one more declaration of kvfree() here. It is
+ * not so straight forward to just include <linux/mm.h>
+ * where it is defined due to getting many compile
+ * errors caused by that include.
+ */
+extern void kvfree(const void *addr);
+
 static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
-	call_rcu(head, func);
+	if (head) {
+		call_rcu(head, func);
+		return;
+	}
+
+	// kvfree_rcu(one_arg) call.
+	might_sleep();
+	synchronize_rcu();
+	kvfree((void *) func);
 }
 
 void rcu_qs(void);
kernel/rcu/tree.c (45 changes: 43 additions & 2 deletions)
@@ -3314,6 +3314,13 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
 		if (IS_ENABLED(CONFIG_PREEMPT_RT))
 			return false;
 
+		/*
+		 * NOTE: For one argument of kvfree_rcu() we can
+		 * drop the lock and get the page in sleepable
+		 * context. That would allow to maintain an array
+		 * for the CONFIG_PREEMPT_RT as well if no cached
+		 * pages are available.
+		 */
 		bnode = (struct kvfree_rcu_bulk_data *)
 			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
 	}
@@ -3353,27 +3360,50 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
 	unsigned long flags;
 	struct kfree_rcu_cpu *krcp;
+	bool success;
 	void *ptr;
 
+	if (head) {
+		ptr = (void *) head - (unsigned long) func;
+	} else {
+		/*
+		 * Please note there is a limitation for the head-less
+		 * variant, that is why there is a clear rule for such
+		 * objects: it can be used from might_sleep() context
+		 * only. For other places please embed an rcu_head to
+		 * your data.
+		 */
+		might_sleep();
+		ptr = (unsigned long *) func;
+	}
+
 	krcp = krc_this_cpu_lock(&flags);
-	ptr = (void *)head - (unsigned long)func;
 
 	// Queue the object but don't yet schedule the batch.
 	if (debug_rcu_head_queue(ptr)) {
 		// Probable double kfree_rcu(), just leak.
 		WARN_ONCE(1, "%s(): Double-freed call. rcu_head %p\n",
 			  __func__, head);
+
+		// Mark as success and leave.
+		success = true;
 		goto unlock_return;
 	}
 
 	/*
 	 * Under high memory pressure GFP_NOWAIT can fail,
 	 * in that case the emergency path is maintained.
 	 */
-	if (unlikely(!kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr))) {
+	success = kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr);
+	if (!success) {
+		if (head == NULL)
+			// Inline if kvfree_rcu(one_arg) call.
+			goto unlock_return;
+
 		head->func = func;
 		head->next = krcp->head;
 		krcp->head = head;
+		success = true;
 	}
 
 	WRITE_ONCE(krcp->count, krcp->count + 1);
@@ -3387,6 +3417,17 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 
 unlock_return:
 	krc_this_cpu_unlock(krcp, flags);
+
+	/*
+	 * Inline kvfree() after synchronize_rcu(). We can do
+	 * it from might_sleep() context only, so the current
+	 * CPU can pass the QS state.
+	 */
+	if (!success) {
+		debug_rcu_head_unqueue((struct rcu_head *) ptr);
+		synchronize_rcu();
+		kvfree(ptr);
+	}
 }
 EXPORT_SYMBOL_GPL(kvfree_call_rcu);
 
