rhashtable: Avoid calculating hash again to unlock
Caching the lock pointer avoids having to hash on the object
again to unlock the bucket locks.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Graf authored and David S. Miller committed Mar 16, 2015
Commit 617011e (1 parent: 9f1ab18)
1 changed file: lib/rhashtable.c (5 additions, 6 deletions)
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -384,14 +384,16 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
         struct rhash_head *head;
         bool no_resize_running;
         unsigned hash;
+        spinlock_t *old_lock;
         bool success = true;
 
         rcu_read_lock();
 
         old_tbl = rht_dereference_rcu(ht->tbl, ht);
         hash = head_hashfn(ht, old_tbl, obj);
+        old_lock = bucket_lock(old_tbl, hash);
 
-        spin_lock_bh(bucket_lock(old_tbl, hash));
+        spin_lock_bh(old_lock);
 
         /* Because we have already taken the bucket lock in old_tbl,
          * if we find that future_tbl is not yet visible then that
@@ -428,13 +430,10 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
                 schedule_work(&ht->run_work);
 
 exit:
-        if (tbl != old_tbl) {
-                hash = head_hashfn(ht, tbl, obj);
+        if (tbl != old_tbl)
                 spin_unlock(bucket_lock(tbl, hash));
-        }
 
-        hash = head_hashfn(ht, old_tbl, obj);
-        spin_unlock_bh(bucket_lock(old_tbl, hash));
+        spin_unlock_bh(old_lock);
 
         rcu_read_unlock();
 
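The idea generalizes beyond rhashtable: when a lock is selected by hashing, compute the lock pointer once and reuse it for the unlock, rather than re-hashing (or trusting that the hash variable has not been reused in between). Below is a minimal userspace sketch of the pattern; struct table, hash_object(), table_init(), and insert() are hypothetical stand-ins, not the kernel's bucket_table, head_hashfn(), or __rhashtable_insert():

/* Minimal sketch of the cached-lock-pointer pattern; all names here
 * are illustrative and only stand in for the kernel code above. */
#include <pthread.h>
#include <stdint.h>

#define BUCKETS 64
#define LOCKS   16      /* fewer locks than buckets, as in rhashtable */

struct table {
        pthread_spinlock_t locks[LOCKS];
        void *buckets[BUCKETS];
};

static void table_init(struct table *t)
{
        for (int i = 0; i < LOCKS; i++)
                pthread_spin_init(&t->locks[i], PTHREAD_PROCESS_PRIVATE);
}

/* Stand-in for head_hashfn(): any cheap hash works for illustration. */
static unsigned hash_object(const void *obj)
{
        return ((uintptr_t)obj >> 4) % BUCKETS;
}

/* Stand-in for bucket_lock(): map a bucket hash to its lock. */
static pthread_spinlock_t *bucket_lock(struct table *t, unsigned hash)
{
        return &t->locks[hash % LOCKS];
}

static void insert(struct table *t, void *obj)
{
        unsigned hash = hash_object(obj);
        /* Cache the lock pointer once, at lock time... */
        pthread_spinlock_t *lock = bucket_lock(t, hash);

        pthread_spin_lock(lock);
        t->buckets[hash] = obj;
        /* ...so the unlock needs no second hash_object()/bucket_lock()
         * call, and stays correct even if `hash` is reused meanwhile. */
        pthread_spin_unlock(lock);
}

As in the commit, the win is twofold: the hash is not recomputed on the unlock path, and the unlock cannot pair with the wrong lock if the hash variable was overwritten between lock and unlock (which is exactly why the old exit path had to recompute head_hashfn() for old_tbl after `hash` was reused for the new table).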
