mbcache: Avoid nesting of cache->c_list_lock under bit locks
commit 5fc4cbd upstream.

Commit 307af6c ("mbcache: automatically delete entries from cache
on freeing") started nesting cache->c_list_lock under the bit locks
protecting hash buckets of the mbcache hash table in
mb_cache_entry_create(). This causes problems for real-time kernels
because there, spinlocks are sleeping locks while bit locks stay atomic.
Luckily, the nesting is easy to avoid by holding an entry reference until
the entry is added to the LRU list. This makes sure we cannot race with
entry deletion.

Cc: stable@kernel.org
Fixes: 307af6c ("mbcache: automatically delete entries from cache on freeing")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220908091032.10513-1-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara authored and Greg Kroah-Hartman committed Jan 12, 2023
1 parent d50d6c1 commit 99c0759
Showing 1 changed file with 10 additions and 7 deletions.
fs/mbcache.c: 10 additions & 7 deletions

--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -90,8 +90,14 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		return -ENOMEM;
 
 	INIT_LIST_HEAD(&entry->e_list);
-	/* Initial hash reference */
-	atomic_set(&entry->e_refcnt, 1);
+	/*
+	 * We create entry with two references. One reference is kept by the
+	 * hash table, the other reference is used to protect us from
+	 * mb_cache_entry_delete_or_get() until the entry is fully setup. This
+	 * avoids nesting of cache->c_list_lock into hash table bit locks which
+	 * is problematic for RT.
+	 */
+	atomic_set(&entry->e_refcnt, 2);
 	entry->e_key = key;
 	entry->e_value = value;
 	entry->e_flags = 0;
@@ -107,15 +113,12 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
 		}
 	}
 	hlist_bl_add_head(&entry->e_hash_list, head);
-	/*
-	 * Add entry to LRU list before it can be found by
-	 * mb_cache_entry_delete() to avoid races
-	 */
+	hlist_bl_unlock(head);
 	spin_lock(&cache->c_list_lock);
 	list_add_tail(&entry->e_list, &cache->c_list);
 	cache->c_entry_count++;
 	spin_unlock(&cache->c_list_lock);
-	hlist_bl_unlock(head);
+	mb_cache_entry_put(cache, entry);
 
 	return 0;
 }
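
To make the lock ordering concrete outside the kernel tree, here is a minimal userspace C sketch of the same pattern. Everything in it (toy_entry, bucket_lock, list_lock, toy_entry_create) is a made-up analogue, not the mbcache API: the atomic flag models the hash-bucket bit lock (which stays a true spinning lock even on PREEMPT_RT), and the pthread mutex models cache->c_list_lock (which becomes a sleeping lock on RT). The extra reference taken at creation is what allows dropping the bit lock before taking the sleeping lock without racing with a concurrent delete.

/*
 * Toy userspace analogue of the patch, for illustration only.
 * All identifiers are hypothetical; this is not the mbcache code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct toy_entry {
	atomic_int refcnt;
	struct toy_entry *lru_next;	/* toy LRU list link */
};

static atomic_flag bucket_lock = ATOMIC_FLAG_INIT;	/* models the bit lock */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER; /* models c_list_lock */
static struct toy_entry *lru_head;

static void toy_entry_put(struct toy_entry *e)
{
	/* Free the entry once the last reference is dropped. */
	if (atomic_fetch_sub(&e->refcnt, 1) == 1)
		free(e);
}

static int toy_entry_create(void)
{
	struct toy_entry *e = calloc(1, sizeof(*e));

	if (!e)
		return -1;
	/*
	 * Two references: one owned by the hash table, one pinning the
	 * entry for us so a concurrent delete cannot free it before we
	 * have linked it into the LRU list.
	 */
	atomic_store(&e->refcnt, 2);

	while (atomic_flag_test_and_set(&bucket_lock))
		;	/* spin: this lock never sleeps, like a bit lock */
	/* ... hash table insertion would go here ... */
	atomic_flag_clear(&bucket_lock);	/* drop the "bit lock" first */

	/* The sleeping lock is now taken outside the atomic section. */
	pthread_mutex_lock(&list_lock);
	e->lru_next = lru_head;
	lru_head = e;
	pthread_mutex_unlock(&list_lock);

	toy_entry_put(e);	/* drop our setup reference */
	return 0;
}

int main(void)
{
	return toy_entry_create();
}

The ordering mirrors the patch: the spinning lock is released before the sleeping lock is acquired, and the setup reference is dropped only after the LRU insertion completes.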
