rhashtable: better high order allocation attempts
When trying to allocate future tables via bucket_table_alloc(), it is
overkill on large table shifts to probe with kzalloc() unconditionally
first, as such a high-order allocation is likely to fail.

Only probe with kzalloc() for more reasonable table sizes and use vzalloc()
either as a fallback on failure or directly in case of large table sizes.

Fixes: 7e1e776 ("lib: Resizable, Scalable, Concurrent Hash Table")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann authored and David S. Miller committed Feb 20, 2015
commit eb6d1ab (1 parent: 342100d)
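
The gate in the patch below is PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER. For
a sense of scale (assuming a common configuration, which the commit itself
does not spell out): with 4 KiB pages and PAGE_ALLOC_COSTLY_ORDER = 3, the
threshold works out to 4096 << 3 = 32 KiB, i.e. roughly 4096 buckets at
8 bytes per bucket pointer. Anything larger now skips kzalloc() entirely
and goes straight to vzalloc().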
Showing 1 changed file with 3 additions and 3 deletions.

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -217,15 +217,15 @@ static void bucket_table_free(const struct bucket_table *tbl)
 static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 					       size_t nbuckets)
 {
-	struct bucket_table *tbl;
+	struct bucket_table *tbl = NULL;
 	size_t size;
 	int i;
 
 	size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);
-	tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
+	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
+		tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
 	if (tbl == NULL)
 		tbl = vzalloc(size);
-
 	if (tbl == NULL)
 		return NULL;
 
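In short, the patched bucket_table_alloc() only attempts a physically
contiguous allocation when it is cheap enough, and otherwise (or on
failure) falls back to vmalloc space. A minimal standalone sketch of that
decision logic follows; the helper name table_zalloc() is ours for
illustration, not the kernel's:

#include <linux/mm.h>      /* PAGE_SIZE, PAGE_ALLOC_COSTLY_ORDER */
#include <linux/slab.h>    /* kzalloc() */
#include <linux/vmalloc.h> /* vzalloc() */

/* Hypothetical helper mirroring the patched allocation path: attempt a
 * physically contiguous kzalloc() only for sizes up to the costly-order
 * threshold, suppressing warnings and retries since vzalloc() serves as
 * the fallback; larger requests go to vzalloc() directly.
 */
static void *table_zalloc(size_t size)
{
	void *p = NULL;

	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
		p = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
	if (p == NULL)
		p = vzalloc(size);
	return p;
}

Since the result may come from either allocator, the matching free path
has to handle both cases, which the kernel's kvfree() does by checking
is_vmalloc_addr(). The __GFP_NORETRY flag keeps the page allocator from
retrying hard on these opportunistic high-order attempts, which is
exactly what you want when a cheap fallback exists.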