SLUB: clean up krealloc
We really do not need all this gaga there.

ksize gives us all the information we need to figure out if the object can
cope with the new size.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Christoph Lameter authored and Linus Torvalds committed May 9, 2007
commit 1f99a28 (parent abcd08a)
Showing 1 changed file with 4 additions and 11 deletions.
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2199,9 +2199,8 @@ EXPORT_SYMBOL(kmem_cache_shrink);
  */
 void *krealloc(const void *p, size_t new_size, gfp_t flags)
 {
-	struct kmem_cache *new_cache;
 	void *ret;
-	struct page *page;
+	size_t ks;
 
 	if (unlikely(!p))
 		return kmalloc(new_size, flags);
@@ -2211,19 +2210,13 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags)
 		return NULL;
 	}
 
-	page = virt_to_head_page(p);
-
-	new_cache = get_slab(new_size, flags);
-
-	/*
-	 * If new size fits in the current cache, bail out.
-	 */
-	if (likely(page->slab == new_cache))
+	ks = ksize(p);
+	if (ks >= new_size)
 		return (void *)p;
 
 	ret = kmalloc(new_size, flags);
 	if (ret) {
-		memcpy(ret, p, min(new_size, ksize(p)));
+		memcpy(ret, p, min(new_size, ks));
 		kfree(p);
 	}
 	return ret;
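For reference, a sketch of how the krealloc() body in mm/slub.c reads once the hunks above are applied. Everything below is reconstructed from the diff; the new_size == 0 branch sits between the two hunks, so it is filled in from its surrounding context lines, and the comments are added here for explanation rather than being part of the patch.

void *krealloc(const void *p, size_t new_size, gfp_t flags)
{
	void *ret;
	size_t ks;

	/* krealloc(NULL, size) behaves like kmalloc(size). */
	if (unlikely(!p))
		return kmalloc(new_size, flags);

	/* Not visible in the hunks above: krealloc(p, 0) frees p. */
	if (unlikely(!new_size)) {
		kfree(p);
		return NULL;
	}

	/*
	 * ksize() reports the usable size of the slab object backing p,
	 * which may be larger than the size originally requested.  If it
	 * already covers new_size, the existing object is reused as is.
	 */
	ks = ksize(p);
	if (ks >= new_size)
		return (void *)p;

	/* Otherwise move the data into a freshly allocated object. */
	ret = kmalloc(new_size, flags);
	if (ret) {
		memcpy(ret, p, min(new_size, ks));
		kfree(p);
	}
	return ret;
}

The cleanup is visible in the middle of the function: because ksize() already reports how much room the existing slab object has, a shrink or a small grow that still fits becomes a plain pointer return, with no need to look up the would-be target cache via get_slab() and compare it against page->slab.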
