slob: reduce list scanning

The version of SLOB in -mm always scans its free list from the beginning,
which results in small allocations and free segments clustering at the
beginning of the list over time.  This causes the average search to scan
over a large stretch at the beginning on each allocation.

By starting each page search where the last one left off, we evenly
distribute the allocations and greatly shorten the average search.
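
This is the classic "next fit" refinement of first fit (the patch cites Knuth vol 1, sec 2.5): remember where the previous search stopped and resume there, so successful allocations spread around the list instead of piling up at its head. The sketch below is a minimal, standalone illustration of that idea, assuming a plain circular singly linked free list; the block, rover, and next_fit_alloc names are invented for the example and are not from mm/slob.c.

/*
 * Illustrative only -- not the SLOB code.  A circular free list with a
 * roving pointer: each search resumes where the last one succeeded
 * ("next fit") instead of always restarting at the head ("first fit").
 */
#include <stddef.h>

struct block {
	size_t		free_units;	/* space still available in this block */
	struct block	*next;		/* circular singly linked list */
};

static struct block *rover;		/* where the previous search left off */

static struct block *next_fit_alloc(size_t units)
{
	struct block *start = rover;
	struct block *b = start;

	if (!b)
		return NULL;			/* list not initialised */

	do {
		if (b->free_units >= units) {
			b->free_units -= units;
			rover = b->next;	/* start the next search here */
			return b;
		}
		b = b->next;
	} while (b != start);			/* wrapped around: nothing fits */

	return NULL;
}

SLOB gets the same behaviour without a separate roving pointer by rotating the head of its free-page list, as the diff below shows.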

Without this patch, kernel compiles on a 1.5GB machine spend a large amount
of system time on list scanning.  With this patch, compile times come within
a few seconds of those of a SLAB kernel, with no notable difference in
system time.

Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Matt Mackall authored and Linus Torvalds committed Jul 22, 2007
Parent: 41f9dc5 · Commit: d626954

 mm/slob.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

--- a/mm/slob.c
+++ b/mm/slob.c
@@ -293,6 +293,7 @@ static void *slob_page_alloc(struct slob_page *sp, size_t size, int align)
 static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 {
 	struct slob_page *sp;
+	struct list_head *prev;
 	slob_t *b = NULL;
 	unsigned long flags;
 
@@ -307,12 +308,22 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 		if (node != -1 && page_to_nid(&sp->page) != node)
 			continue;
 #endif
+		/* Enough room on this page? */
+		if (sp->units < SLOB_UNITS(size))
+			continue;
 
-		if (sp->units >= SLOB_UNITS(size)) {
-			b = slob_page_alloc(sp, size, align);
-			if (b)
-				break;
-		}
+		/* Attempt to alloc */
+		prev = sp->list.prev;
+		b = slob_page_alloc(sp, size, align);
+		if (!b)
+			continue;
+
+		/* Improve fragment distribution and reduce our average
+		 * search time by starting our next search here. (see
+		 * Knuth vol 1, sec 2.5, pg 449) */
+		if (free_slob_pages.next != prev->next)
+			list_move_tail(&free_slob_pages, prev->next);
+		break;
 	}
 	spin_unlock_irqrestore(&slob_lock, flags);
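
A note on the mechanics above: list_move_tail(&free_slob_pages, prev->next) unlinks the free_slob_pages list head and re-inserts it immediately before prev->next, so the next list_for_each_entry() walk over the free list begins at the page this allocation came from (or, if slob_page_alloc() removed that page from the list because it filled up, at its former successor). The free_slob_pages.next != prev->next check simply skips the rotation when the head already sits in that position.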
