mm: vmscan: fix endless loop in kswapd balancing
commit 60cefed upstream.

Kswapd does not in all places have the same criteria for a balanced
zone.  Zones are only being reclaimed when their high watermark is
breached, but compaction checks loop over the zonelist again when the
zone does not meet the low watermark plus two times the size of the
allocation.  This gets kswapd stuck in an endless loop over a small
zone, like the DMA zone, where the high watermark is smaller than the
compaction requirement.
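
To make the mismatch concrete, here is a small standalone C sketch (not part of the commit; the zone size and watermark numbers are made up for illustration) contrasting kswapd's high-watermark balance check with the stricter bar applied by compaction_suitable(), roughly the low watermark plus 2 << order free pages:

/*
 * Illustrative userspace sketch: with hypothetical numbers for a small
 * DMA-like zone, the reclaim criterion (high watermark) is already met
 * while the compaction criterion (low watermark plus 2 << order pages)
 * can never be, so the pre-patch kswapd loop never terminates there.
 */
#include <stdio.h>

int main(void)
{
	unsigned long free_pages = 300;	/* hypothetical free pages in a DMA-sized zone */
	unsigned long low_wmark  = 128;	/* hypothetical low watermark */
	unsigned long high_wmark = 192;	/* hypothetical high watermark */
	int order = 9;			/* e.g. a huge-page-sized allocation */

	/* kswapd's balance check: free pages above the high watermark? */
	int reclaim_balanced = free_pages >= high_wmark;

	/* compaction's bar: low watermark plus twice the allocation size */
	unsigned long compact_wmark = low_wmark + (2UL << order);
	int compaction_ok = free_pages >= compact_wmark;

	printf("reclaim says balanced: %d\n", reclaim_balanced);	/* 1 */
	printf("compaction needs %lu free pages, ok: %d\n",
	       compact_wmark, compaction_ok);				/* 1152, 0 */
	return 0;
}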

Add a function, zone_balanced(), that checks the watermark, and, for
higher order allocations, if compaction has enough free memory.  Then
use it uniformly to check for balanced zones.

This makes sure that when the compaction watermark is not met, at least
reclaim happens and progress is made - or the zone is declared
unreclaimable at some point and skipped entirely.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: George Spelvin <linux@horizon.com>
Reported-by: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
Reported-by: Tomas Racek <tracek@redhat.com>
Tested-by: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Johannes Weiner authored and Ben Hutchings committed Dec 6, 2012
1 parent d39c325 commit 39d18dc
27 changes: 18 additions & 9 deletions mm/vmscan.c
@@ -2492,6 +2492,19 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem_cont,
 }
 #endif
 
+static bool zone_balanced(struct zone *zone, int order,
+			  unsigned long balance_gap, int classzone_idx)
+{
+	if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone) +
+				    balance_gap, classzone_idx, 0))
+		return false;
+
+	if (COMPACTION_BUILD && order && !compaction_suitable(zone, order))
+		return false;
+
+	return true;
+}
+
 /*
  * pgdat_balanced is used when checking if a node is balanced for high-order
  * allocations. Only zones that meet watermarks and are in a zone allowed
@@ -2551,8 +2564,7 @@ static bool sleeping_prematurely(pg_data_t *pgdat, int order, long remaining,
 			continue;
 		}
 
-		if (!zone_watermark_ok_safe(zone, order, high_wmark_pages(zone),
-							i, 0))
+		if (!zone_balanced(zone, order, 0, i))
 			all_zones_ok = false;
 		else
 			balanced += zone->present_pages;
@@ -2655,8 +2667,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
 				shrink_active_list(SWAP_CLUSTER_MAX, zone,
 							&sc, priority, 0);
 
-			if (!zone_watermark_ok_safe(zone, order,
-					high_wmark_pages(zone), 0, 0)) {
+			if (!zone_balanced(zone, order, 0, 0)) {
 				end_zone = i;
 				break;
 			} else {
@@ -2717,9 +2728,8 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
 				(zone->present_pages +
 					KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
 				KSWAPD_ZONE_BALANCE_GAP_RATIO);
-			if (!zone_watermark_ok_safe(zone, order,
-					high_wmark_pages(zone) + balance_gap,
-					end_zone, 0)) {
+			if (!zone_balanced(zone, order,
+					   balance_gap, end_zone)) {
 				shrink_zone(priority, zone, &sc);
 
 				reclaim_state->reclaimed_slab = 0;
@@ -2746,8 +2756,7 @@ static unsigned long balance_pgdat(pg_data_t *pgdat, int order,
 				continue;
 			}
 
-			if (!zone_watermark_ok_safe(zone, order,
-					high_wmark_pages(zone), end_zone, 0)) {
+			if (!zone_balanced(zone, order, 0, end_zone)) {
 				all_zones_ok = 0;
 				/*
 				 * We are still under min water mark. This
