mm: page_alloc: fix zone allocation fairness on UP
The zone allocation batches can easily underflow due to higher-order
allocations or spills to remote nodes.  On SMP that's fine, because
underflows are expected from concurrency and dealt with by returning 0.
But on UP, zone_page_state will just return a wrapped unsigned long,
which will get past the <= 0 check and then consider the zone eligible
until its watermarks are hit.
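
For reference, this is roughly what zone_page_state() looks like in
include/linux/vmstat.h of this era (a simplified sketch, abbreviated for
illustration, not the verbatim kernel source): the clamp that hides
negative values only exists under CONFIG_SMP, so on UP a negative counter
comes back as a huge wrapped unsigned long.

/*
 * Simplified sketch of zone_page_state(), modeled on
 * include/linux/vmstat.h around this release.
 */
static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;	/* SMP: concurrent underflows are expected, clamp them */
#endif
	return x;	/* UP: a negative x wraps to a huge unsigned value */
}

Because the return type is unsigned long, the existing
"zone_page_state(zone, NR_ALLOC_BATCH) <= 0" test can only fire when the
counter is exactly zero; once the batch underflows on UP, the zone is never
marked ZONE_FAIR_DEPLETED.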

Commit 3a02576 ("mm: page_alloc: spill to remote nodes before
waking kswapd") already made the counter-resetting use
atomic_long_read() to accommodate underflows from remote spills, but it
didn't go all the way with it.

Make it clear that these batches are expected to go negative regardless
of concurrency, and use atomic_long_read() everywhere.

Fixes: 81c0a2b ("mm: page_alloc: fair zone allocator policy")
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Leon Romanovsky <leon@leon.nu>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org>	[3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Johannes Weiner authored and Linus Torvalds committed Oct 2, 2014
1 parent 6c72e35 commit abe5f97
7 changes: 3 additions & 4 deletions mm/page_alloc.c
@@ -1612,7 +1612,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 	}
 
 	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
-	if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0 &&
+	if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
 	    !zone_is_fair_depleted(zone))
 		zone_set_flag(zone, ZONE_FAIR_DEPLETED);
 
@@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
 		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
 
 		__mod_zone_page_state(zone, NR_ALLOC_BATCH,
-				      high_wmark_pages(zone) -
-				      low_wmark_pages(zone) -
-				      zone_page_state(zone, NR_ALLOC_BATCH));
+			high_wmark_pages(zone) - low_wmark_pages(zone) -
+			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
 
 		setup_zone_migrate_reserve(zone);
 		spin_unlock_irqrestore(&zone->lock, flags);
