mm/hugetlb.c: avoid bogus counter of surplus huge page

If we have to hand the newly allocated huge page back to the page allocator
for any reason, the counters that were changed beforehand must be restored.

This affects only s390 at present, since arch_prepare_hugepage() is a no-op
on all other architectures.

Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Hillf Danton authored and Linus Torvalds committed Jan 11, 2012
1 parent 1ebb704 commit ea5768c
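For context: the counters go wrong because alloc_buddy_huge_page() charges
nr_huge_pages and surplus_huge_pages *before* attempting the allocation, and
relies on the locked block at the end of the function to either commit or
roll back that charge. A condensed sketch of the flow follows (abbreviated
from the mm/hugetlb.c of this kernel era, not the verbatim source; the
overcommit check and the NUMA-aware allocation path are trimmed):

	static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
	{
		struct page *page;

		/* Optimistically charge the global counters up front. */
		spin_lock(&hugetlb_lock);
		h->nr_huge_pages++;
		h->surplus_huge_pages++;
		spin_unlock(&hugetlb_lock);

		page = alloc_pages(htlb_alloc_mask | __GFP_COMP |
				   __GFP_REPEAT | __GFP_NOWARN,
				   huge_page_order(h));

		if (page && arch_prepare_hugepage(page)) {
			/* Only s390 implements a non-trivial check here. */
			__free_pages(page, huge_page_order(h));
			return NULL;	/* BUG: skips the rollback below */
		}

		spin_lock(&hugetlb_lock);
		if (page) {
			/* Success: finish setting up the huge page. */
			set_compound_page_dtor(page, free_huge_page);
			h->nr_huge_pages_node[page_to_nid(page)]++;
			h->surplus_huge_pages_node[page_to_nid(page)]++;
		} else {
			/* Failure: undo the optimistic charge. */
			h->nr_huge_pages--;
			h->surplus_huge_pages--;
		}
		spin_unlock(&hugetlb_lock);

		return page;
	}

The early return on the arch_prepare_hugepage() failure path frees the page
but never reaches the rollback branch, leaving both counters permanently
inflated.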
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
 	if (page && arch_prepare_hugepage(page)) {
 		__free_pages(page, huge_page_order(h));
-		return NULL;
+		page = NULL;
 	}
 
 	spin_lock(&hugetlb_lock);
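With the fix, a page rejected by arch_prepare_hugepage() no longer
short-circuits the function: page is NULL by the time control reaches the
spin_lock(&hugetlb_lock) block, so the existing failure branch decrements
the counters and the optimistic charge is undone.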