mm,hugetlb: drop clearing of flag from prep_new_huge_page
Pages allocated via the page allocator or CMA get their private field
cleared by post_alloc_hook().
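
For context (not part of the commit itself), the private-field handling in
post_alloc_hook() in mm/page_alloc.c around this kernel release looks roughly
like the abridged sketch below; only the first statement matters for this
change:

/* mm/page_alloc.c -- abridged sketch, later hooks elided */
inline void post_alloc_hook(struct page *page, unsigned int order,
			    gfp_t gfp_flags)
{
	set_page_private(page, 0);	/* zeroes page->private, where the HPG_* bits live */
	set_page_refcounted(page);
	/* ... arch, debug, kasan and page-owner hooks elided ... */
}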

Pages allocated during boot, that is, directly from the memblock
allocator, get their struct page fields cleared by paging_init()-> ..
->memmap_init_zone()-> .. ->__init_single_page() before any memblock
allocation.
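
For context, __init_single_page() zeroes the whole struct page before
initializing it, which covers the private field as well; an abridged sketch of
the relevant part of mm/page_alloc.c:

/* mm/page_alloc.c -- abridged sketch */
static void __meminit __init_single_page(struct page *page, unsigned long pfn,
					 unsigned long zone, int nid)
{
	mm_zero_struct_page(page);	/* memsets the struct page, private included */
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	/* ... remaining per-page initialization elided ... */
}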

On these grounds, remove the clearing of the flag from
prep_new_huge_page(), as it is not needed.  It was a leftover from
commit 6c03714 ("hugetlb: convert PageHugeFreed to HPageFreed
flag").

Previously the explicit clearing was necessary because compound
allocations do not get this initialization (see prep_compound_page).
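
For context, HPageFreed is one of the HPG_* bits kept in the head page's
private field (generated by the HPAGEFLAG() macros in include/linux/hugetlb.h),
so a zeroed private field already implies the flag is clear.  An abridged
sketch of that flag machinery:

/* include/linux/hugetlb.h -- abridged sketch of the HPG_* flags */
enum hugetlb_page_flags {
	HPG_restore_reserve = 0,
	HPG_migratable,
	HPG_temporary,
	HPG_freed,
	__NR_HPAGEFLAGS,
};

/* The HPAGEFLAG() macros expand to bit operations on page->private
 * of the head page; ClearHPageFreed() ends up as roughly: */
static inline void ClearHPageFreed(struct page *page)
{
	clear_bit(HPG_freed, &page->private);
}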

Link: https://lkml.kernel.org/r/20210419075413.1064-4-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Oscar Salvador authored and Linus Torvalds committed May 5, 2021
1 parent c2ad7a1 · commit 9f27b34
Showing 1 changed file with 0 additions and 1 deletion: mm/hugetlb.c
@@ -1494,7 +1494,6 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	spin_lock_irq(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
-	ClearHPageFreed(page);
 	spin_unlock_irq(&hugetlb_lock);
 }
