mm/memory-failure: check the mapcount of the precise page
A process may map only some of the pages in a folio.  Such a process
might be missed if it maps the poisoned page but not the head page, or
it might be unnecessarily hit if it maps the head page but not the
poisoned page.

Link: https://lkml.kernel.org/r/20231218135837.3310403-3-willy@infradead.org
Fixes: 7af446a ("HWPOISON, hugetlb: enable error handling path for hugepage")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Matthew Wilcox (Oracle) authored and Andrew Morton committed Dec 20, 2023
1 parent 376907f commit c79c5a0
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions mm/memory-failure.c
@@ -1570,7 +1570,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* This check implies we don't kill processes if their pages
* are in the swap cache early. Those are always late kills.
*/
-	if (!page_mapped(hpage))
+	if (!page_mapped(p))
return true;

if (PageSwapCache(p)) {
@@ -1621,10 +1621,10 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
try_to_unmap(folio, ttu);
}

-	unmap_success = !page_mapped(hpage);
+	unmap_success = !page_mapped(p);
if (!unmap_success)
pr_err("%#lx: failed to unmap page (mapcount=%d)\n",
-		       pfn, page_mapcount(hpage));
+		       pfn, page_mapcount(p));

/*
* try_to_unmap() might put mlocked page in lru cache, so call
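For illustration only, here is a minimal user-space sketch of the reasoning in the commit message: it is not kernel code, and page_model, folio_model and the needs_kill_*() helpers are hypothetical stand-ins for the real structures. It shows how a check against the folio's head page can miss a process that maps only the poisoned tail page, while a check against the precise page catches it.

/*
 * Simplified model: a folio of 4 pages, each with its own "mapped" flag
 * standing in for the per-page mapcount of one process.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGES_PER_FOLIO 4

struct page_model {
	bool mapped;	/* does the process map this single page? */
};

struct folio_model {
	struct page_model pages[PAGES_PER_FOLIO];
};

/* Old behaviour: decide based on the folio's head page only. */
static bool needs_kill_old(struct folio_model *f, int poisoned_idx)
{
	(void)poisoned_idx;
	return f->pages[0].mapped;
}

/* New behaviour: decide based on the page that actually took the error. */
static bool needs_kill_new(struct folio_model *f, int poisoned_idx)
{
	return f->pages[poisoned_idx].mapped;
}

int main(void)
{
	/* The process maps only page 2, and page 2 is the poisoned one. */
	struct folio_model f = {
		.pages = { {false}, {false}, {true}, {false} }
	};
	int poisoned = 2;

	printf("head-page check:   %s\n",
	       needs_kill_old(&f, poisoned) ? "kill" : "miss");
	printf("exact-page check:  %s\n",
	       needs_kill_new(&f, poisoned) ? "kill" : "miss");
	return 0;
}

Run as a normal C program, the head-page check reports "miss" for this process even though it maps the poisoned page, while the exact-page check reports "kill"; swapping which page is mapped shows the opposite failure, an unnecessary kill under the old check.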
