mm: mmu_gather: use tlb->end != 0 only for TLB invalidation
When batching up address ranges for TLB invalidation, we check tlb->end
!= 0 to indicate that some pages have actually been unmapped.

As of commit f045bbb ("mmu_gather: fix over-eager
tlb_flush_mmu_free() calling"), we use the same check for freeing these
pages in order to avoid a performance regression where we call
free_pages_and_swap_cache even when no pages are actually queued up.

Unfortunately, the range could have been reset (tlb->end = 0) by
tlb_end_vma, which has been shown to cause memory leaks on arm64.
Furthermore, investigation into these leaks revealed that the fullmm
case on task exit no longer invalidates the TLB, by virtue of tlb->end
== 0 (in 3.18, need_flush would have been set).
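
To make the leak concrete, here is a minimal userspace sketch of the
pre-patch flow (my illustration, not kernel code: struct mmu_gather is
reduced to the fields that matter, and a bare nr counter stands in for
the page batches):

#include <stdio.h>

/* Reduced stand-in for struct mmu_gather (the real one, with its
 * batch list of struct page pointers, is in include/asm-generic/tlb.h). */
struct mmu_gather {
	unsigned long start, end;	/* batched invalidation range */
	unsigned int nr;		/* pages queued for freeing */
};

/* Pre-patch tlb_flush_mmu(): one tlb->end check gated BOTH the TLB
 * invalidation and the page freeing. */
static void tlb_flush_mmu_prepatch(struct mmu_gather *tlb)
{
	if (!tlb->end)
		return;			/* queued pages never get freed */
	printf("flush [%#lx, %#lx), free %u pages\n",
	       tlb->start, tlb->end, tlb->nr);
	tlb->nr = 0;
}

int main(void)
{
	struct mmu_gather tlb = { .start = 0x1000, .end = 0x2000, .nr = 3 };

	tlb.end = 0;			/* what tlb_end_vma()'s range reset does */
	tlb_flush_mmu_prepatch(&tlb);
	printf("still queued (leaked): %u\n", tlb.nr);	/* prints 3 */
	return 0;
}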

This patch resolves the problem by reverting commit f045bbb, using
instead tlb->local.nr as the predicate for page freeing in
tlb_flush_mmu_free and ensuring that tlb->end is initialised to a
non-zero value in the fullmm case.
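
In sketch form, the decoupled predicates after this patch look as
follows (again a simplified userspace illustration, not the kernel
code: a single nr counter stands in for tlb->local.nr and the batch
list that the real tlb_flush_mmu_free walks):

#include <stdio.h>

struct mmu_gather {
	unsigned long start, end;
	unsigned int nr;
};

/* Invalidation keeps the tlb->end predicate... */
static void tlb_flush_mmu_tlbonly_sketch(struct mmu_gather *tlb)
{
	if (!tlb->end)
		return;
	printf("invalidate [%#lx, %#lx)\n", tlb->start, tlb->end);
}

/* ...while freeing now keys off the queued-page count alone. */
static void tlb_flush_mmu_free_sketch(struct mmu_gather *tlb)
{
	if (!tlb->nr)
		return;
	printf("free %u pages\n", tlb->nr);
	tlb->nr = 0;
}

static void tlb_flush_mmu_sketch(struct mmu_gather *tlb)
{
	tlb_flush_mmu_tlbonly_sketch(tlb);	/* no early return here any more */
	tlb_flush_mmu_free_sketch(tlb);
}

int main(void)
{
	/* Range reset by tlb_end_vma(), but pages still queued: */
	struct mmu_gather tlb = { .start = 0x1000, .end = 0, .nr = 3 };

	tlb_flush_mmu_sketch(&tlb);	/* frees the 3 pages despite end == 0 */
	return 0;
}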

Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Dave Hansen <dave@sr71.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Will Deacon authored and Linus Torvalds committed Jan 13, 2015
1 parent eaa27f3 commit 721c21c
Showing 2 changed files with 10 additions and 6 deletions.
include/asm-generic/tlb.h: 6 additions & 2 deletions

@@ -136,8 +136,12 @@ static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 
 static inline void __tlb_reset_range(struct mmu_gather *tlb)
 {
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
+	if (tlb->fullmm) {
+		tlb->start = tlb->end = ~0;
+	} else {
+		tlb->start = TASK_SIZE;
+		tlb->end = 0;
+	}
 }
 
 /*
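
A note on the ~0 sentinel in the hunk above (my reading, phrased as a
compilable userspace re-statement; TASK_SIZE_SKETCH is a stand-in name
for the kernel's TASK_SIZE): the all-ones value keeps tlb->end non-zero
for a fullmm gather, so the exit-time path no longer trips the
tlb->end == 0 early return.

#include <assert.h>

/* Reduced struct; the real one is in include/asm-generic/tlb.h. */
struct mmu_gather {
	int fullmm;
	unsigned long start, end;
};

#define TASK_SIZE_SKETCH 0x0000800000000000UL	/* stand-in for TASK_SIZE */

/* Userspace re-statement of the new __tlb_reset_range() above. */
static void tlb_reset_range_sketch(struct mmu_gather *tlb)
{
	if (tlb->fullmm) {
		/* Non-zero sentinel covering everything: the
		 * tlb->end != 0 invalidation predicate stays true. */
		tlb->start = tlb->end = ~0UL;
	} else {
		tlb->start = TASK_SIZE_SKETCH;	/* shrunk via min() as ranges batch */
		tlb->end = 0;			/* grown via max() as ranges batch */
	}
}

int main(void)
{
	struct mmu_gather exit_tlb = { .fullmm = 1 };

	tlb_reset_range_sketch(&exit_tlb);
	assert(exit_tlb.end != 0);	/* exit-time flush is not skipped */
	return 0;
}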
mm/memory.c: 4 additions & 4 deletions

@@ -235,6 +235,9 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
+	if (!tlb->end)
+		return;
+
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -247,7 +250,7 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
-	for (batch = &tlb->local; batch; batch = batch->next) {
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
 	}
@@ -256,9 +259,6 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
-		return;
-
 	tlb_flush_mmu_tlbonly(tlb);
 	tlb_flush_mmu_free(tlb);
 }
