arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.
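
As a hypothetical illustration (all numbers are invented for this note;
4 KiB pages, 1 GiB sparsemem sections and MAX_ORDER_NR_PAGES == 0x400
pfns are assumed):

  memblock:  base 0x81000000, size 0x3f000000        ->  pfns 0x81000..0xc0000
  start after the SPARSEMEM rounding:                     0x80000 (section boundary)
  old end:   ALIGN(0x80000 + 0x3f000, 0x400)          =  0xbf000
  new end:   ALIGN(__phys_to_pfn(0xc0000000), 0x400)  =  0xc0000

With the old computation the block appears to end at pfn 0xbf000, so the
memmap for pfns 0xbf000..0xc0000, which covers RAM that is actually
present, can be handed to free_memmap() when the gap before the next
memblock is processed.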

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.
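
For context, here is a condensed sketch of the loop being patched
(paraphrased from the arm64 free_unused_memmap() of this era rather than
quoted verbatim; the SPARSEMEM clamp and the free_memmap() helper are
simplified):

	unsigned long start, prev_end = 0;
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		start = __phys_to_pfn(reg->base);
#ifdef CONFIG_SPARSEMEM
		/* Round start down so no absent section's memmap is freed. */
		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
		/* Free the memmap covering the gap after the previous block. */
		if (prev_end && prev_end < start)
			free_memmap(prev_end, start);

		/*
		 * Before the fix the end was computed as
		 * ALIGN(start + __phys_to_pfn(reg->size), MAX_ORDER_NR_PAGES),
		 * reusing the possibly rounded-down start and so recording an
		 * end that falls short of the real one.  Deriving the end from
		 * the memblock itself avoids that:
		 */
		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
				 MAX_ORDER_NR_PAGES);
	}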

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Dave P Martin authored and Catalin Marinas committed Jun 17, 2015
1 parent 46b0567 commit b9bcc91
Showing 1 changed file with 1 addition and 1 deletion.
arch/arm64/mm/init.c
@@ -262,7 +262,7 @@ static void __init free_unused_memmap(void)
 		 * memmap entries are valid from the bank end aligned to
 		 * MAX_ORDER_NR_PAGES.
 		 */
-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
 				 MAX_ORDER_NR_PAGES);
 	}

