arm64: ensure completion of TLB invalidation
Currently there is no dsb between the tlbi in __cpu_setup and the write
to SCTLR_EL1, which enables the MMU in __turn_mmu_on. This means that the
TLB invalidation is not guaranteed to have completed at the point
address translation is enabled, leading to a number of possible issues
including incorrect translations and TLB conflict faults.

This patch moves the tlbi in __cpu_setup above an existing dsb used to
synchronise I-cache invalidation, ensuring that the TLBs have been
invalidated at the point the MMU is enabled.
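
For illustration only, a minimal sketch of the ordering the architecture
requires (the MMU-enabling write actually lives in __turn_mmu_on in head.S;
the register usage shown here is hypothetical):

	tlbi	vmalle1is		// invalidate I + D TLBs (inner shareable)
	dsb	sy			// wait for the invalidation to complete
	// ... later, when enabling the MMU ...
	msr	sctlr_el1, x0		// hypothetical: x0 holds SCTLR_EL1 with the M bit set
	isb				// synchronise the context change before further fetches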

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland authored and Catalin Marinas committed Dec 6, 2013
1 parent dc1ccc4 commit 3cea71b
 arch/arm64/mm/proc.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -111,12 +111,12 @@ ENTRY(__cpu_setup)
 	bl	__flush_dcache_all
 	mov	lr, x28
 	ic	iallu				// I+BTB cache invalidate
+	tlbi	vmalle1is			// invalidate I + D TLBs
 	dsb	sy
 
 	mov	x0, #3 << 20
 	msr	cpacr_el1, x0			// Enable FP/ASIMD
 	msr	mdscr_el1, xzr			// Reset mdscr_el1
-	tlbi	vmalle1is			// invalidate I + D TLBs
 	/*
 	 * Memory region attributes for LPAE:
 	 *
