arm64: spinlock: Fix theoretical trylock() A-B-A with LSE atomics
If the spinlock "next" ticket wraps around between the initial LDR
and the cmpxchg in the LSE version of spin_trylock, then we can
erroneously think that we have successfully acquired the lock, because
we only check whether the "next" ticket returned by the cmpxchg is
equal to the owner ticket in our updated lock word.

This patch fixes the issue by performing a full 32-bit check of the lock
word when trying to determine whether or not the CASA instruction updated
memory.
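
To make the failure mode concrete, here is a minimal C11 sketch of the
trylock logic, modelling CASA with atomic_compare_exchange_strong_explicit(),
which, like CASA, hands back the value actually found in memory when the
compare fails. The function and variable names are illustrative only, not
kernel code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define TICKET_SHIFT	16	/* "owner" in bits [15:0], "next" in bits [31:16] */

static bool trylock_model(_Atomic uint32_t *lock)
{
	uint32_t lockval = atomic_load_explicit(lock, memory_order_relaxed);

	/* The lock is free only when owner == next. */
	if ((lockval & 0xffff) != (lockval >> TICKET_SHIFT))
		return false;

	uint32_t newval = lockval + (1u << TICKET_SHIFT);	/* bump "next" */
	uint32_t expected = lockval;

	/* Models CASA: on failure, 'expected' is overwritten with the word
	 * actually found in memory. The asm likewise derives success from
	 * the registers afterwards rather than from a status flag. */
	atomic_compare_exchange_strong_explicit(lock, &expected, newval,
						memory_order_acquire,
						memory_order_relaxed);

	/* The old check compared only 16 bits:
	 *
	 *	return (newval & 0xffff) == (expected >> TICKET_SHIFT);
	 *
	 * If "next" wrapped around 2^16 between the load and the CAS, a
	 * failed CAS can return a lock word whose "next" half happens to
	 * equal our owner ticket, so the check wrongly reports success.
	 *
	 * The fixed check undoes the increment and compares all 32 bits,
	 * which is exactly the condition under which CASA updated memory. */
	return (newval - (1u << TICKET_SHIFT)) == expected;
}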

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Will Deacon authored and Catalin Marinas committed Feb 6, 2018
1 parent ec89ab5 commit 202fb4e
Showing 1 changed file with 2 additions and 2 deletions.
arch/arm64/include/asm/spinlock.h

@@ -87,8 +87,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	"	cbnz	%w1, 1f\n"
 	"	add	%w1, %w0, %3\n"
 	"	casa	%w0, %w1, %2\n"
-	"	and	%w1, %w1, #0xffff\n"
-	"	eor	%w1, %w1, %w0, lsr #16\n"
+	"	sub	%w1, %w1, %3\n"
+	"	eor	%w1, %w1, %w0\n"
 	"1:")
 	: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
 	: "I" (1 << TICKET_SHIFT)
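
For reference, a commented reading of the fixed LSE sequence (the
comments are editorial, not present in the source):

	"	cbnz	%w1, 1f\n"	/* owner != next: lock held, fail */
	"	add	%w1, %w0, %3\n"	/* %w1 = lockval + (1 << TICKET_SHIFT) */
	"	casa	%w0, %w1, %2\n"	/* if (*lock == %w0) *lock = %w1; %w0 = old *lock */
	"	sub	%w1, %w1, %3\n"	/* recover the lock word we expected to find */
	"	eor	%w1, %w1, %w0\n"	/* zero iff CASA read exactly that word */
	"1:")				/* %w1 == 0 <=> lock acquired */

The sub/eor pair costs the same two instructions as the old and/eor pair,
but it reconstructs the expected old value and compares the full 32-bit
word, so a stale, wrapped "next" ticket can no longer masquerade as a
successful acquisition.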
