arm64/lib: copy_page: use consistent prefetch stride
The optional prefetch instructions in the copy_page() routine are
inconsistent: at the start of the function, two cachelines are
prefetched beyond the one being loaded in the first iteration, but
in the loop, the prefetch is one more line ahead. This appears to
be unintentional, so let's fix it.

While at it, fix the comment style and white space.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
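
To make the stride arithmetic concrete, here is a rough C analogue of the schedule the fix establishes. This is only a sketch: copy_page_sketch, the 4 KiB PAGE_SIZE, and the mapping of prfm pldl1strm onto __builtin_prefetch(..., 0, 0) are illustrative assumptions, not the kernel's implementation (the real routine is the hand-written assembly shown in the diff below).

#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096	/* assumed 4 KiB pages */
#define STRIDE	  128	/* bytes copied per loop iteration in copy_page.S */

/* Illustrative sketch of the fixed prefetch schedule, not kernel code. */
static void copy_page_sketch(char *dst, const char *src)
{
	size_t off;

	/*
	 * Warm-up: pull the next three chunks toward the cache while the
	 * first chunk is copied -- the #128/#256/#384 prefetches at entry.
	 */
	__builtin_prefetch(src + 1 * STRIDE, 0, 0);
	__builtin_prefetch(src + 2 * STRIDE, 0, 0);
	__builtin_prefetch(src + 3 * STRIDE, 0, 0);
	memcpy(dst, src, STRIDE);

	for (off = STRIDE; off < PAGE_SIZE; off += STRIDE) {
		/*
		 * Steady state: stay exactly three chunks (384 bytes) ahead,
		 * matching the loop's prfm pldl1strm, [x1, #384]. Before the
		 * fix, the warm-up stopped at #256, so the distance was two
		 * chunks at entry but three inside the loop.
		 */
		__builtin_prefetch(src + off + 3 * STRIDE, 0, 0);
		memcpy(dst + off, src + off, STRIDE);
	}
}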
Ard Biesheuvel authored and Will Deacon committed Jul 25, 2017
1 parent: ece4b20 · commit: 288be97
1 changed file: arch/arm64/lib/copy_page.S (5 additions, 4 deletions)
@@ -30,9 +30,10 @@
  */
 ENTRY(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	# Prefetch two cache lines ahead.
-	prfm    pldl1strm, [x1, #128]
-	prfm    pldl1strm, [x1, #256]
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif

 	ldp	x2, x3, [x1]
@@ -50,7 +51,7 @@ alternative_else_nop_endif
 	subs	x18, x18, #128

 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	prfm    pldl1strm, [x1, #384]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif

 	stnp	x2, x3, [x0]
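As a design note, both hunks sit between alternative_if ARM64_HAS_NO_HW_PREFETCH and alternative_else_nop_endif, so the prfm hints are live only on cores that advertise the ARM64_HAS_NO_HW_PREFETCH capability (i.e. no hardware prefetcher); on all other CPUs the alternatives framework leaves NOPs in their place, making the software prefetches free where the hardware already streams the data.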
