arm64: Fix DMA range invalidation for cache line unaligned buffers
If the buffer needing cache invalidation for inbound DMA does not start or
end on a cache line aligned address, we need to use the non-destructive
clean&invalidate operation. This issue was introduced by commit
7363590 (arm64: Implement coherent DMA API based on swiotlb).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Catalin Marinas committed Apr 8, 2014
1 parent d253b44 commit ebf81a9
Showing 1 changed file with 11 additions and 4 deletions.
15 changes: 11 additions & 4 deletions arch/arm64/mm/cache.S
@@ -183,12 +183,19 @@ ENTRY(__inval_cache_range)
 __dma_inv_range:
 	dcache_line_size x2, x3
 	sub	x3, x2, #1
-	bic	x0, x0, x3
+	tst	x1, x3				// end cache line aligned?
 	bic	x1, x1, x3
-1:	dc	ivac, x0			// invalidate D / U line
-	add	x0, x0, x2
+	b.eq	1f
+	dc	civac, x1			// clean & invalidate D / U line
+1:	tst	x0, x3				// start cache line aligned?
+	bic	x0, x0, x3
+	b.eq	2f
+	dc	civac, x0			// clean & invalidate D / U line
+	b	3f
+2:	dc	ivac, x0			// invalidate D / U line
+3:	add	x0, x0, x2
 	cmp	x0, x1
-	b.lo	1b
+	b.lo	2b
 	dsb	sy
 	ret
 ENDPROC(__inval_cache_range)
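
For readers less familiar with the cache maintenance instructions, below is a minimal, standalone C sketch of the logic the new __dma_inv_range implements. The helpers dc_ivac() and dc_civac() and the CACHE_LINE_SIZE constant are hypothetical stand-ins (here they simply print the operation that would be issued); this illustrates the approach taken by the commit, not actual kernel code.

/* Standalone sketch: prints which cache maintenance operation the fixed
 * __dma_inv_range logic would issue for each line of a sample buffer.
 * dc_ivac()/dc_civac() stand in for the "dc ivac"/"dc civac" instructions. */
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_SIZE 64UL	/* assumed D-cache line size for the example */

static void dc_ivac(uintptr_t addr)  { printf("dc ivac,  %#lx\n", (unsigned long)addr); }
static void dc_civac(uintptr_t addr) { printf("dc civac, %#lx\n", (unsigned long)addr); }

static void dma_inv_range(uintptr_t start, uintptr_t end)
{
	unsigned long mask = CACHE_LINE_SIZE - 1;

	/* A partially covered last line may hold dirty data belonging to an
	 * adjacent buffer: clean & invalidate it instead of discarding it. */
	if (end & mask)
		dc_civac(end & ~mask);
	end &= ~mask;

	/* Likewise for a partially covered first line. */
	if (start & mask)
		dc_civac(start & ~mask);
	else
		dc_ivac(start);
	start = (start & ~mask) + CACHE_LINE_SIZE;

	/* Lines fully inside the range can be destructively invalidated. */
	for (; start < end; start += CACHE_LINE_SIZE)
		dc_ivac(start);

	/* The assembly then issues "dsb sy" to complete the maintenance. */
}

int main(void)
{
	/* Buffer that starts and ends mid-line: both boundary lines get civac. */
	dma_inv_range(0x1010, 0x10f0);
	return 0;
}

The key point is that a cache line only partially covered at either end of the range is cleaned before being invalidated, so dirty data from an adjacent buffer sharing that line is written back rather than silently discarded; only lines fully inside the range use the destructive dc ivac.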
