[ARM] dma: don't touch cache on dma_*_for_cpu()
As per the dma_unmap_* calls, we don't touch the cache when a DMA
buffer transitions from device to CPU ownership.  Presently, no
problems have been identified with speculative cache prefetching,
which is itself a new feature in later architectures.  We may
have to revisit the DMA API later for these architectures anyway.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Russell King authored and Russell King committed Sep 30, 2008
1 parent 0e18b5d commit 309dbba
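For context (not part of this commit), the ownership model the commit message refers to works roughly as follows in a driver using the streaming DMA API: the buffer belongs to the device between dma_map_single()/dma_sync_single_for_device() and dma_sync_single_for_cpu()/dma_unmap_single(), and to the CPU otherwise. Below is a minimal sketch of a hypothetical receive path; the function name, device, buffer and length are made up for illustration, and only the dma_* calls are the real API:

/*
 * Sketch only: hands a streaming DMA buffer back and forth between
 * device and CPU ownership.  process_rx_buffer(), buf and len are
 * hypothetical; the dma_* calls are the standard DMA API.
 */
#include <linux/dma-mapping.h>

static void process_rx_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* CPU -> device ownership: cache maintenance happens here. */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return;

	/* ... device DMAs into the buffer ... */

	/*
	 * Device -> CPU ownership.  After this commit, on non-coherent
	 * ARM this no longer touches the cache (matching dma_unmap_*);
	 * the maintenance was already done when ownership last moved to
	 * the device.
	 */
	dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);

	/* The CPU may now read the received data in buf. */

	/* CPU -> device ownership again before the device reuses the buffer. */
	dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);

	/* ... more DMA ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
}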
Showing 2 changed files with 3 additions and 11 deletions.
6 changes: 1 addition & 5 deletions arch/arm/include/asm/dma-mapping.h
@@ -376,11 +376,7 @@ static inline void dma_sync_single_range_for_cpu(struct device *dev,
 {
 	BUG_ON(!valid_dma_direction(dir));
 
-	if (!dmabounce_sync_for_cpu(dev, handle, offset, size, dir))
-		return;
-
-	if (!arch_is_coherent())
-		dma_cache_maint(dma_to_virt(dev, handle) + offset, size, dir);
+	dmabounce_sync_for_cpu(dev, handle, offset, size, dir);
 }
 
 static inline void dma_sync_single_range_for_device(struct device *dev,
8 changes: 2 additions & 6 deletions arch/arm/mm/dma-mapping.c
@@ -585,12 +585,8 @@ void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
 	int i;
 
 	for_each_sg(sg, s, nents, i) {
-		if (!dmabounce_sync_for_cpu(dev, sg_dma_address(s), 0,
-					    sg_dma_len(s), dir))
-			continue;
-
-		if (!arch_is_coherent())
-			dma_cache_maint(sg_virt(s), s->length, dir);
+		dmabounce_sync_for_cpu(dev, sg_dma_address(s), 0,
+					sg_dma_len(s), dir);
 	}
 }
 EXPORT_SYMBOL(dma_sync_sg_for_cpu);
