iommu: Decouple iommu_map_sg from CPU page size
If the IOMMU supports pages smaller than the CPU page size, segments
which lie at offsets within the CPU page may be mapped based on the
finer-grained IOMMU page boundaries. This minimises the amount of
non-buffer memory between the CPU page boundary and the start of the
segment which must be mapped and therefore exposed to the device, and
brings the default iommu_map_sg implementation in line with
iommu_map/unmap with respect to alignment.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Robin Murphy authored and Joerg Roedel committed Dec 2, 2014
1 parent 0690cbd · commit 18f2340
Showing 1 changed file with 15 additions and 5 deletions.
drivers/iommu/iommu.c: 15 additions & 5 deletions
@@ -1143,14 +1143,24 @@ size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 {
 	struct scatterlist *s;
 	size_t mapped = 0;
-	unsigned int i;
+	unsigned int i, min_pagesz;
 	int ret;
 
-	for_each_sg(sg, s, nents, i) {
-		phys_addr_t phys = page_to_phys(sg_page(s));
+	if (unlikely(domain->ops->pgsize_bitmap == 0UL))
+		return 0;
 
-		/* We are mapping on page boundarys, so offset must be 0 */
-		if (s->offset)
+	min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+
+	for_each_sg(sg, s, nents, i) {
+		phys_addr_t phys = page_to_phys(sg_page(s)) + s->offset;
+
+		/*
+		 * We are mapping on IOMMU page boundaries, so offset within
+		 * the page must be 0. However, the IOMMU may support pages
+		 * smaller than PAGE_SIZE, so s->offset may still represent
+		 * an offset of that boundary within the CPU page.
+		 */
+		if (!IS_ALIGNED(s->offset, min_pagesz))
 			goto out_err;
 
 		ret = iommu_map(domain, iova + mapped, phys, s->length, prot);
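
The arithmetic behind the new check is easy to see in isolation. Below is a minimal user-space sketch (not part of the patch) comparing the old zero-offset test with the new alignment test, assuming a hypothetical IOMMU whose pgsize_bitmap advertises 1K, 4K and 64K pages; the bitmap value and the segment offsets are invented, and the kernel's __ffs() and IS_ALIGNED() helpers are approximated with standard C.

/*
 * Editor's illustration only: user-space sketch of the min_pagesz /
 * alignment arithmetic used in the patch. The pgsize_bitmap value and
 * segment offsets are made up; __ffs() is approximated with
 * __builtin_ctzl() and IS_ALIGNED() with a mask test.
 */
#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	/* Hypothetical IOMMU supporting 1K, 4K and 64K pages. */
	unsigned long pgsize_bitmap = (1UL << 10) | (1UL << 12) | (1UL << 16);

	/* Smallest supported IOMMU page: lowest set bit of the bitmap. */
	unsigned long min_pagesz = 1UL << __builtin_ctzl(pgsize_bitmap);

	/* Example segment offsets within a 4K CPU page. */
	unsigned long offsets[] = { 0x0, 0x400, 0x600 };

	for (int i = 0; i < 3; i++) {
		unsigned long off = offsets[i];

		printf("offset 0x%lx: old check %s, new check %s\n", off,
		       off == 0 ? "maps" : "rejects",
		       IS_ALIGNED(off, min_pagesz) ? "maps" : "rejects");
	}
	return 0;
}

With min_pagesz = 1K in this scenario, a segment starting 0x400 bytes into a CPU page is now mapped from page_to_phys(page) + 0x400 rather than rejected; only offsets that are not a multiple of the smallest supported IOMMU page size still take the out_err path.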
