swiotlb: fix slot alignment checks

Explicit alignment and page alignment are used only to calculate
the stride, not when checking the actual slot's physical address.

Originally, only page alignment was implemented, and that worked,
because the whole SWIOTLB is allocated on a page boundary, so
aligning the start index was sufficient to ensure a page-aligned
slot.

When commit 1f221a0 ("swiotlb: respect min_align_mask") added
support for min_align_mask, the index could be incremented in the
search loop, potentially finding an unaligned slot if minimum device
alignment is between IO_TLB_SIZE and PAGE_SIZE.  The bug could go
unnoticed, because the slot size is 2 KiB, and the most common page
size is 4 KiB, so there is no alignment value in between.

IIUC the intention has been to find a slot that conforms to all
alignment constraints: device minimum alignment, an explicit
alignment (given as a function parameter) and optionally page
alignment (if allocation size is >= PAGE_SIZE). The most
restrictive mask can be trivially computed with logical AND. The
rest can stay.
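
For reference, the slot check inside the search loop (context not
visible in the hunk below, condensed here from swiotlb_do_find_slots()
as of this commit) is what consumes the combined mask; once page and
explicit alignment are folded into iotlb_align_mask, this single
physical-address comparison is meant to cover all three constraints:

	if (orig_addr &&
	    (slot_addr(tbl_dma_addr, slot_index) &
	     iotlb_align_mask) != (orig_addr & iotlb_align_mask)) {
		index = wrap_area_index(mem, index + 1);	/* try next slot */
		slots_checked++;
		continue;
	}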

Fixes: 1f221a0 ("swiotlb: respect min_align_mask")
Fixes: e81e99b ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Petr Tesarik authored and Christoph Hellwig committed Mar 22, 2023
1 parent 39e7d2a commit 0eee5ae
Showing 1 changed file with 10 additions and 6 deletions.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -634,22 +634,26 @@ static int swiotlb_do_find_slots(struct device *dev, int area_index,
 	BUG_ON(!nslots);
 	BUG_ON(area_index >= mem->nareas);
 
+	/*
+	 * For allocations of PAGE_SIZE or larger only look for page aligned
+	 * allocations.
+	 */
+	if (alloc_size >= PAGE_SIZE)
+		iotlb_align_mask &= PAGE_MASK;
+	iotlb_align_mask &= alloc_align_mask;
+
 	/*
 	 * For mappings with an alignment requirement don't bother looping to
-	 * unaligned slots once we found an aligned one. For allocations of
-	 * PAGE_SIZE or larger only look for page aligned allocations.
+	 * unaligned slots once we found an aligned one.
 	 */
 	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
-	if (alloc_size >= PAGE_SIZE)
-		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
-	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
 
 	spin_lock_irqsave(&area->lock, flags);
 	if (unlikely(nslots > mem->area_nslabs - area->used))
 		goto not_found;
 
 	slot_base = area_index * mem->area_nslabs;
-	index = wrap_area_index(mem, ALIGN(area->index, stride));
+	index = area->index;
 
 	for (slots_checked = 0; slots_checked < mem->area_nslabs; ) {
 		slot_index = slot_base + index;
