IB: Increase DMA max_segment_size on Mellanox hardware

By default, each device is assumed to be able to handle only 64 KB
chunks during DMA. By giving the segment size a larger value, the
block layer will coalesce more S/G entries together for SRP, allowing
larger requests with the same sg_tablesize setting. The block layer
is the only direct user of it, though a few IOMMU drivers reference
it as well for their *_map_sg coalescing code: pci-gart_64 on x86,
and a smattering on sparc, powerpc, and ia64.
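
Concretely, the knob is the per-device DMA parameters. Below is a
minimal sketch of the pattern, assuming a hypothetical probe function
(dma_set_max_seg_size(), dma_get_max_seg_size(), and
blk_queue_max_segment_size() are the real kernel interfaces; the SCSI
midlayer does the queue half of this when it allocates a request
queue):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/blkdev.h>

/* Hypothetical probe path: raise the advertised per-segment DMA limit. */
static int example_probe(struct device *dev, struct request_queue *q)
{
	int ret;

	/*
	 * Without this, the device advertises the 64 KB default, and
	 * the block layer (plus the few IOMMU *_map_sg implementations
	 * that honor it) will not build larger segments.
	 */
	ret = dma_set_max_seg_size(dev, 1024 * 1024 * 1024);
	if (ret)
		return ret;

	/*
	 * Consumers can mirror the device limit into the queue limits,
	 * as the SCSI midlayer does for its request queues.
	 */
	blk_queue_max_segment_size(q, dma_get_max_seg_size(dev));
	return 0;
}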

Since other IB protocols could potentially see larger segments with
this, let's check those:

 - iSER is fine, because it limits the maximum request size to 512
   KB, so we'll never overrun the page vector in struct iser_page_vec
   (128 entries currently). It is independent of the DMA segment size
   and already handles multi-page segments.

 - IPoIB is fine, as it maps each page individually, and doesn't use
   ib_dma_map_sg().

 - RDS appears to do the right thing and has no dependencies on DMA
   segment size, but I don't claim to have done a complete audit.

 - NFSoRDMA and 9p are OK -- they do not use ib_dma_map_sg(), so they
   don't care about the coalescing.

 - Lustre's ko2iblnd does not care about coalescing -- it properly
   walks the returned sg list (see the sketch after this list).
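
For reference, the property the audit above is checking for looks
roughly like this -- a hypothetical ULP helper (not lifted from any
of the drivers above) that maps a scatterlist and then trusts only
the count and per-segment lengths that ib_dma_map_sg() hands back, so
larger coalesced segments are harmless:

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>

/* Hypothetical helper showing the coalescing-safe mapping pattern. */
static int example_map_and_walk(struct ib_device *ibdev,
				struct scatterlist *sgl, int nents)
{
	struct scatterlist *sg;
	int i, count;

	count = ib_dma_map_sg(ibdev, sgl, nents, DMA_TO_DEVICE);
	if (!count)
		return -EIO;

	/*
	 * Walk the mapped list: count may be smaller than nents, and a
	 * single segment may now cover many pages.
	 */
	for_each_sg(sgl, sg, count, i) {
		u64 addr = ib_sg_dma_address(ibdev, sg);
		unsigned int len = ib_sg_dma_len(ibdev, sg);

		/* Post one RDMA work-request entry per mapped segment. */
		pr_debug("segment %d: addr 0x%llx len %u\n",
			 i, (unsigned long long) addr, len);
	}

	ib_dma_unmap_sg(ibdev, sgl, nents, DMA_TO_DEVICE);
	return 0;
}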

This patch ups the value on Mellanox hardware to 1 GB, which matches
reported firmware limits on mlx4.

Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
David Dillow authored and Roland Dreier committed Mar 22, 2011
1 parent 1eba843 commit 7f9e5c4
Showing 2 changed files with 6 additions and 0 deletions.
3 changes: 3 additions & 0 deletions drivers/infiniband/hw/mthca/mthca_main.c
@@ -1043,6 +1043,9 @@ static int __mthca_init_one(struct pci_dev *pdev, int hca_type)
 	}
 	}
 
+	/* We can handle large RDMA requests, so allow larger segments. */
+	dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+
 	mdev = (struct mthca_dev *) ib_alloc_device(sizeof *mdev);
 	if (!mdev) {
 		dev_err(&pdev->dev, "Device struct alloc failed, "
3 changes: 3 additions & 0 deletions drivers/net/mlx4/main.c
@@ -1109,6 +1109,9 @@ static int __mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	}
 	}
 
+	/* Allow large DMA segments, up to the firmware limit of 1 GB */
+	dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+
 	priv = kzalloc(sizeof *priv, GFP_KERNEL);
 	if (!priv) {
 		dev_err(&pdev->dev, "Device struct alloc failed, "
