net/mlx5: Allocating a pool of MSI-X vectors for SFs
SFs (Sub Functions) currently use IRQs from the global IRQ table of their
parent Physical Function. In order to scale better, we need to allocate
more IRQs and share them between different SFs.

The driver will maintain 3 separate IRQ pools:
1. A pool that serves the PF consumers (the PF's netdev and rdma stacks),
similar to what the driver had before this patch, i.e. this pool shares
IRQs between rdma and netdev and keeps the IRQ indexes and allocation
order. The latter is important for the PF netdev rmap (aRFS).

2. A pool of control IRQs for SFs. The size of this pool is the number
of SFs that can be created divided by SFS_PER_IRQ. This pool serves the
control path EQs of the SFs.

3. A pool of completion (data path) IRQs for SF transport queues. The
size of this pool is:
num_irqs_allocated - pf_pool_size - sf_ctrl_pool_size
(see the sizing sketch after this list). This pool serves the netdev and
rdma stacks. Moreover, rmap is not supported on SFs.
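
To make the sizing concrete, here is a small standalone sketch of the
arithmetic described above. It is not driver code; total_msix,
pf_pool_size and max_sfs are made-up example values, and the SFS_PER_IRQ
constant mirrors the divisor named in the list.

/* Illustrative arithmetic only -- not mlx5 driver code. */
#include <stdio.h>

#define SFS_PER_IRQ 8	/* assumed example value */

int main(void)
{
	int total_msix   = 64;	/* MSI-X vectors the device allocated */
	int pf_pool_size = 16;	/* pool 1: PF netdev/rdma IRQs */
	int max_sfs      = 128;	/* SFs the device can create */

	/* pool 2: one control IRQ is shared by SFS_PER_IRQ SFs */
	int sf_ctrl_pool_size = max_sfs / SFS_PER_IRQ;

	/* pool 3: whatever is left becomes SF completion IRQs */
	int sf_comp_pool_size = total_msix - pf_pool_size - sf_ctrl_pool_size;

	printf("pf=%d sf_ctrl=%d sf_comp=%d\n",
	       pf_pool_size, sf_ctrl_pool_size, sf_comp_pool_size);
	return 0;
}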

The sharing methodology of the SF pools is explained in the next patch.

Important note: rmap is not supported on SFs because rmap mapping cannot
function correctly for IRQs that are shared between different core/netdev
RX rings; the usual one-IRQ-per-ring registration is sketched below.
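
For background on why sharing breaks it: aRFS builds a reverse map that
assumes each RX ring owns exactly one completion IRQ, so the IRQ's CPU
affinity identifies the ring. Below is a minimal sketch of that generic
one-IRQ-per-ring registration pattern (not the mlx5 code; it assumes
CONFIG_RFS_ACCEL, and the setup_rx_cpu_rmap() helper and comp_irq[] array
are hypothetical).

/* Generic rmap registration sketch -- assumes a dedicated IRQ per RX ring. */
#include <linux/cpu_rmap.h>
#include <linux/netdevice.h>

static int setup_rx_cpu_rmap(struct net_device *netdev,
			     const int *comp_irq, int num_rings)
{
	int i, err;

	netdev->rx_cpu_rmap = alloc_irq_cpu_rmap(num_rings);
	if (!netdev->rx_cpu_rmap)
		return -ENOMEM;

	for (i = 0; i < num_rings; i++) {
		/* one entry per ring: breaks down once rings share an IRQ */
		err = irq_cpu_rmap_add(netdev->rx_cpu_rmap, comp_irq[i]);
		if (err) {
			free_irq_cpu_rmap(netdev->rx_cpu_rmap);
			netdev->rx_cpu_rmap = NULL;
			return err;
		}
	}
	return 0;
}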

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Shay Drory authored and Saeed Mahameed committed Jun 15, 2021
1 parent fc63dd2 commit 71e084e
Showing 3 changed files with 209 additions and 101 deletions.
12 changes: 4 additions & 8 deletions drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -471,14 +471,7 @@ static int create_async_eq(struct mlx5_core_dev *dev,
 	int err;
 
 	mutex_lock(&eq_table->lock);
-	/* Async EQs must share irq index 0 */
-	if (param->irq_index != 0) {
-		err = -EINVAL;
-		goto unlock;
-	}
-
 	err = create_map_eq(dev, eq, param);
-unlock:
 	mutex_unlock(&eq_table->lock);
 	return err;
 }
@@ -996,8 +989,11 @@ int mlx5_eq_table_create(struct mlx5_core_dev *dev)
 
 	eq_table->num_comp_eqs =
 		min_t(int,
-		      mlx5_irq_get_num_comp(eq_table->irq_table),
+		      mlx5_irq_table_get_num_comp(eq_table->irq_table),
 		      num_eqs - MLX5_MAX_ASYNC_EQS);
+	if (mlx5_core_is_sf(dev))
+		eq_table->num_comp_eqs = min_t(int, eq_table->num_comp_eqs,
+					       MLX5_COMP_EQS_PER_SF);
 
 	err = create_async_eqs(dev);
 	if (err) {
6 changes: 5 additions & 1 deletion drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
@@ -6,13 +6,17 @@
 
 #include <linux/mlx5/driver.h>
 
+#define MLX5_COMP_EQS_PER_SF 8
+
+#define MLX5_IRQ_EQ_CTRL (0)
+
 struct mlx5_irq;
 
 int mlx5_irq_table_init(struct mlx5_core_dev *dev);
 void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
 int mlx5_irq_table_create(struct mlx5_core_dev *dev);
 void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
-int mlx5_irq_get_num_comp(struct mlx5_irq_table *table);
+int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table);
 struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev);
 
 int mlx5_set_msix_vec_count(struct mlx5_core_dev *dev, int devfn,
