vfio/mlx5: Align the page tracking max message size with the device capability

Align the page tracking maximum message size with the device's
capability instead of relying on PAGE_SIZE.

This adjustment resolves a mismatch on systems where PAGE_SIZE is 64K,
but the firmware only supports a maximum message size of 4K.

Now that we rely on the device's capability for max_message_size, we
must account for potential future increases in its value.

Key considerations include:
- Supporting message sizes that exceed a single system page (e.g., an 8K
  message on a 4K system).
- Ensuring the RQ size is adjusted to accommodate at least 4
  WQEs/messages, in line with the device specification.

The above has been addressed as part of the patch; a standalone sketch of the sizing arithmetic follows the tags below.

Fixes: 79c3cf2 ("vfio/mlx5: Init QP based resources for dirty tracking")
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Tested-by: Yingshun Cui <yicui@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Link: https://lore.kernel.org/r/20241205122654.235619-1-yishaih@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
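
[Editor's note] For illustration, a minimal standalone sketch of the new sizing rule, under stated assumptions: the capability value below is hypothetical (the driver reads the real log size via MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size)), and SZ_2M mirrors the kernel constant.

#include <stdint.h>
#include <stdio.h>

#define SZ_2M (2ULL * 1024 * 1024)

int main(void)
{
	/* Hypothetical future capability: log2 of the max message size (1M). */
	uint32_t log_max_msg_size = 20;
	uint64_t max_msg_size = 1ULL << log_max_msg_size;
	uint64_t rq_size = SZ_2M;

	/* The RQ must hold at least 4 WQEs/messages per the device spec. */
	if (rq_size < 4 * max_msg_size)
		rq_size = 4 * max_msg_size;

	/* Prints max_msg_size=1048576 rq_size=4194304: the default 2M RQ
	 * grows to 4M so that 4 messages still fit.
	 */
	printf("max_msg_size=%llu rq_size=%llu\n",
	       (unsigned long long)max_msg_size,
	       (unsigned long long)rq_size);
	return 0;
}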
Yishai Hadas authored and Alex Williamson committed Dec 5, 2024
1 parent 40384c8 commit 9c7c543
 drivers/vfio/pci/mlx5/cmd.c | 47 +++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
--- a/drivers/vfio/pci/mlx5/cmd.c
+++ b/drivers/vfio/pci/mlx5/cmd.c
@@ -1517,7 +1517,8 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
 	struct mlx5_vhca_qp *host_qp;
 	struct mlx5_vhca_qp *fw_qp;
 	struct mlx5_core_dev *mdev;
-	u32 max_msg_size = PAGE_SIZE;
+	u32 log_max_msg_size;
+	u32 max_msg_size;
 	u64 rq_size = SZ_2M;
 	u32 max_recv_wr;
 	int err;
@@ -1534,6 +1535,12 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
 	}
 
 	mdev = mvdev->mdev;
+	log_max_msg_size = MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size);
+	max_msg_size = (1ULL << log_max_msg_size);
+	/* The RQ must hold at least 4 WQEs/messages for successful QP creation */
+	if (rq_size < 4 * max_msg_size)
+		rq_size = 4 * max_msg_size;
+
 	memset(tracker, 0, sizeof(*tracker));
 	tracker->uar = mlx5_get_uars_page(mdev);
 	if (IS_ERR(tracker->uar)) {
@@ -1623,25 +1630,41 @@ set_report_output(u32 size, int index, struct mlx5_vhca_qp *qp,
 {
 	u32 entry_size = MLX5_ST_SZ_BYTES(page_track_report_entry);
 	u32 nent = size / entry_size;
+	u32 nent_in_page;
+	u32 nent_to_set;
 	struct page *page;
+	u32 page_offset;
+	u32 page_index;
+	u32 buf_offset;
+	void *kaddr;
 	u64 addr;
 	u64 *buf;
 	int i;
 
-	if (WARN_ON(index >= qp->recv_buf.npages ||
+	buf_offset = index * qp->max_msg_size;
+	if (WARN_ON(buf_offset + size >= qp->recv_buf.npages * PAGE_SIZE ||
 		    (nent > qp->max_msg_size / entry_size)))
 		return;
 
-	page = qp->recv_buf.page_list[index];
-	buf = kmap_local_page(page);
-	for (i = 0; i < nent; i++) {
-		addr = MLX5_GET(page_track_report_entry, buf + i,
-				dirty_address_low);
-		addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
-				      dirty_address_high) << 32;
-		iova_bitmap_set(dirty, addr, qp->tracked_page_size);
-	}
-	kunmap_local(buf);
+	do {
+		page_index = buf_offset / PAGE_SIZE;
+		page_offset = buf_offset % PAGE_SIZE;
+		nent_in_page = (PAGE_SIZE - page_offset) / entry_size;
+		page = qp->recv_buf.page_list[page_index];
+		kaddr = kmap_local_page(page);
+		buf = kaddr + page_offset;
+		nent_to_set = min(nent, nent_in_page);
+		for (i = 0; i < nent_to_set; i++) {
+			addr = MLX5_GET(page_track_report_entry, buf + i,
+					dirty_address_low);
+			addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
+					      dirty_address_high) << 32;
+			iova_bitmap_set(dirty, addr, qp->tracked_page_size);
+		}
+		kunmap_local(kaddr);
+		buf_offset += (nent_to_set * entry_size);
+		nent -= nent_to_set;
+	} while (nent);
 }
 
 static void
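
[Editor's note] To make the new chunking concrete, here is a standalone walk of the arithmetic from the reworked set_report_output(), assuming 4K pages and a 16-byte report entry as a stand-in for MLX5_ST_SZ_BYTES(page_track_report_entry); an 8K message at index 1 then spans receive-buffer pages 2 and 3.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096U
#define ENTRY_SIZE 16U	/* assumed report entry size, for illustration */

/* Mirror of the chunked traversal: consume the message page by page. */
static void walk_report(uint32_t index, uint32_t size, uint32_t max_msg_size)
{
	uint32_t nent = size / ENTRY_SIZE;
	uint32_t buf_offset = index * max_msg_size;

	do {
		uint32_t page_index = buf_offset / PAGE_SIZE;
		uint32_t page_offset = buf_offset % PAGE_SIZE;
		uint32_t nent_in_page = (PAGE_SIZE - page_offset) / ENTRY_SIZE;
		uint32_t nent_to_set = nent < nent_in_page ? nent : nent_in_page;

		/* The driver kmaps page_list[page_index] and decodes
		 * nent_to_set entries here.
		 */
		printf("page %u, offset %u: %u entries\n",
		       page_index, page_offset, nent_to_set);
		buf_offset += nent_to_set * ENTRY_SIZE;
		nent -= nent_to_set;
	} while (nent);
}

int main(void)
{
	/* 8K message in slot 1 with an 8K max_msg_size: pages 2 and 3. */
	walk_report(1, 8192, 8192);
	return 0;
}

This prints two iterations of 256 entries each, which is exactly why the old single kmap_local_page(page_list[index]) could no longer address a whole message once max_msg_size may exceed PAGE_SIZE.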
