nvmet-rdma: Don't use the inline buffer in order to avoid allocation for small reads

Under extreme conditions this could cause data corruption: we repost the
buffer to the receive queue and then post the same buffer for the device to
send. If shared receive queues are in use, the device might write to the
buffer before it sends it (there is no ordering between the send and receive
queues). Without SRQs we probably won't hit this unless the host misbehaves
and sends more than we allowed it to, but relying on that is not a good idea.
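The hazard described above can be illustrated with a toy model (this is not
kernel code; the class and function names are invented for illustration):
a reply buffer is reposted to a shared receive queue before the send engine
has consumed it, so an incoming command can overwrite the in-flight reply.

```python
class SharedReceiveQueue:
    """Toy SRQ: any posted buffer may be filled by the device at any time,
    with no ordering guarantee relative to the send queue."""
    def __init__(self):
        self.posted = []

    def post_recv(self, buf):
        self.posted.append(buf)

    def device_delivers(self, payload):
        # The device picks a posted buffer and fills it immediately.
        buf = self.posted.pop(0)
        buf[:len(payload)] = payload
        return buf

def broken_reply_path(srq, reply_buf, reply_len):
    # Bug being modeled: repost the buffer for receive *before*
    # the send hardware has read it.
    srq.post_recv(reply_buf)
    # The device happens to deliver a new command into that buffer first...
    srq.device_delivers(b"NEWCMD")
    # ...so the send engine now transmits corrupted data.
    return bytes(reply_buf[:reply_len])

srq = SharedReceiveQueue()
buf = bytearray(16)
buf[:5] = b"REPLY"
sent = broken_reply_path(srq, buf, 5)
print(sent)  # no longer b"REPLY": the receive overwrote the reply
```

The fix in the diff below sidesteps this by always allocating a separate
SGL for the data transfer instead of reusing the inline receive buffer.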

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg committed Aug 4, 2016
1 parent d8f7750 commit 40e64e0
1 changed file: drivers/nvme/target/rdma.c (4 additions, 9 deletions)
@@ -616,15 +616,10 @@ static u16 nvmet_rdma_map_sgl_keyed(struct nvmet_rdma_rsp *rsp,
 	if (!len)
 		return 0;
 
-	/* use the already allocated data buffer if possible */
-	if (len <= NVMET_RDMA_INLINE_DATA_SIZE && rsp->queue->host_qid) {
-		nvmet_rdma_use_inline_sg(rsp, len, 0);
-	} else {
-		status = nvmet_rdma_alloc_sgl(&rsp->req.sg, &rsp->req.sg_cnt,
-				len);
-		if (status)
-			return status;
-	}
+	status = nvmet_rdma_alloc_sgl(&rsp->req.sg, &rsp->req.sg_cnt,
+			len);
+	if (status)
+		return status;
 
 	ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num,
 			rsp->req.sg, rsp->req.sg_cnt, 0, addr, key,