nvme: enable batched completions of passthrough IO
Now that the normal passthrough end_io path doesn't need the request
anymore, we can kill the explicit blk_mq_free_request() and just pass
back RQ_END_IO_FREE instead. This enables batched completions to free
batches of requests at a time.

This brings passthrough IO performance at least on par with bdev based
O_DIRECT with io_uring. With this and batched allocations, peak performance
goes from 110M IOPS to 122M IOPS. For IRQ-based completions, passthrough is
now also about 10% faster than before, going from ~61M to ~67M IOPS.

Reviewed-by: Anuj Gupta <anuj20.g@samsung.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Co-developed-by: Stefan Roesch <shr@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe committed Sep 30, 2022
1 parent c0a7ba7 commit 851eb78
 drivers/nvme/host/ioctl.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -430,8 +430,7 @@ static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 	else
 		io_uring_cmd_complete_in_task(ioucmd, nvme_uring_task_cb);
 
-	blk_mq_free_request(req);
-	return RQ_END_IO_NONE;
+	return RQ_END_IO_FREE;
 }
 
 static enum rq_end_io_ret nvme_uring_cmd_end_io_meta(struct request *req,
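
For context, a minimal sketch of the two completion styles from a driver's
point of view. The my_end_io_* callback names are hypothetical; the
signature, blk_mq_free_request(), RQ_END_IO_NONE, and RQ_END_IO_FREE are
the real block-layer API used in the diff above:

#include <linux/blk-mq.h>

/* Old style: the callback frees the request itself, so the block
 * layer cannot batch the frees. */
static enum rq_end_io_ret my_end_io_manual(struct request *req,
					   blk_status_t err)
{
	/* ... per-request completion work ... */
	blk_mq_free_request(req);
	return RQ_END_IO_NONE;	/* we already dealt with the request */
}

/* New style: hand the request back to the block layer, which may
 * free it as part of a batch of completions. */
static enum rq_end_io_ret my_end_io_batched(struct request *req,
					    blk_status_t err)
{
	/* ... per-request completion work ... */
	return RQ_END_IO_FREE;	/* please free this request for us */
}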
