io_uring: spin in iopoll() only when reqs are in a single queue
We currently spin in iopoll() when the requests to be iopolled are for
the same file (device), but one device may have multiple hardware
queues. Consider this example:

hw_queue_0     |    hw_queue_1
req(30us)           req(10us)

If we spin on iopolling hw_queue_0 first, the average latency is
(30us + 30us) / 2 = 30us: the request in hw_queue_1 completes at 10us
but is not reaped until hw_queue_0's request completes at 30us. If we
instead poll round robin, the average latency is (30us + 10us) / 2 =
20us, since the request in hw_queue_1 is reaped in time. So it is
better to spin only when all requests are in the same hardware queue.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
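
To make the arithmetic above concrete, here is a minimal user-space
sketch (an illustration only, not kernel code) that reproduces the two
averages from the commit message:

/* Illustration of the commit message's latency math: two hw queues,
 * one request each, completing at 30us and 10us. Not kernel code. */
#include <stdio.h>

int main(void)
{
	unsigned int q0_done = 30, q1_done = 10;  /* completion times, us */

	/*
	 * Spin on queue 0 only: queue 1's request finishes at 10us but
	 * is not reaped until queue 0's request completes at 30us.
	 */
	unsigned int spin_q1 = q0_done > q1_done ? q0_done : q1_done;
	printf("spin on q0:  (%u + %u) / 2 = %uus\n",
	       q0_done, spin_q1, (q0_done + spin_q1) / 2);

	/* Round robin: each request is reaped as soon as it completes. */
	printf("round robin: (%u + %u) / 2 = %uus\n",
	       q0_done, q1_done, (q0_done + q1_done) / 2);
	return 0;
}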
Hao Xu authored and Jens Axboe committed Jun 30, 2021
1 parent 99ebe4e commit 915b3dd
Showing 1 changed file with 14 additions and 6 deletions.
fs/io_uring.c: 14 additions & 6 deletions
@@ -434,7 +434,7 @@ struct io_ring_ctx {
 		struct list_head	iopoll_list;
 		struct hlist_head	*cancel_hash;
 		unsigned		cancel_hash_bits;
-		bool			poll_multi_file;
+		bool			poll_multi_queue;
 	} ____cacheline_aligned_in_smp;
 
 	struct io_restriction		restrictions;
@@ -2314,7 +2314,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events,
 	 * Only spin for completions if we don't have multiple devices hanging
	 * off our complete list, and we're under the requested amount.
	 */
-	spin = !ctx->poll_multi_file && *nr_events < min;
+	spin = !ctx->poll_multi_queue && *nr_events < min;
 
	ret = 0;
	list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) {
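
For orientation, `spin` is handed down to the file's ->iopoll() hook,
where a true value allows the driver to busy-wait for a completion. A
simplified sketch of the surrounding loop in io_do_iopoll() as it
looked in the kernel of this era (paraphrased, not the verbatim
source):

/* Paraphrased from io_do_iopoll() circa v5.13/5.14; simplified. */
list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) {
	struct kiocb *kiocb = &req->rw.kiocb;

	/* Poll the hardware queue this request was submitted to;
	 * 'spin' lets the driver busy-wait for the completion. */
	ret = kiocb->ki_filp->f_op->iopoll(kiocb, spin);
	if (ret < 0)
		break;

	/* Once something has completed, stop spinning so the
	 * remaining entries are not starved. */
	if (ret && spin)
		spin = false;
}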
@@ -2553,14 +2553,22 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
 	 * different devices.
	 */
	if (list_empty(&ctx->iopoll_list)) {
-		ctx->poll_multi_file = false;
-	} else if (!ctx->poll_multi_file) {
+		ctx->poll_multi_queue = false;
+	} else if (!ctx->poll_multi_queue) {
		struct io_kiocb *list_req;
+		unsigned int queue_num0, queue_num1;
 
		list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb,
						inflight_entry);
-		if (list_req->file != req->file)
-			ctx->poll_multi_file = true;
+
+		if (list_req->file != req->file) {
+			ctx->poll_multi_queue = true;
+		} else {
+			queue_num0 = blk_qc_t_to_queue_num(list_req->rw.kiocb.ki_cookie);
+			queue_num1 = blk_qc_t_to_queue_num(req->rw.kiocb.ki_cookie);
+			if (queue_num0 != queue_num1)
+				ctx->poll_multi_queue = true;
+		}
	}
 
	/*
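The queue comparison added above relies on the blk_qc_t cookie that
the block layer returns at submission time: in kernels of this era the
hardware queue index is packed into the cookie's upper bits. A sketch
of the decoding, based on the include/linux/blk_types.h of the period
(simplified here):

/* Based on include/linux/blk_types.h circa v5.13/5.14 (simplified). */
typedef unsigned int blk_qc_t;

#define BLK_QC_T_SHIFT		16
#define BLK_QC_T_INTERNAL	(1U << 31)

static inline unsigned int blk_qc_t_to_queue_num(blk_qc_t cookie)
{
	/* The upper half of the cookie, minus the internal-tag flag,
	 * holds the hardware queue index. */
	return (cookie & ~BLK_QC_T_INTERNAL) >> BLK_QC_T_SHIFT;
}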
