mptcp: do not queue excessive data on subflows
The current packet scheduler can enqueue up to sndbuf
data on each subflow. If the send buffer is large and
the subflows are not symmetric, this could lead to
suboptimal aggregate bandwidth utilization.

Limit the amount of queued data to the maximum send
window.

Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Paolo Abeni authored and Jakub Kicinski committed Jan 23, 2021
1 parent 5cf92bb commit ec369c3
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions net/mptcp/protocol.c
@@ -1389,7 +1389,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 			continue;
 
 		nr_active += !subflow->backup;
-		if (!sk_stream_memory_free(subflow->tcp_sock))
+		if (!sk_stream_memory_free(subflow->tcp_sock) || !tcp_sk(ssk)->snd_wnd)
 			continue;
 
 		pace = READ_ONCE(ssk->sk_pacing_rate);
@@ -1415,7 +1415,7 @@ static struct sock *mptcp_subflow_get_send(struct mptcp_sock *msk)
 	if (send_info[0].ssk) {
 		msk->last_snd = send_info[0].ssk;
 		msk->snd_burst = min_t(int, MPTCP_SEND_BURST_SIZE,
-				       sk_stream_wspace(msk->last_snd));
+				       tcp_sk(msk->last_snd)->snd_wnd);
 		return msk->last_snd;
 	}

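For illustration only, the standalone sketch below mimics the two hunks above with hypothetical stand-ins (struct subflow_state, pick_burst() and the MPTCP_SEND_BURST_SIZE value are not the kernel's definitions): a subflow with no free memory or a zero TCP send window is skipped, and the burst queued on the selected subflow is capped at its send window rather than at the free send-buffer space.

/* Standalone sketch, not kernel code: illustrates the burst-limiting idea
 * from this commit using hypothetical types and a stand-in constant. */
#include <stdio.h>

/* Stand-in value; the kernel defines its own MPTCP_SEND_BURST_SIZE in
 * net/mptcp/protocol.c. */
#define MPTCP_SEND_BURST_SIZE (1 << 16)

/* Hypothetical stand-in for the per-subflow state the scheduler examines. */
struct subflow_state {
	unsigned int snd_wnd;    /* peer-advertised TCP send window, in bytes */
	int stream_memory_free;  /* non-zero if the subflow can queue more data */
};

/* Return how many bytes to burst on this subflow, or 0 to skip it. */
static int pick_burst(const struct subflow_state *sf)
{
	/* First hunk: a subflow with no free memory or a zero send window
	 * cannot make progress right now, so it is not selected. */
	if (!sf->stream_memory_free || !sf->snd_wnd)
		return 0;

	/* Second hunk: cap the queued burst at the send window instead of
	 * at the (possibly much larger) free send-buffer space. */
	return sf->snd_wnd < MPTCP_SEND_BURST_SIZE ?
	       (int)sf->snd_wnd : MPTCP_SEND_BURST_SIZE;
}

int main(void)
{
	struct subflow_state slow = { .snd_wnd = 4380,   .stream_memory_free = 1 };
	struct subflow_state fast = { .snd_wnd = 262144, .stream_memory_free = 1 };

	/* The slow subflow is limited by its window, the fast one by the
	 * scheduler's burst size. */
	printf("slow subflow burst: %d\n", pick_burst(&slow));
	printf("fast subflow burst: %d\n", pick_burst(&fast));
	return 0;
}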
