netpoll: Optimize skb refilling on critical path
netpoll tries to refill the skb queue on every packet send, regardless
of whether packets are being consumed from the pool or not. This was
particularly problematic when called from printk(), where the refill
would be done while holding the console lock.

Introduce a more intelligent approach to skb queue management. Instead
of constantly attempting to refill the queue, defer refilling to a
workqueue and only schedule the work when a buffer is actually dequeued
from the pool. This change significantly reduces the amount of work
done while the lock is held.

Add a work_struct to the netpoll structure for asynchronous refilling,
and update find_skb() to schedule the refill work only when necessary
(i.e. when an skb is actually dequeued from the pool).
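
A condensed sketch of the resulting flow, assembled from the hunks in
the diff below (illustrative only: the comments are not part of the
patch and the fragments are not a standalone compilable unit):

  static void refill_skbs_work_handler(struct work_struct *work)
  {
          struct netpoll *np = container_of(work, struct netpoll, refill_wq);

          /* Runs in process context from a kworker, off the send path. */
          refill_skbs(np);
  }

  /* __netpoll_setup(): prime the pool once and register the worker. */
  refill_skbs(np);
  INIT_WORK(&np->refill_wq, refill_skbs_work_handler);

  /* find_skb(): fall back to the pool only when alloc_skb() fails, then
   * kick the worker so the pool is topped up asynchronously, outside the
   * console_owner critical section.
   */
  skb = alloc_skb(len, GFP_ATOMIC);
  if (!skb) {
          skb = skb_dequeue(&np->skb_pool);
          schedule_work(&np->refill_wq);
  }

  /* skb_pool_flush() (cleanup path): make sure no refill is in flight. */
  cancel_work_sync(&np->refill_wq);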

These changes have demonstrated a 15% reduction in time spent during
netpoll_send_msg operations, especially when no SKBs are consumed from
the pool.

When SKBs are being dequeued, the improvement is even better, around
70%, mainly because refilling the SKB pool now happens outside of the
critical path, i.e. not while the console_owner lock is held.

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250304-netpoll_refill_v2-v1-1-06e2916a4642@debian.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Breno Leitao authored and Jakub Kicinski committed Mar 8, 2025
1 parent fca9fe1 commit 248f657
Showing 2 changed files with 14 additions and 2 deletions.
1 change: 1 addition & 0 deletions include/linux/netpoll.h
@@ -33,6 +33,7 @@ struct netpoll {
 	u16 local_port, remote_port;
 	u8 remote_mac[ETH_ALEN];
 	struct sk_buff_head skb_pool;
+	struct work_struct refill_wq;
 };
 
 struct netpoll_info {
15 changes: 13 additions & 2 deletions net/core/netpoll.c
@@ -284,12 +284,13 @@ static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
 	struct sk_buff *skb;
 
 	zap_completion_queue();
-	refill_skbs(np);
 repeat:
 
 	skb = alloc_skb(len, GFP_ATOMIC);
-	if (!skb)
+	if (!skb) {
 		skb = skb_dequeue(&np->skb_pool);
+		schedule_work(&np->refill_wq);
+	}
 
 	if (!skb) {
 		if (++count < 10) {
@@ -535,6 +536,7 @@ static void skb_pool_flush(struct netpoll *np)
 {
 	struct sk_buff_head *skb_pool;
 
+	cancel_work_sync(&np->refill_wq);
 	skb_pool = &np->skb_pool;
 	skb_queue_purge_reason(skb_pool, SKB_CONSUMED);
 }
@@ -621,6 +623,14 @@ int netpoll_parse_options(struct netpoll *np, char *opt)
 }
 EXPORT_SYMBOL(netpoll_parse_options);
 
+static void refill_skbs_work_handler(struct work_struct *work)
+{
+	struct netpoll *np =
+		container_of(work, struct netpoll, refill_wq);
+
+	refill_skbs(np);
+}
+
 int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 {
 	struct netpoll_info *npinfo;
@@ -666,6 +676,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
 
 	/* fill up the skb queue */
 	refill_skbs(np);
+	INIT_WORK(&np->refill_wq, refill_skbs_work_handler);
 
 	/* last thing to do is link it to the net device structure */
 	rcu_assign_pointer(ndev->npinfo, npinfo);