Merge branch 'Handle-multiple-received-packets-at-each-stage'
Edward Cree says:

====================
Handle multiple received packets at each stage

This patch series adds the capability for the network stack to receive a
 list of packets and process them as a unit, rather than handling each
 packet singly in sequence.  This is done by factoring out the existing
 datapath code at each layer and wrapping it in list handling code.

The motivation for this change is twofold:
* Instruction cache locality.  Currently, running the entire network
  stack receive path on a packet involves more code than will fit in the
  lowest-level icache, meaning that when the next packet is handled, the
  code has to be reloaded from more distant caches.  By handling packets
  in "row-major order", we ensure that the code at each layer is hot for
  most of the list.  (There is a corresponding downside in _data_ cache
  locality, since we are now touching every packet at every layer, but in
  practice there is easily enough room in dcache to hold one cacheline of
  each of the 64 packets in a NAPI poll.)
* Reduction of indirect calls.  Owing to Spectre mitigations, indirect
  function calls are now more expensive than ever; they are also heavily
  used in the network stack's architecture (see [1]).  By replacing 64
  indirect calls to the next-layer per-packet function with a single
  indirect call to the next-layer list function, we can save CPU cycles
  (see the sketch below).
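
As a rough sketch (invented function names such as l2_rcv()/l2_rcv_list(),
 not code from this series), the change in traversal order looks like this:

	/* Before ("column-major"): each packet traverses the whole stack
	 * before the next one starts, so each layer's code has gone cold
	 * in the icache by the time it is needed again, and every layer
	 * boundary costs one indirect call per packet.
	 */
	for (i = 0; i < n_pkts; i++)
		l2_rcv(pkts[i]);	/* calls l3_rcv(), then l4_rcv() */

	/* After ("row-major"): each layer runs over the whole batch while
	 * its code is hot, and the per-packet indirect calls at each layer
	 * boundary collapse into one indirect call on the list.
	 */
	l2_rcv_list(&rx_list);		/* calls l3_rcv_list() once, etc. */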

Drivers pass an SKB list to the stack at the end of the NAPI poll; this
 gives a natural batch size (the NAPI poll weight) and avoids waiting at
 the software level for further packets to make a larger batch (which
 would add latency).  It also means that the batch size is automatically
 tuned by the existing interrupt moderation mechanism.
The stack then runs each layer of processing over all the packets in the
 list before proceeding to the next layer.  Where the 'next layer' (or
 the context in which it must run) differs among the packets, the stack
 splits the list; this 'late demux' means that packets which differ only
 in later headers (e.g. same L2/L3 but different L4) can traverse the
 early part of the stack together.
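
The split itself might look roughly like this (a sketch modelled on the
 series' behaviour, not its exact code: classify() is a hypothetical
 stand-in for the real demux step, and list_cut_before() is the helper
 added to include/linux/list.h below):

	static void deliver_list(struct list_head *head,
				 struct net_device *orig_dev)
	{
		struct packet_type *pt_curr, *pt_prev = NULL;
		struct sk_buff *skb, *next;
		struct list_head sublist;

		list_for_each_entry_safe(skb, next, head, list) {
			pt_curr = classify(skb);	/* hypothetical demux */
			if (pt_prev && pt_curr != pt_prev) {
				/* All packets before @skb share pt_prev:
				 * split them off and deliver them with a
				 * single indirect call.
				 */
				list_cut_before(&sublist, head, &skb->list);
				pt_prev->list_func(&sublist, pt_prev, orig_dev);
			}
			pt_prev = pt_curr;
		}
		if (pt_prev)
			pt_prev->list_func(head, pt_prev, orig_dev);
	}
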
Also, where the next layer is not (yet) list-aware, the stack can revert
 to calling the rest of the stack in a loop; this allows gradual/creeping
 listification, with no 'flag day' patch needed to listify everything.
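
Continuing the sketch above, the fallback for a layer with no list
 function is just a loop over the existing per-packet entry point:

	/* Fallback (sketch): the next layer is not list-aware, so unchain
	 * each skb and call the existing per-packet handler in sequence.
	 */
	if (!pt_prev->list_func) {
		struct sk_buff *skb, *next;

		list_for_each_entry_safe(skb, next, head, list) {
			list_del(&skb->list);
			pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
		}
	}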

Patches 1-2 simply place received packets on a list during the event
 processing loop on the sfc EF10 architecture, then call the normal stack
 for each packet singly at the end of the NAPI poll.  (Analogues of patch
 #2 for other NIC drivers should be fairly straightforward.)
Patches 3-9 extend the list processing as far as the IP receive handler.

Patches 1-2 alone give about a 10% improvement in packet rate in the
 baseline test; adding patches 3-9 raises this to around 25%.

Performance measurements were made with NetPerf UDP_STREAM, using 1-byte
 packets and a single core to handle interrupts on the RX side; this was
 in order to measure as simply as possible the packet rate handled by a
 single core.  Figures are in Mbit/s; since each 1-byte packet carries
 8 bits of payload, divide by 8 to obtain Mpps (e.g. 6.91 Mbit/s ≈ 0.86
 Mpps).  The
 setup was tuned for maximum reproducibility, rather than raw performance.
 Full details and more results (both with and without retpolines) from a
 previous version of the patch series are presented in [2].

The baseline test uses four streams, and multiple RXQs all bound to a
 single CPU (the netperf binary is bound to a neighbouring CPU).  These
 tests were run with retpolines.
net-next: 6.91 Mb/s (datum)
 after 9: 8.46 Mb/s (+22.5%)
Note however that these results are not robust; changes in the parameters
 of the test sometimes shrink the gain to single-digit percentages.  For
 instance, when using only a single RXQ, only a 4% gain was seen.

One test variation was the use of software filtering/firewall rules.
 Adding a single iptables rule (UDP port drop on a port range not matching
 the test traffic), thus making the netfilter hook have work to do,
 reduced baseline performance but showed a similar gain from the patches:
net-next: 5.02 Mb/s (datum)
 after 9: 6.78 Mb/s (+35.1%)

Similarly, testing with a set of TC flower filters (kindly supplied by
 Cong Wang) gave the following:
net-next: 6.83 Mb/s (datum)
 after 9: 8.86 Mb/s (+29.7%)

These data suggest that the batching approach remains effective in the
 presence of software filtering and switching rules, and perhaps even
 improves the
 performance of those rules by allowing them and their codepaths to stay
 in cache between packets.

Changes from v3:
* Fixed build error when CONFIG_NETFILTER=n (thanks kbuild).

Changes from v2:
* Used standard list handling (and skb->list) instead of the skb-queue
  functions (that use skb->next, skb->prev).
  - As part of this, changed from a "dequeue, process, enqueue" model to
    using list_for_each_safe, list_del, and (new) list_cut_before.
* Altered __netif_receive_skb_core() changes in patch 6 as per Willem de
  Bruijn's suggestions (separate **ppt_prev from *pt_prev; renaming).
* Removed patches to Generic XDP, since they were producing no benefit.
  I may revisit them later.
* Removed RFC tags.

Changes from v1:
* Rebased across 2 years' net-next movement (surprisingly straightforward).
  - Added Generic XDP handling to netif_receive_skb_list_internal()
  - Dealt with changes to PFMEMALLOC setting APIs
* General cleanup of code and comments.
* Skipped function calls for empty lists at various points in the stack
  (patch #9).
* Added listified Generic XDP handling (patches 10-12), though it doesn't
  seem to help (see above).
* Extended testing to cover software firewalls / netfilter etc.

[1] http://vger.kernel.org/netconf2018_files/DavidMiller_netconf2018.pdf
[2] http://vger.kernel.org/netconf2018_files/EdwardCree_netconf2018.pdf
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller committed Jul 4, 2018
2 parents 2bdea15 + b9f463d commit 2d1b138
Showing 11 changed files with 360 additions and 16 deletions.
12 changes: 12 additions & 0 deletions drivers/net/ethernet/sfc/efx.c
@@ -264,11 +264,17 @@ static int efx_check_disabled(struct efx_nic *efx)
 static int efx_process_channel(struct efx_channel *channel, int budget)
 {
 	struct efx_tx_queue *tx_queue;
+	struct list_head rx_list;
 	int spent;
 
 	if (unlikely(!channel->enabled))
 		return 0;
 
+	/* Prepare the batch receive list */
+	EFX_WARN_ON_PARANOID(channel->rx_list != NULL);
+	INIT_LIST_HEAD(&rx_list);
+	channel->rx_list = &rx_list;
+
 	efx_for_each_channel_tx_queue(tx_queue, channel) {
 		tx_queue->pkts_compl = 0;
 		tx_queue->bytes_compl = 0;
@@ -291,6 +297,10 @@ static int efx_process_channel(struct efx_channel *channel, int budget)
 		}
 	}
 
+	/* Receive any packets we queued up */
+	netif_receive_skb_list(channel->rx_list);
+	channel->rx_list = NULL;
+
 	return spent;
 }
 
@@ -555,6 +565,8 @@ static int efx_probe_channel(struct efx_channel *channel)
 			goto fail;
 	}
 
+	channel->rx_list = NULL;
+
 	return 0;
 
 fail:
3 changes: 3 additions & 0 deletions drivers/net/ethernet/sfc/net_driver.h
@@ -448,6 +448,7 @@ enum efx_sync_events_state {
  *	__efx_rx_packet(), or zero if there is none
  * @rx_pkt_index: Ring index of first buffer for next packet to be delivered
  *	by __efx_rx_packet(), if @rx_pkt_n_frags != 0
+ * @rx_list: list of SKBs from current RX, awaiting processing
  * @rx_queue: RX queue for this channel
  * @tx_queue: TX queues for this channel
  * @sync_events_state: Current state of sync events on this channel
@@ -500,6 +501,8 @@ struct efx_channel {
 	unsigned int rx_pkt_n_frags;
 	unsigned int rx_pkt_index;
 
+	struct list_head *rx_list;
+
 	struct efx_rx_queue rx_queue;
 	struct efx_tx_queue tx_queue[EFX_TXQ_TYPES];
 
7 changes: 6 additions & 1 deletion drivers/net/ethernet/sfc/rx.c
@@ -634,7 +634,12 @@ static void efx_rx_deliver(struct efx_channel *channel, u8 *eh,
 		return;
 
 	/* Pass the packet up */
-	netif_receive_skb(skb);
+	if (channel->rx_list != NULL)
+		/* Add to list, will pass up later */
+		list_add_tail(&skb->list, channel->rx_list);
+	else
+		/* No list, so pass it up now */
+		netif_receive_skb(skb);
 }
 
 /* Handle a received packet.  Second half: Touches packet payload. */
30 changes: 30 additions & 0 deletions include/linux/list.h
@@ -285,6 +285,36 @@ static inline void list_cut_position(struct list_head *list,
 	__list_cut_position(list, head, entry);
 }
 
+/**
+ * list_cut_before - cut a list into two, before given entry
+ * @list: a new list to add all removed entries
+ * @head: a list with entries
+ * @entry: an entry within head, could be the head itself
+ *
+ * This helper moves the initial part of @head, up to but
+ * excluding @entry, from @head to @list.  You should pass
+ * in @entry an element you know is on @head.  @list should
+ * be an empty list or a list you do not care about losing
+ * its data.
+ * If @entry == @head, all entries on @head are moved to
+ * @list.
+ */
+static inline void list_cut_before(struct list_head *list,
+				   struct list_head *head,
+				   struct list_head *entry)
+{
+	if (head->next == entry) {
+		INIT_LIST_HEAD(list);
+		return;
+	}
+	list->next = head->next;
+	list->next->prev = list;
+	list->prev = entry->prev;
+	list->prev->next = list;
+	head->next = entry;
+	entry->prev = head;
+}
+
 static inline void __list_splice(const struct list_head *list,
 				 struct list_head *prev,
 				 struct list_head *next)
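
For reference, a minimal illustrative use of the new helper ('head' is
 an existing list and 'c' an element assumed to be on it):

	/* With head = [A, B, C, D] and entry = &c->list, this leaves
	 * head = [C, D] and sublist = [A, B].  Compare list_cut_position(),
	 * which cuts up to and *including* its entry.
	 */
	LIST_HEAD(sublist);

	list_cut_before(&sublist, &head, &c->list);
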
4 changes: 4 additions & 0 deletions include/linux/netdevice.h
@@ -2297,6 +2297,9 @@ struct packet_type {
 					 struct net_device *,
 					 struct packet_type *,
 					 struct net_device *);
+	void			(*list_func) (struct list_head *,
+					      struct packet_type *,
+					      struct net_device *);
 	bool			(*id_match)(struct packet_type *ptype,
 					    struct sock *sk);
 	void			*af_packet_priv;
@@ -3477,6 +3480,7 @@ int netif_rx(struct sk_buff *skb);
 int netif_rx_ni(struct sk_buff *skb);
 int netif_receive_skb(struct sk_buff *skb);
 int netif_receive_skb_core(struct sk_buff *skb);
+void netif_receive_skb_list(struct list_head *head);
 gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb);
 void napi_gro_flush(struct napi_struct *napi, bool flush_old);
 struct sk_buff *napi_get_frags(struct napi_struct *napi);
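
With the new member in place, a protocol opts in by setting both entry
 points on its packet_type; the later IPv4 patches in this series wire
 this up in net/ipv4/af_inet.c roughly as follows (a sketch of that
 later change, not part of the hunks shown here):

	static struct packet_type ip_packet_type __read_mostly = {
		.type	   = cpu_to_be16(ETH_P_IP),
		.func	   = ip_rcv,		/* per-packet entry point */
		.list_func = ip_list_rcv,	/* list-aware entry point */
	};
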
22 changes: 22 additions & 0 deletions include/linux/netfilter.h
@@ -288,6 +288,20 @@ NF_HOOK(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk, struct
 	return ret;
 }
 
+static inline void
+NF_HOOK_LIST(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
+	     struct list_head *head, struct net_device *in, struct net_device *out,
+	     int (*okfn)(struct net *, struct sock *, struct sk_buff *))
+{
+	struct sk_buff *skb, *next;
+
+	list_for_each_entry_safe(skb, next, head, list) {
+		int ret = nf_hook(pf, hook, net, sk, skb, in, out, okfn);
+		if (ret != 1)
+			list_del(&skb->list);
+	}
+}
+
 /* Call setsockopt() */
 int nf_setsockopt(struct sock *sk, u_int8_t pf, int optval, char __user *opt,
 		  unsigned int len);
@@ -369,6 +383,14 @@ NF_HOOK(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
 	return okfn(net, sk, skb);
 }
 
+static inline void
+NF_HOOK_LIST(uint8_t pf, unsigned int hook, struct net *net, struct sock *sk,
+	     struct list_head *head, struct net_device *in, struct net_device *out,
+	     int (*okfn)(struct net *, struct sock *, struct sk_buff *))
+{
+	/* nothing to do */
+}
+
 static inline int nf_hook(u_int8_t pf, unsigned int hook, struct net *net,
 			  struct sock *sk, struct sk_buff *skb,
 			  struct net_device *indev, struct net_device *outdev,
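
An nf_hook() verdict of 1 means the packet was accepted; any other
 verdict means the hook has already dropped, queued, or stolen the skb,
 so NF_HOOK_LIST() simply unlinks it and lets the survivors continue as
 one batch.  A caller would use it roughly as follows (a sketch of how
 the IPv4 list receive path later in the series invokes the hook; the
 ip_list_rcv_finish() continuation is an assumption, not shown in this
 commit):

	/* Run the PRE_ROUTING hook across a whole list of IPv4 packets;
	 * packets the hook does not accept are removed from 'head' inside
	 * NF_HOOK_LIST(), and the rest proceed together.
	 */
	NF_HOOK_LIST(NFPROTO_IPV4, NF_INET_PRE_ROUTING, net, NULL,
		     head, dev, NULL, ip_rcv_finish);
	ip_list_rcv_finish(net, NULL, head);	/* assumed next stage */
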
2 changes: 2 additions & 0 deletions include/net/ip.h
@@ -138,6 +138,8 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk,
 			  struct ip_options_rcu *opt);
 int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
 	   struct net_device *orig_dev);
+void ip_list_rcv(struct list_head *head, struct packet_type *pt,
+		 struct net_device *orig_dev);
 int ip_local_deliver(struct sk_buff *skb);
 int ip_mr_input(struct sk_buff *skb);
 int ip_output(struct net *net, struct sock *sk, struct sk_buff *skb);
7 changes: 7 additions & 0 deletions include/trace/events/net.h
@@ -223,6 +223,13 @@ DEFINE_EVENT(net_dev_rx_verbose_template, netif_receive_skb_entry,
 	TP_ARGS(skb)
 );
 
+DEFINE_EVENT(net_dev_rx_verbose_template, netif_receive_skb_list_entry,
+
+	TP_PROTO(const struct sk_buff *skb),
+
+	TP_ARGS(skb)
+);
+
 DEFINE_EVENT(net_dev_rx_verbose_template, netif_rx_entry,
 
 	TP_PROTO(const struct sk_buff *skb),
[Diffs for the remaining three changed files not shown.]
