Merge branch 'net-page_pool-add-netlink-based-introspection'
Jakub Kicinski says:

====================
net: page_pool: add netlink-based introspection

We recently started to deploy newer kernels / drivers at Meta,
making significant use of page pools for the first time.
We immediately ran into page pool leaks, both real ones and false-positive
warnings. As Eric pointed out/predicted, there's no guarantee that
applications will read / close their sockets, so a page pool page
may be stuck in a socket (but not leaked) forever. This happens
a lot in our fleet. Most of these are obviously due to application
bugs, but we should not be printing kernel warnings for minor
application resource leaks.

Conversely, page pool memory may get leaked at runtime, and
we have no way to detect / track that unless someone reconfigures
the NIC and destroys the page pools which leaked the pages.

The solution presented here is to expose the memory use of page
pools via netlink. This allows for continuous monitoring of memory
used by page pools, regardless of whether they have been destroyed.
The sample in patch 15 can print the memory use and recycling
efficiency:

$ ./page-pool
    eth0[2]	page pools: 10 (zombies: 0)
		refs: 41984 bytes: 171966464 (refs: 0 bytes: 0)
		recycling: 90.3% (alloc: 656:397681 recycle: 89652:270201)

v4:
 - use dev_net(netdev)->loopback_dev
 - extend inflight doc
v3: https://lore.kernel.org/all/20231122034420.1158898-1-kuba@kernel.org/
 - ID is still here, can't decide if it matters
 - rename destroyed -> detach-time, good enough?
 - fix build for netsec
v2: https://lore.kernel.org/r/20231121000048.789613-1-kuba@kernel.org
 - hopefully fix build with PAGE_POOL=n
v1: https://lore.kernel.org/all/20231024160220.3973311-1-kuba@kernel.org/
 - The main change compared to the RFC is that the API now exposes
   outstanding references and byte counts even for "live" page pools.
   The warning is no longer printed if the page pool is accessible via netlink.
RFC: https://lore.kernel.org/all/20230816234303.3786178-1-kuba@kernel.org/
====================

Link: https://lore.kernel.org/r/20231126230740.2148636-1-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Paolo Abeni committed Nov 28, 2023
2 parents a214724 + 637567e commit a379972
Showing 25 changed files with 1,574 additions and 33 deletions.
172 changes: 172 additions & 0 deletions Documentation/netlink/specs/netdev.yaml
@@ -86,6 +86,112 @@ attribute-sets:
See Documentation/networking/xdp-rx-metadata.rst for more details.
type: u64
enum: xdp-rx-metadata
-
name: page-pool
attributes:
-
name: id
doc: Unique ID of a Page Pool instance.
type: uint
checks:
min: 1
max: u32-max
-
name: ifindex
doc: |
ifindex of the netdev to which the pool belongs.
May be reported as 0 if the page pool was allocated for a netdev
which got destroyed already (page pools may outlast their netdevs
because they wait for all memory to be returned).
type: u32
checks:
min: 1
max: s32-max
-
name: napi-id
doc: Id of NAPI using this Page Pool instance.
type: uint
checks:
min: 1
max: u32-max
-
name: inflight
type: uint
doc: |
Number of outstanding references to this page pool (allocated
but yet to be freed pages). Allocated pages may be held in
socket receive queues, driver receive ring, page pool recycling
ring, the page pool cache, etc.
-
name: inflight-mem
type: uint
doc: |
Amount of memory held by inflight pages.
-
name: detach-time
type: uint
doc: |
Seconds in CLOCK_BOOTTIME of when Page Pool was detached by
the driver. Once detached Page Pool can no longer be used to
allocate memory.
Page Pools wait for all the memory allocated from them to be freed
before truly disappearing. "Detached" Page Pools cannot be
"re-attached", they are just waiting to disappear.
Attribute is absent if Page Pool has not been detached, and
can still be used to allocate new memory.
-
name: page-pool-info
subset-of: page-pool
attributes:
-
name: id
-
name: ifindex
-
name: page-pool-stats
doc: |
Page pool statistics, see docs for struct page_pool_stats
for information about individual statistics.
attributes:
-
name: info
doc: Page pool identifying information.
type: nest
nested-attributes: page-pool-info
-
name: alloc-fast
type: uint
value: 8 # reserve some attr ids in case we need more metadata later
-
name: alloc-slow
type: uint
-
name: alloc-slow-high-order
type: uint
-
name: alloc-empty
type: uint
-
name: alloc-refill
type: uint
-
name: alloc-waive
type: uint
-
name: recycle-cached
type: uint
-
name: recycle-cache-full
type: uint
-
name: recycle-ring
type: uint
-
name: recycle-ring-full
type: uint
-
name: recycle-released-refcnt
type: uint

operations:
list:
@@ -120,8 +226,74 @@ operations:
doc: Notification about device configuration being changed.
notify: dev-get
mcgrp: mgmt
-
name: page-pool-get
doc: |
Get / dump information about Page Pools.
(Only Page Pools associated with a net_device can be listed.)
attribute-set: page-pool
do:
request:
attributes:
- id
reply: &pp-reply
attributes:
- id
- ifindex
- napi-id
- inflight
- inflight-mem
- detach-time
dump:
reply: *pp-reply
config-cond: page-pool
-
name: page-pool-add-ntf
doc: Notification about page pool appearing.
notify: page-pool-get
mcgrp: page-pool
config-cond: page-pool
-
name: page-pool-del-ntf
doc: Notification about page pool disappearing.
notify: page-pool-get
mcgrp: page-pool
config-cond: page-pool
-
name: page-pool-change-ntf
doc: Notification about page pool configuration being changed.
notify: page-pool-get
mcgrp: page-pool
config-cond: page-pool
-
name: page-pool-stats-get
doc: Get page pool statistics.
attribute-set: page-pool-stats
do:
request:
attributes:
- info
reply: &pp-stats-reply
attributes:
- info
- alloc-fast
- alloc-slow
- alloc-slow-high-order
- alloc-empty
- alloc-refill
- alloc-waive
- recycle-cached
- recycle-cache-full
- recycle-ring
- recycle-ring-full
- recycle-released-refcnt
dump:
reply: *pp-stats-reply
config-cond: page-pool-stats

mcast-groups:
list:
-
name: mgmt
-
name: page-pool
10 changes: 8 additions & 2 deletions Documentation/networking/page_pool.rst
@@ -41,6 +41,11 @@ Architecture overview
| Fast cache | | ptr-ring cache |
+-----------------+ +------------------+
Monitoring
==========
Information about page pools on the system can be accessed via the netdev
genetlink family (see Documentation/netlink/specs/netdev.yaml).
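
For example, pools can be dumped with the in-tree YNL CLI (a sketch assuming
the tool's path and flags in this tree; run from the kernel source root)::

    $ ./tools/net/ynl/cli.py \
          --spec Documentation/netlink/specs/netdev.yaml \
          --dump page-pool-get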

API interface
=============
The number of pools created **must** match the number of hardware queues
@@ -107,8 +112,9 @@ page_pool_get_stats() and structures described below are available.
It takes a pointer to a ``struct page_pool`` and a pointer to a struct
page_pool_stats allocated by the caller.

The API will fill in the provided struct page_pool_stats with
statistics about the page_pool.
Older drivers expose page pool statistics via ethtool or debugfs.
The same statistics are accessible via the netlink netdev family
in a driver-independent fashion.
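
A short sketch of the driver-side flow (the ``priv``/``rings`` fields are
hypothetical; page_pool_get_stats() adds each pool's counters into the
caller-provided struct, so it can be called once per pool to aggregate)::

    struct page_pool_stats stats = {};
    int i;

    for (i = 0; i < priv->num_rx_rings; i++)
            page_pool_get_stats(priv->rings[i].page_pool, &stats);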

.. kernel-doc:: include/net/page_pool/types.h
:identifiers: struct page_pool_recycle_stats
1 change: 1 addition & 0 deletions drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3331,6 +3331,7 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
pp.pool_size += bp->rx_ring_size;
pp.nid = dev_to_node(&bp->pdev->dev);
pp.napi = &rxr->bnapi->napi;
pp.netdev = bp->dev;
pp.dev = &bp->pdev->dev;
pp.dma_dir = bp->rx_dir;
pp.max_len = PAGE_SIZE;
1 change: 1 addition & 0 deletions drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -902,6 +902,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
pp_params.nid = node;
pp_params.dev = rq->pdev;
pp_params.napi = rq->cq.napi;
pp_params.netdev = rq->netdev;
pp_params.dma_dir = rq->buff.map_dir;
pp_params.max_len = PAGE_SIZE;

1 change: 1 addition & 0 deletions drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -2137,6 +2137,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
pprm.pool_size = RX_BUFFERS_PER_QUEUE;
pprm.nid = gc->numa_node;
pprm.napi = &rxq->rx_cq.napi;
pprm.netdev = rxq->ndev;

rxq->page_pool = page_pool_create(&pprm);

2 changes: 2 additions & 0 deletions drivers/net/ethernet/socionext/netsec.c
@@ -1302,6 +1302,8 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
.offset = NETSEC_RXBUF_HEADROOM,
.max_len = NETSEC_RX_BUF_SIZE,
.napi = &priv->napi,
.netdev = priv->ndev,
};
int i, err;

20 changes: 20 additions & 0 deletions include/linux/list.h
@@ -1119,6 +1119,26 @@ static inline void hlist_move_list(struct hlist_head *old,
old->first = NULL;
}

/**
* hlist_splice_init() - move all entries from one list to another
* @from: hlist_head from which entries will be moved
* @last: last entry on the @from list
* @to: hlist_head to which entries will be moved
*
* @to can be empty, @from must contain at least @last.
*/
static inline void hlist_splice_init(struct hlist_head *from,
struct hlist_node *last,
struct hlist_head *to)
{
if (to->first)
to->first->pprev = &last->next;
last->next = to->first;
to->first = from->first;
from->first->pprev = &to->first;
from->first = NULL;
}
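
/*
 * Usage sketch (illustrative only, not part of this change): move all
 * entries of "from" onto the head of "to". The caller must locate the
 * last node on "from" first, and "from" must be non-empty:
 *
 *	if (!hlist_empty(&from)) {
 *		struct hlist_node *last = from.first;
 *
 *		while (last->next)
 *			last = last->next;
 *		hlist_splice_init(&from, last, &to);
 *	}
 */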

#define hlist_entry(ptr, type, member) container_of(ptr,type,member)

#define hlist_for_each(pos, head) \
4 changes: 4 additions & 0 deletions include/linux/netdevice.h
@@ -2447,6 +2447,10 @@ struct net_device {
#if IS_ENABLED(CONFIG_DPLL)
struct dpll_pin *dpll_pin;
#endif
#if IS_ENABLED(CONFIG_PAGE_POOL)
/** @page_pools: page pools created for this netdevice */
struct hlist_head page_pools;
#endif
};
#define to_net_dev(d) container_of(d, struct net_device, dev)

2 changes: 2 additions & 0 deletions include/linux/poison.h
@@ -83,6 +83,8 @@

/********** net/core/skbuff.c **********/
#define SKB_LIST_POISON_NEXT ((void *)(0x800 + POISON_POINTER_DELTA))
/********** net/ **********/
#define NET_PTR_POISON ((void *)(0x801 + POISON_POINTER_DELTA))

/********** kernel/bpf/ **********/
#define BPF_PTR_POISON ((void *)(0xeB9FUL + POISON_POINTER_DELTA))
8 changes: 2 additions & 6 deletions include/net/page_pool/helpers.h
@@ -55,16 +55,12 @@
#include <net/page_pool/types.h>

#ifdef CONFIG_PAGE_POOL_STATS
/* Deprecated driver-facing API, use netlink instead */
int page_pool_ethtool_stats_get_count(void);
u8 *page_pool_ethtool_stats_get_strings(u8 *data);
u64 *page_pool_ethtool_stats_get(u64 *data, void *stats);

/*
* Drivers that wish to harvest page pool stats and report them to users
* (perhaps via ethtool, debugfs, or another mechanism) can allocate a
* struct page_pool_stats call page_pool_get_stats to get stats for the specified pool.
*/
bool page_pool_get_stats(struct page_pool *pool,
bool page_pool_get_stats(const struct page_pool *pool,
struct page_pool_stats *stats);
#else
static inline int page_pool_ethtool_stats_get_count(void)
10 changes: 10 additions & 0 deletions include/net/page_pool/types.h
@@ -5,6 +5,7 @@

#include <linux/dma-direction.h>
#include <linux/ptr_ring.h>
#include <linux/types.h>

#define PP_FLAG_DMA_MAP BIT(0) /* Should page_pool do the DMA
* map/unmap
@@ -48,6 +49,7 @@ struct pp_alloc_cache {
* @pool_size: size of the ptr_ring
* @nid: NUMA node id to allocate from pages from
* @dev: device, for DMA pre-mapping purposes
* @netdev: netdev this pool will serve (leave as NULL if none or multiple)
* @napi: NAPI which is the sole consumer of pages, otherwise NULL
* @dma_dir: DMA mapping direction
* @max_len: max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
@@ -66,6 +68,7 @@ struct page_pool_params {
unsigned int offset;
);
struct_group_tagged(page_pool_params_slow, slow,
struct net_device *netdev;
/* private: used by test code only */
void (*init_callback)(struct page *page, void *arg);
void *init_arg;
@@ -187,6 +190,13 @@ struct page_pool {

/* Slow/Control-path information follows */
struct page_pool_params_slow slow;
/* User-facing fields, protected by page_pools_lock */
struct {
struct hlist_node list;
u64 detach_time;
u32 napi_id;
u32 id;
} user;
};

struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
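
A minimal creation sketch pulling the new fields together (mirrors the driver
hunks above; "netdev", "pdev" and "rq" are hypothetical driver variables):

	struct page_pool_params pp_params = { 0 };
	struct page_pool *pool;

	pp_params.pool_size = 1024;
	pp_params.nid = NUMA_NO_NODE;
	pp_params.dev = &pdev->dev;		/* for DMA mapping */
	pp_params.napi = &rq->napi;		/* sole NAPI consumer, else NULL */
	pp_params.netdev = netdev;		/* lists the pool under this netdev */
	pp_params.dma_dir = DMA_FROM_DEVICE;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);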
