net: fec: handle page_pool_dev_alloc_pages error
The fec_enet_update_cbd function calls page_pool_dev_alloc_pages but did
not handle the case where it returns NULL. There was a WARN_ON(!new_page),
but the code would still proceed to use the NULL pointer and then crash.
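
For context, the pre-fix shape of the allocation path, condensed from the
diff below: WARN_ON() only prints a backtrace and continues, so a failed
allocation still flows into page_pool_get_dma_addr(), which dereferences
the NULL page.

	new_page = page_pool_dev_alloc_pages(rxq->page_pool);
	WARN_ON(!new_page);	/* logs a splat but does not bail out */
	rxq->rx_skb_info[index].page = new_page;

	/* page_pool_get_dma_addr() reads new_page->dma_addr:
	 * a NULL dereference when the allocation failed.
	 */
	phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM;
	bdp->cbd_bufaddr = cpu_to_fec32(phys_addr);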

This case seems somewhat rare, but it can happen when the system is under
memory pressure. One case where I can reproduce it with some frequency is
writing over an smbd share to a SATA HDD attached to an imx6q.

Setting /proc/sys/vm/min_free_kbytes to a higher value also seems to avoid
the problem in my test case. But it still seems wrong that the fec driver
ignores the memory allocation error and can crash.

This commit handles the allocation error by dropping the current packet.
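
The shape of the fix, condensed from the diff below: the old buffer address
is saved before fec_enet_update_cbd() repoints the descriptor at a fresh
page, so the DMA sync still targets the page holding the packet just
received; on allocation failure the packet is counted as dropped and the
descriptor is recycled with its old page still attached.

	cbd_bufaddr = bdp->cbd_bufaddr;	/* save: update_cbd rewrites it */
	if (fec_enet_update_cbd(rxq, bdp, index)) {
		ndev->stats.rx_dropped++;
		goto rx_processing_done;	/* old page stays on the ring */
	}

	dma_sync_single_for_cpu(&fep->pdev->dev,
				fec32_to_cpu(cbd_bufaddr),
				pkt_len, DMA_FROM_DEVICE);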

Fixes: 95698ff ("net: fec: using page pool to manage RX buffers")
Signed-off-by: Kevin Groeneveld <kgroeneveld@lenbrook.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Wei Fang <wei.fang@nxp.com>
Link: https://patch.msgid.link/20250113154846.1765414-1-kgroeneveld@lenbrook.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Kevin Groeneveld authored and Jakub Kicinski committed Jan 15, 2025
1 parent f0d0277 commit 001ba09
Showing 1 changed file with 14 additions and 5 deletions.
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1591,19 +1591,22 @@ static void fec_enet_tx(struct net_device *ndev, int budget)
 		fec_enet_tx_queue(ndev, i, budget);
 }
 
-static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
+static int fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
 				struct bufdesc *bdp, int index)
 {
 	struct page *new_page;
 	dma_addr_t phys_addr;
 
 	new_page = page_pool_dev_alloc_pages(rxq->page_pool);
-	WARN_ON(!new_page);
-	rxq->rx_skb_info[index].page = new_page;
+	if (unlikely(!new_page))
+		return -ENOMEM;
 
+	rxq->rx_skb_info[index].page = new_page;
 	rxq->rx_skb_info[index].offset = FEC_ENET_XDP_HEADROOM;
 	phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM;
 	bdp->cbd_bufaddr = cpu_to_fec32(phys_addr);
+
+	return 0;
 }
 
 static u32
@@ -1698,6 +1701,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 	int cpu = smp_processor_id();
 	struct xdp_buff xdp;
 	struct page *page;
+	__fec32 cbd_bufaddr;
 	u32 sub_len = 4;
 
 #if !defined(CONFIG_M5272)
@@ -1766,12 +1770,17 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
 
 		index = fec_enet_get_bd_index(bdp, &rxq->bd);
 		page = rxq->rx_skb_info[index].page;
+		cbd_bufaddr = bdp->cbd_bufaddr;
+		if (fec_enet_update_cbd(rxq, bdp, index)) {
+			ndev->stats.rx_dropped++;
+			goto rx_processing_done;
+		}
+
 		dma_sync_single_for_cpu(&fep->pdev->dev,
-					fec32_to_cpu(bdp->cbd_bufaddr),
+					fec32_to_cpu(cbd_bufaddr),
 					pkt_len,
 					DMA_FROM_DEVICE);
 		prefetch(page_address(page));
-		fec_enet_update_cbd(rxq, bdp, index);
 
 		if (xdp_prog) {
 			xdp_buff_clear_frags_flag(&xdp);
