xsk: fix batch alloc API on non-coherent systems
In cases when synchronizing DMA operations is necessary,
xsk_buff_alloc_batch() returns a single buffer instead of the requested
count. This puts pressure on drivers that use the batch API, as they
have to check for this corner case on their side and take care of the
remaining allocations themselves, which feels counterproductive. Let us
improve the core by looping over xp_alloc() @max times when the slow
path needs to be taken.

Another issue with the current interface, as spotted and fixed by
Dries, was that when a driver called xsk_buff_alloc_batch() with
@max == 0, the slow path still allocated and returned a single buffer,
which should not happen. By introducing the logic from the first
paragraph we kill two birds with one stone and address this problem as
well.

Fixes: 47e4075 ("xsk: Batched buffer allocation for the pool")
Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
Co-developed-by: Dries De Winter <ddewinter@synamedia.com>
Signed-off-by: Dries De Winter <ddewinter@synamedia.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Link: https://patch.msgid.link/20240911191019.296480-1-maciej.fijalkowski@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Maciej Fijalkowski authored and Jakub Kicinski committed Sep 14, 2024
1 parent 1f2e900 commit 4144a10
Showing 1 changed file with 18 additions and 7 deletions.
25 changes: 18 additions & 7 deletions net/xdp/xsk_buff_pool.c
@@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
 	return nb_entries;
 }
 
-u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
+			 u32 max)
 {
-	u32 nb_entries1 = 0, nb_entries2;
+	int i;
 
-	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
+	for (i = 0; i < max; i++) {
 		struct xdp_buff *buff;
 
-		/* Slow path */
 		buff = xp_alloc(pool);
-		if (buff)
-			*xdp = buff;
-		return !!buff;
+		if (unlikely(!buff))
+			return i;
+		*xdp = buff;
+		xdp++;
 	}
 
+	return max;
+}
+
+u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
+{
+	u32 nb_entries1 = 0, nb_entries2;
+
+	if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
+		return xp_alloc_slow(pool, xdp, max);
+
 	if (unlikely(pool->free_list_cnt)) {
 		nb_entries1 = xp_alloc_reused(pool, xdp, max);
 		if (nb_entries1 == max)
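For illustration only, here is a hedged driver-side sketch of how the
batch API is typically consumed once this fix is in place. It is not
part of the commit; the function name example_zc_refill() and the
refill flow are hypothetical, and a real driver would pull in the usual
<net/xdp_sock_drv.h> header. The point it shows is that the caller no
longer needs to special-case a single-buffer return on systems that
require DMA syncing.

/* Hypothetical illustration, not from this commit: a zero-copy refill
 * routine built on xsk_buff_alloc_batch(). With this fix the call
 * fills up to @budget buffers even when DMA syncing is required, so
 * the driver only has to handle a short (possibly zero) return.
 */
static u32 example_zc_refill(struct xsk_buff_pool *pool,
			     struct xdp_buff **bufs, u32 budget)
{
	u32 nb_buffs;

	/* Returns the number of buffers actually allocated; may be
	 * fewer than @budget if the fill ring or free list runs dry.
	 */
	nb_buffs = xsk_buff_alloc_batch(pool, bufs, budget);

	/* ... post the nb_buffs allocated buffers to the HW ring ... */

	return nb_buffs;
}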
