net: Revert "net: avoid one atomic operation in skb_clone()"
Not sure what I was thinking, but doing anything after
releasing a refcount is suicidal and/or embarrassing.

By the time we set skb->fclone to SKB_FCLONE_FREE, another CPU
could have released the last reference and freed the whole skb.

We potentially corrupt memory or trap if CONFIG_DEBUG_PAGEALLOC is set.

Reported-by: Chris Mason <clm@fb.com>
Fixes: ce1a4ea ("net: avoid one atomic operation in skb_clone()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet authored and David S. Miller committed Nov 21, 2014
1 parent 892d6eb commit e7820e3
1 changed file: net/core/skbuff.c (6 additions, 17 deletions)
@@ -552,20 +552,13 @@ static void kfree_skbmem(struct sk_buff *skb)
 	case SKB_FCLONE_CLONE:
 		fclones = container_of(skb, struct sk_buff_fclones, skb2);
 
-		/* Warning : We must perform the atomic_dec_and_test() before
-		 * setting skb->fclone back to SKB_FCLONE_FREE, otherwise
-		 * skb_clone() could set clone_ref to 2 before our decrement.
-		 * Anyway, if we are going to free the structure, no need to
-		 * rewrite skb->fclone.
+		/* The clone portion is available for
+		 * fast-cloning again.
 		 */
-		if (atomic_dec_and_test(&fclones->fclone_ref)) {
+		skb->fclone = SKB_FCLONE_FREE;
+
+		if (atomic_dec_and_test(&fclones->fclone_ref))
 			kmem_cache_free(skbuff_fclone_cache, fclones);
-		} else {
-			/* The clone portion is available for
-			 * fast-cloning again.
-			 */
-			skb->fclone = SKB_FCLONE_FREE;
-		}
 		break;
 	}
 }
@@ -887,11 +880,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
 	if (skb->fclone == SKB_FCLONE_ORIG &&
 	    n->fclone == SKB_FCLONE_FREE) {
 		n->fclone = SKB_FCLONE_CLONE;
-		/* As our fastclone was free, clone_ref must be 1 at this point.
-		 * We could use atomic_inc() here, but it is faster
-		 * to set the final value.
-		 */
-		atomic_set(&fclones->fclone_ref, 2);
+		atomic_inc(&fclones->fclone_ref);
 	} else {
 		if (skb_pfmemalloc(skb))
 			gfp_mask |= __GFP_MEMALLOC;
