udp: ipv4: must add synchronization in udp_sk_rx_dst_set()
Unlike TCP, UDP input path does not hold the socket lock.

Before messing with sk->sk_rx_dst, we must use a spinlock, otherwise
multiple cpus could leak a refcount.

This patch also takes care of renewing a stale dst entry.
(When the sk->sk_rx_dst would not be used by IP early demux)

Fixes: 421b388 ("udp: ipv4: Add udp early demux")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Shawn Bohrer <sbohrer@rgmadvisors.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet authored and David S. Miller committed Dec 12, 2013
1 parent a1bf175 commit 9750223
Showing 1 changed file with 16 additions and 6 deletions.
22 changes: 16 additions & 6 deletions net/ipv4/udp.c
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1599,12 +1599,21 @@ static void flush_stack(struct sock **stack, unsigned int count,
 	kfree_skb(skb1);
 }
 
-static void udp_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
+/* For TCP sockets, sk_rx_dst is protected by socket lock
+ * For UDP, we use sk_dst_lock to guard against concurrent changes.
+ */
+static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
 {
-	struct dst_entry *dst = skb_dst(skb);
+	struct dst_entry *old;
 
-	dst_hold(dst);
-	sk->sk_rx_dst = dst;
+	spin_lock(&sk->sk_dst_lock);
+	old = sk->sk_rx_dst;
+	if (likely(old != dst)) {
+		dst_hold(dst);
+		sk->sk_rx_dst = dst;
+		dst_release(old);
+	}
+	spin_unlock(&sk->sk_dst_lock);
 }
 
 /*
@@ -1737,10 +1746,11 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 
 	sk = skb_steal_sock(skb);
 	if (sk) {
+		struct dst_entry *dst = skb_dst(skb);
 		int ret;
 
-		if (unlikely(sk->sk_rx_dst == NULL))
-			udp_sk_rx_dst_set(sk, skb);
+		if (unlikely(sk->sk_rx_dst != dst))
+			udp_sk_rx_dst_set(sk, dst);
 
 		ret = udp_queue_rcv_skb(sk, skb);
 		sock_put(sk);
