Merge branch 'bpf_sk_assign'
Joe Stringer says:

====================
Introduce a new helper that allows assigning a previously-found socket
to the skb as the packet is received towards the stack, to cause the
stack to guide the packet towards that socket subject to local routing
configuration. The intention is to support TProxy use cases more
directly from eBPF programs attached at TC ingress, to simplify and
streamline Linux stack configuration in scale environments with Cilium.

Normally in ip{,6}_rcv_core(), the skb will be orphaned, dropping any
existing socket reference associated with the skb. Existing tproxy
implementations in netfilter get around this restriction by running the
tproxy logic after ip_rcv_core() in the PREROUTING table. However, this
is not an option for TC-based logic (including eBPF programs attached at
TC ingress).
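
For comparison, a classic netfilter TProxy setup hooks the mangle
table's PREROUTING chain, which runs after the orphaning in
ip_rcv_core(); a rule along these lines, with the port numbers being
illustrative only:

  $ iptables -t mangle -A PREROUTING -p tcp --dport 80 \
      -j TPROXY --on-port 4040 --tproxy-mark 0x1/0x1

A TC-attached program runs before ip_rcv_core(), so without further
support any socket it associates with the skb would be dropped there.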

This series introduces the BPF helper bpf_sk_assign() to associate the
socket with the skb on the ingress path as the packet is passed up the
stack. The initial patch in the series simply takes a reference on the
socket to ensure safety, but later patches relax this for listen
sockets.

To ensure delivery to the relevant socket, we still consult the routing
table; for full examples of how to configure this, see the tests in
patch #5. The simplest form of the route would look like this:

  $ ip route add local default dev lo
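
As an illustrative sketch (not part of this series), a TC ingress
program might pair the new helper with bpf_sk_lookup_tcp() to steer
connections to a local proxy socket. The program name, section name,
proxy address/port, and the omitted packet parsing are assumptions:

  #include <linux/bpf.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_endian.h>
  #include <bpf/bpf_helpers.h>

  SEC("classifier")
  int steer_to_proxy(struct __sk_buff *skb)
  {
  	struct bpf_sock_tuple tuple = {};
  	struct bpf_sock *sk;
  	int ret;

  	/* Assumed proxy socket 127.0.0.1:4040; a real program would
  	 * first parse the headers and match the flows to steer.
  	 */
  	tuple.ipv4.daddr = bpf_htonl(0x7f000001);
  	tuple.ipv4.dport = bpf_htons(4040);

  	sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
  			       BPF_F_CURRENT_NETNS, 0);
  	if (!sk)
  		return TC_ACT_OK;

  	/* Prefetch the socket onto the skb; the stack will deliver to
  	 * it given the local route above. Returning TC_ACT_OK matters:
  	 * act_bpf orphans the skb again for any other verdict.
  	 */
  	ret = bpf_sk_assign(skb, sk, 0);
  	bpf_sk_release(sk);

  	return ret == 0 ? TC_ACT_OK : TC_ACT_SHOT;
  }

  char _license[] SEC("license") = "GPL";

Such an object could then be attached via clsact, for example:

  $ tc qdisc add dev eth0 clsact
  $ tc filter add dev eth0 ingress bpf da obj steer.o sec classifier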

This series is laid out as follows:
* Patch 1 extends the eBPF API to add sk_assign() and defines a new
  socket free function to allow the later paths to understand when the
  socket associated with the skb should be kept through receive.
* Patches 2-3 optimize the receive path to avoid taking a reference on
  listener sockets during receive.
* Patches 4-5 extend the selftests with examples of the new
  functionality and validation of correct behaviour.

Changes since v4:
* Fix build with CONFIG_INET disabled
* Rebase

Changes since v3:
* Use sock_gen_put() directly instead of sock_edemux() from sock_pfree()
* Commit message wording fixups
* Add acks from Martin, Lorenz
* Rebase

Changes since v2:
* Add selftests for UDP socket redirection
* Drop the early demux optimization patch (defer for more testing)
* Fix check for orphaning after TC act return
* Tidy up the tests to clean up properly and be less noisy.

Changes since v1:
* Replace the metadata_dst approach with using the skb->destructor to
  determine whether the socket has been prefetched. This is much
  simpler.
* Avoid taking a reference on listener sockets during receive
* Restrict assigning sockets across namespaces
* Restrict assigning SO_REUSEPORT sockets
* Fix cookie usage for socket dst check
* Rebase the tests against test_progs infrastructure
* Tidy up commit messages
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Alexei Starovoitov committed Mar 30, 2020
2 parents b49e42a + 8a02a17 commit c58b155
Showing 14 changed files with 662 additions and 24 deletions.
3 changes: 1 addition & 2 deletions include/net/inet6_hashtables.h
@@ -85,9 +85,8 @@ static inline struct sock *__inet6_lookup_skb(struct inet_hashinfo *hashinfo,
 					      int iif, int sdif,
 					      bool *refcounted)
 {
-	struct sock *sk = skb_steal_sock(skb);
+	struct sock *sk = skb_steal_sock(skb, refcounted);
 
-	*refcounted = true;
 	if (sk)
 		return sk;
 
3 changes: 1 addition & 2 deletions include/net/inet_hashtables.h
@@ -379,10 +379,9 @@ static inline struct sock *__inet_lookup_skb(struct inet_hashinfo *hashinfo,
 					     const int sdif,
 					     bool *refcounted)
 {
-	struct sock *sk = skb_steal_sock(skb);
+	struct sock *sk = skb_steal_sock(skb, refcounted);
 	const struct iphdr *iph = ip_hdr(skb);
 
-	*refcounted = true;
 	if (sk)
 		return sk;
 
46 changes: 37 additions & 9 deletions include/net/sock.h
@@ -1659,6 +1659,7 @@ void sock_rfree(struct sk_buff *skb);
 void sock_efree(struct sk_buff *skb);
 #ifdef CONFIG_INET
 void sock_edemux(struct sk_buff *skb);
+void sock_pfree(struct sk_buff *skb);
 #else
 #define sock_edemux sock_efree
 #endif
@@ -2526,16 +2527,14 @@ void sock_net_set(struct sock *sk, struct net *net)
 	write_pnet(&sk->sk_net, net);
 }
 
-static inline struct sock *skb_steal_sock(struct sk_buff *skb)
+static inline bool
+skb_sk_is_prefetched(struct sk_buff *skb)
 {
-	if (skb->sk) {
-		struct sock *sk = skb->sk;
-
-		skb->destructor = NULL;
-		skb->sk = NULL;
-		return sk;
-	}
-	return NULL;
+#ifdef CONFIG_INET
+	return skb->destructor == sock_pfree;
+#else
+	return false;
+#endif /* CONFIG_INET */
 }
 
 /* This helper checks if a socket is a full socket,
@@ -2546,6 +2545,35 @@ static inline bool sk_fullsock(const struct sock *sk)
 	return (1 << sk->sk_state) & ~(TCPF_TIME_WAIT | TCPF_NEW_SYN_RECV);
 }
 
+static inline bool
+sk_is_refcounted(struct sock *sk)
+{
+	/* Only full sockets have sk->sk_flags. */
+	return !sk_fullsock(sk) || !sock_flag(sk, SOCK_RCU_FREE);
+}
+
+/**
+ * skb_steal_sock
+ * @skb to steal the socket from
+ * @refcounted is set to true if the socket is reference-counted
+ */
+static inline struct sock *
+skb_steal_sock(struct sk_buff *skb, bool *refcounted)
+{
+	if (skb->sk) {
+		struct sock *sk = skb->sk;
+
+		*refcounted = true;
+		if (skb_sk_is_prefetched(skb))
+			*refcounted = sk_is_refcounted(sk);
+		skb->destructor = NULL;
+		skb->sk = NULL;
+		return sk;
+	}
+	*refcounted = false;
+	return NULL;
+}
+
 /* Checks if this SKB belongs to an HW offloaded socket
  * and whether any SW fallbacks are required based on dev.
  * Check decrypted mark in case skb_orphan() cleared socket.
25 changes: 24 additions & 1 deletion include/uapi/linux/bpf.h
@@ -2983,6 +2983,28 @@ union bpf_attr {
  * 		**bpf_get_current_cgroup_id**\ ().
  * 	Return
  * 		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * int bpf_sk_assign(struct sk_buff *skb, struct bpf_sock *sk, u64 flags)
+ * 	Description
+ * 		Assign the *sk* to the *skb*. When combined with appropriate
+ * 		routing configuration to receive the packet towards the socket,
+ * 		will cause *skb* to be delivered to the specified socket.
+ * 		Subsequent redirection of *skb* via **bpf_redirect**\ (),
+ * 		**bpf_clone_redirect**\ () or other methods outside of BPF may
+ * 		interfere with successful delivery to the socket.
+ *
+ * 		This operation is only valid from TC ingress path.
+ *
+ * 		The *flags* argument must be zero.
+ * 	Return
+ * 		0 on success, or a negative errno in case of failure.
+ *
+ * 		* **-EINVAL**		Unsupported flags specified.
+ * 		* **-ENOENT**		Socket is unavailable for assignment.
+ * 		* **-ENETUNREACH**	Socket is unreachable (wrong netns).
+ * 		* **-EOPNOTSUPP**	Unsupported operation, for example a
+ * 					call from outside of TC ingress.
+ * 		* **-ESOCKTNOSUPPORT**	Socket type not supported (reuseport).
  */
 #define __BPF_FUNC_MAPPER(FN) \
 	FN(unspec), \
@@ -3108,7 +3130,8 @@ union bpf_attr {
 	FN(get_ns_current_pid_tgid),	\
 	FN(xdp_output),			\
 	FN(get_netns_cookie),		\
-	FN(get_current_ancestor_cgroup_id),
+	FN(get_current_ancestor_cgroup_id),	\
+	FN(sk_assign),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
Expand Down
35 changes: 33 additions & 2 deletions net/core/filter.c
@@ -5401,8 +5401,7 @@ static const struct bpf_func_proto bpf_sk_lookup_udp_proto = {
 
 BPF_CALL_1(bpf_sk_release, struct sock *, sk)
 {
-	/* Only full sockets have sk->sk_flags. */
-	if (!sk_fullsock(sk) || !sock_flag(sk, SOCK_RCU_FREE))
+	if (sk_is_refcounted(sk))
 		sock_gen_put(sk);
 	return 0;
 }
@@ -5918,6 +5917,36 @@ static const struct bpf_func_proto bpf_tcp_gen_syncookie_proto = {
 	.arg5_type	= ARG_CONST_SIZE,
 };
 
+BPF_CALL_3(bpf_sk_assign, struct sk_buff *, skb, struct sock *, sk, u64, flags)
+{
+	if (flags != 0)
+		return -EINVAL;
+	if (!skb_at_tc_ingress(skb))
+		return -EOPNOTSUPP;
+	if (unlikely(dev_net(skb->dev) != sock_net(sk)))
+		return -ENETUNREACH;
+	if (unlikely(sk->sk_reuseport))
+		return -ESOCKTNOSUPPORT;
+	if (sk_is_refcounted(sk) &&
+	    unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+		return -ENOENT;
+
+	skb_orphan(skb);
+	skb->sk = sk;
+	skb->destructor = sock_pfree;
+
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_sk_assign_proto = {
+	.func		= bpf_sk_assign,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_SOCK_COMMON,
+	.arg3_type	= ARG_ANYTHING,
+};
+
 #endif /* CONFIG_INET */
 
 bool bpf_helper_changes_pkt_data(void *func)
@@ -6249,6 +6278,8 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skb_ecn_set_ce_proto;
 	case BPF_FUNC_tcp_gen_syncookie:
 		return &bpf_tcp_gen_syncookie_proto;
+	case BPF_FUNC_sk_assign:
+		return &bpf_sk_assign_proto;
 #endif
 	default:
 		return bpf_base_func_proto(func_id);
12 changes: 12 additions & 0 deletions net/core/sock.c
@@ -2071,6 +2071,18 @@ void sock_efree(struct sk_buff *skb)
 }
 EXPORT_SYMBOL(sock_efree);
 
+/* Buffer destructor for prefetch/receive path where reference count may
+ * not be held, e.g. for listen sockets.
+ */
+#ifdef CONFIG_INET
+void sock_pfree(struct sk_buff *skb)
+{
+	if (sk_is_refcounted(skb->sk))
+		sock_gen_put(skb->sk);
+}
+EXPORT_SYMBOL(sock_pfree);
+#endif /* CONFIG_INET */
+
 kuid_t sock_i_uid(struct sock *sk)
 {
 	kuid_t uid;
3 changes: 2 additions & 1 deletion net/ipv4/ip_input.c
@@ -509,7 +509,8 @@ static struct sk_buff *ip_rcv_core(struct sk_buff *skb, struct net *net)
 	IPCB(skb)->iif = skb->skb_iif;
 
 	/* Must drop socket now because of tproxy. */
-	skb_orphan(skb);
+	if (!skb_sk_is_prefetched(skb))
+		skb_orphan(skb);
 
 	return skb;
 
6 changes: 4 additions & 2 deletions net/ipv4/udp.c
@@ -2288,6 +2288,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	struct rtable *rt = skb_rtable(skb);
 	__be32 saddr, daddr;
 	struct net *net = dev_net(skb->dev);
+	bool refcounted;
 
 	/*
 	 * Validate the packet.
@@ -2313,7 +2314,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	if (udp4_csum_init(skb, uh, proto))
 		goto csum_error;
 
-	sk = skb_steal_sock(skb);
+	sk = skb_steal_sock(skb, &refcounted);
 	if (sk) {
 		struct dst_entry *dst = skb_dst(skb);
 		int ret;
@@ -2322,7 +2323,8 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 			udp_sk_rx_dst_set(sk, dst);
 
 		ret = udp_unicast_rcv_skb(sk, skb, uh);
-		sock_put(sk);
+		if (refcounted)
+			sock_put(sk);
 		return ret;
 	}
 
3 changes: 2 additions & 1 deletion net/ipv6/ip6_input.c
@@ -285,7 +285,8 @@ static struct sk_buff *ip6_rcv_core(struct sk_buff *skb, struct net_device *dev,
 	rcu_read_unlock();
 
 	/* Must drop socket now because of tproxy. */
-	skb_orphan(skb);
+	if (!skb_sk_is_prefetched(skb))
+		skb_orphan(skb);
 
 	return skb;
 err:
9 changes: 6 additions & 3 deletions net/ipv6/udp.c
@@ -843,6 +843,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	struct net *net = dev_net(skb->dev);
 	struct udphdr *uh;
 	struct sock *sk;
+	bool refcounted;
 	u32 ulen = 0;
 
 	if (!pskb_may_pull(skb, sizeof(struct udphdr)))
@@ -879,7 +880,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 		goto csum_error;
 
 	/* Check if the socket is already available, e.g. due to early demux */
-	sk = skb_steal_sock(skb);
+	sk = skb_steal_sock(skb, &refcounted);
 	if (sk) {
 		struct dst_entry *dst = skb_dst(skb);
 		int ret;
@@ -888,12 +889,14 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 			udp6_sk_rx_dst_set(sk, dst);
 
 		if (!uh->check && !udp_sk(sk)->no_check6_rx) {
-			sock_put(sk);
+			if (refcounted)
+				sock_put(sk);
 			goto report_csum_error;
 		}
 
 		ret = udp6_unicast_rcv_skb(sk, skb, uh);
-		sock_put(sk);
+		if (refcounted)
+			sock_put(sk);
 		return ret;
 	}
 
3 changes: 3 additions & 0 deletions net/sched/act_bpf.c
@@ -12,6 +12,7 @@
 #include <linux/bpf.h>
 
 #include <net/netlink.h>
+#include <net/sock.h>
 #include <net/pkt_sched.h>
 #include <net/pkt_cls.h>
 
@@ -53,6 +54,8 @@ static int tcf_bpf_act(struct sk_buff *skb, const struct tc_action *act,
 		bpf_compute_data_pointers(skb);
 		filter_res = BPF_PROG_RUN(filter, skb);
 	}
+	if (skb_sk_is_prefetched(skb) && filter_res != TC_ACT_OK)
+		skb_orphan(skb);
 	rcu_read_unlock();
 
 	/* A BPF program may overwrite the default action opcode.
25 changes: 24 additions & 1 deletion tools/include/uapi/linux/bpf.h
@@ -2983,6 +2983,28 @@ union bpf_attr {
  * 		**bpf_get_current_cgroup_id**\ ().
  * 	Return
  * 		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * int bpf_sk_assign(struct sk_buff *skb, struct bpf_sock *sk, u64 flags)
+ * 	Description
+ * 		Assign the *sk* to the *skb*. When combined with appropriate
+ * 		routing configuration to receive the packet towards the socket,
+ * 		will cause *skb* to be delivered to the specified socket.
+ * 		Subsequent redirection of *skb* via **bpf_redirect**\ (),
+ * 		**bpf_clone_redirect**\ () or other methods outside of BPF may
+ * 		interfere with successful delivery to the socket.
+ *
+ * 		This operation is only valid from TC ingress path.
+ *
+ * 		The *flags* argument must be zero.
+ * 	Return
+ * 		0 on success, or a negative errno in case of failure.
+ *
+ * 		* **-EINVAL**		Unsupported flags specified.
+ * 		* **-ENOENT**		Socket is unavailable for assignment.
+ * 		* **-ENETUNREACH**	Socket is unreachable (wrong netns).
+ * 		* **-EOPNOTSUPP**	Unsupported operation, for example a
+ * 					call from outside of TC ingress.
+ * 		* **-ESOCKTNOSUPPORT**	Socket type not supported (reuseport).
  */
 #define __BPF_FUNC_MAPPER(FN) \
 	FN(unspec), \
@@ -3108,7 +3130,8 @@ union bpf_attr {
 	FN(get_ns_current_pid_tgid),	\
 	FN(xdp_output),			\
 	FN(get_netns_cookie),		\
-	FN(get_current_ancestor_cgroup_id),
+	FN(get_current_ancestor_cgroup_id),	\
+	FN(sk_assign),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call