rxrpc kernel sockets hold additional reference to dst
Vadim Fedorenko
vfedorenko at novek.ru
Thu Jan 28 06:21:04 EST 2021
On 28.01.2021 10:21, David Howells wrote:
> Vadim Fedorenko <vfedorenko at novek.ru> wrote:
>
>> @@ -833,10 +842,16 @@ static void rxrpc_sock_destructor(struct sock *sk)
>> _enter("%p", sk);
>>
>> rxrpc_purge_queue(&sk->sk_receive_queue);
>> + dst_release(sk->sk_rx_dst);
>
> Um... sk_rx_dst isn't used by rxrpc. It's not a superclass of a UDP socket,
> but rather points to one. Putting a print statement on this shows that it's
> NULL at this point.
>
I think syzkaller exercises an uncommon path for packets to reach the rxrpc socket.
sk->sk_rx_dst is set only when skb_steal_sk() successfully steals the sock
associated with the skb. Right now I'm not sure how skb->sk can be set for the
first incoming packet. The syzkaller reproducer delivers broadcast packets via a
tun interface; maybe the problem is in that path? I have a stack dump of an
incoming packet from rxrpc_input_packet:
[ 246.595706] CPU: 2 PID: 13017 Comm: repro_unregiste Not tainted
5.10.9-2.el7.x86_64 #1
[ 246.595728] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[ 246.595759] Call Trace:
[ 246.595829] dump_stack+0x107/0x163
[ 246.595896] rxrpc_input_packet+0xfb0/0x43b0
[ 246.595943] ? find_held_lock+0x2d/0x110
[ 246.596005] ? rxrpc_extract_header+0x680/0x680
[ 246.596037] ? netlink_has_listeners+0x2ac/0x430
[ 246.596078] ? lock_downgrade+0x680/0x680
[ 246.596129] ? rcu_read_lock_held+0xaa/0xc0
[ 246.596215] ? rxrpc_extract_header+0x680/0x680
[ 246.596248] udp_queue_rcv_one_skb+0xc6f/0x1a30
[ 246.596315] udp_queue_rcv_skb+0x128/0x810
[ 246.596381] udp_unicast_rcv_skb+0xb9/0x360
[ 246.596443] __udp4_lib_rcv+0x78e/0x3290
[ 246.596563] ? udp_err+0x30/0x30
[ 246.596612] ? rcu_read_lock_held+0xaa/0xc0
[ 246.596657] ? rcu_read_lock_sched_held+0xe0/0xe0
[ 246.596732] ip_protocol_deliver_rcu+0x6c/0x910
[ 246.596805] ip_local_deliver_finish+0x240/0x3b0
[ 246.596862] ip_local_deliver+0x1cd/0x540
[ 246.596901] ? ip_local_deliver_finish+0x3b0/0x3b0
[ 246.596967] ? ip_protocol_deliver_rcu+0x910/0x910
[ 246.597002] ? ip_rcv_finish_core.isra.0+0x60a/0x1f70
[ 246.597080] ip_rcv_finish+0x1da/0x2f0
[ 246.597131] ip_rcv+0xcc/0x410
[ 246.597170] ? ip_local_deliver+0x540/0x540
[ 246.597235] ? ip_rcv_finish_core.isra.0+0x1f70/0x1f70
[ 246.597302] ? ip_local_deliver+0x540/0x540
[ 246.597353] __netif_receive_skb_one_core+0x114/0x180
[ 246.597394] ? __netif_receive_skb_core+0x3b60/0x3b60
[ 246.597513] __netif_receive_skb+0x27/0x1c0
[ 246.597563] netif_receive_skb+0x178/0x980
[ 246.597603] ? __netif_receive_skb+0x1c0/0x1c0
[ 246.597668] ? rcu_read_lock_sched_held+0xaa/0xe0
[ 246.597733] tun_rx_batched+0x5c4/0x7d0
[ 246.597790] ? tun_flow_cleanup+0x2a0/0x2a0
[ 246.597830] ? lock_release+0x660/0x660
[ 246.597866] ? tun_get_user+0x2c55/0x3fc0
[ 246.597906] ? lock_downgrade+0x680/0x680
[ 246.597975] ? __local_bh_enable_ip+0x9c/0x120
[ 246.598031] tun_get_user+0x148c/0x3fc0
[ 246.598090] ? rcu_read_lock_bh_held+0xc0/0xc0
[ 246.598115] ? tun_build_skb+0x1080/0x1080
[ 246.598150] ? tun_do_read+0x4e0/0x1cc0
[ 246.598184] ? rcu_read_lock_held+0xaa/0xc0
I will try to find out which code sets skb->sk in this flow, because it's
probably not the right flow for rxrpc sockets.
Vadim
More information about the linux-afs mailing list