nvme_tcp BUG: unable to handle kernel NULL pointer dereference at 0000000000000230
Engel, Amit
Amit.Engel at Dell.com
Wed Jun 9 04:14:14 PDT 2021
Correct, free_queue is being called (so sock->sk becomes NULL) before restore_sock_calls runs.
When restore_sock_calls is then called, we fault on 'write_lock_bh(&sock->sk->sk_callback_lock)'.
The NULL pointer dereference is at offset 0x230, i.e. 560 decimal, which matches the offset of sk_callback_lock in struct sock:
crash> struct sock -o
struct sock {
[0] struct sock_common __sk_common;
...
[560] rwlock_t sk_callback_lock;
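For reference, here is a minimal sketch of the restore path as we read it
(abbreviated from drivers/nvme/host/tcp.c; the exact body may differ by
kernel version, so treat it as a sketch rather than the literal source):

/* Abbreviated sketch of the sock-callback restore path. */
static void nvme_tcp_restore_sock_calls(struct nvme_tcp_queue *queue)
{
	struct socket *sock = queue->sock;

	/*
	 * If nvme_tcp_free_queue() has already run sock_release() on this
	 * queue, sock->sk is NULL here, so the write_lock_bh() below
	 * dereferences offset 0x230 (sk_callback_lock) of a NULL sock.
	 */
	write_lock_bh(&sock->sk->sk_callback_lock);
	sock->sk->sk_user_data = NULL;
	sock->sk->sk_data_ready = queue->data_ready;
	sock->sk->sk_state_change = queue->state_change;
	sock->sk->sk_write_space = queue->write_space;
	write_unlock_bh(&sock->sk->sk_callback_lock);
}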
stop_queue in ctx2 does not really do anything, since the 'NVME_TCP_Q_LIVE' bit has already been cleared (by ctx1); see the sketch below.
Can you please explain how stopping the queue before freeing it helps to serialize against ctx1?
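To be concrete, a sketch of the stop path (abbreviated; whether queue_lock
is taken around this check depends on the kernel version, the relevant part
is the early return):

static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
{
	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
	struct nvme_tcp_queue *queue = &ctrl->queues[qid];

	/*
	 * ctx1 already cleared NVME_TCP_Q_LIVE, so when ctx2 (err_work)
	 * stops the queue again it returns here immediately: it does not
	 * wait for ctx1 to finish restoring the sock callbacks.
	 */
	if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
		return;
	__nvme_tcp_stop_queue(queue);
}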
The race we are describing is based on the panic backtrace that I shared.
Maybe our analysis is not accurate?
Thanks,
Amit
-----Original Message-----
From: Sagi Grimberg <sagi at grimberg.me>
Sent: Wednesday, June 9, 2021 12:11 PM
To: Engel, Amit; linux-nvme at lists.infradead.org
Cc: Anner, Ran; Grupi, Elad
Subject: Re: nvme_tcp BUG: unable to handle kernel NULL pointer dereference at 0000000000000230
> I'm not sure that using the queue_lock mutex will help. The race in this
> case is between sock_release and nvme_tcp_restore_sock_calls:
> sock_release is being called as part of nvme_tcp_free_queue, which is
> destroying the mutex.
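For reference, a sketch of the free path mentioned above (abbreviated,
crypto teardown omitted; exact body depends on kernel version):

/* Abbreviated sketch of nvme_tcp_free_queue(). */
static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
{
	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
	struct nvme_tcp_queue *queue = &ctrl->queues[qid];

	if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
		return;

	sock_release(queue->sock);		/* after this, queue->sock->sk is gone */
	kfree(queue->pdu);
	mutex_destroy(&queue->queue_lock);	/* the queue_lock is torn down here */
}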
Maybe I'm not understanding the issue here. What is the scenario again?
stop_queue is called (ctx1), which triggers error_recovery (ctx2), which then calls free_queue before ctx1 gets to restore the sock callbacks?
err_work will first stop the queues before freeing them, so it will serialize behind ctx1. What am I missing?
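Rough shape of the teardown ordering being described (heavily abbreviated;
quiesce/cancel steps omitted, exact body depends on kernel version):

static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl, bool remove)
{
	if (ctrl->queue_count <= 1)
		return;

	/* Stop first: clears NVME_TCP_Q_LIVE and restores sock callbacks. */
	nvme_tcp_stop_io_queues(ctrl);

	/* Only then free: sock_release(), kfree(), mutex_destroy(). */
	nvme_tcp_destroy_io_queues(ctrl, remove);
}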