[PATCH] nvme-tcp: proper handling of tcp socket closing flows
Sagi Grimberg
sagi at grimberg.me
Thu Jan 28 18:33:46 EST 2021
> That will work if sk_state_change is called under sk_callback_lock.
It is.
> In addition, we need to flush_work(&queue->port->accept_work) in nvmet_tcp_release_queue_work, because accept_work may still be using the queue struct after sk_callback_lock is unlocked.
But sk->sk_user_data was not set, so I don't see how the release work
can be invoked.
> What about this instead?
> --
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index c41902f7ce39..6388d18ca7c2 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -1494,16 +1494,29 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
> ip_sock_set_tos(sock->sk, inet->rcv_tos);
>
> write_lock_bh(&sock->sk->sk_callback_lock);
> - sock->sk->sk_user_data = queue;
> - queue->data_ready = sock->sk->sk_data_ready;
> - sock->sk->sk_data_ready = nvmet_tcp_data_ready;
> - queue->state_change = sock->sk->sk_state_change;
> - sock->sk->sk_state_change = nvmet_tcp_state_change;
> - queue->write_space = sock->sk->sk_write_space;
> - sock->sk->sk_write_space = nvmet_tcp_write_space;
> + switch (sock->sk->sk_state) {
> + case TCP_FIN_WAIT1:
> + case TCP_CLOSE_WAIT:
> + case TCP_CLOSE:
> + /*
> + * If the socket is already closing, don't even start
> + * consuming it
> + */
> + ret = -ENOTCONN;
> + break;
> + default:
> + sock->sk->sk_user_data = queue;
> + queue->data_ready = sock->sk->sk_data_ready;
> + sock->sk->sk_data_ready = nvmet_tcp_data_ready;
> + queue->state_change = sock->sk->sk_state_change;
> + sock->sk->sk_state_change = nvmet_tcp_state_change;
> + queue->write_space = sock->sk->sk_write_space;
> + sock->sk->sk_write_space = nvmet_tcp_write_space;
> + ret = 0;
> + }
> write_unlock_bh(&sock->sk->sk_callback_lock);
>
> - return 0;
> + return ret;
> }
> --
>