[PATCH 06/18] nvme-tcp: call 'queue->data_ready()' in nvme_tcp_data_ready()

Sagi Grimberg sagi at grimberg.me
Tue Mar 21 06:44:10 PDT 2023


> Call the original data_ready() callback in nvme_tcp_data_ready()
> to avoid a receive stall.

Can you please improve the description to explain what the stall is?
For example, does the stall exist today? If it does, I would like to
separate such patches from this set and include them asap.

> 
> Signed-off-by: Hannes Reinecke <hare at suse.de>
> ---
>   drivers/nvme/host/tcp.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 0e14b1b90855..0512eb289dcf 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -936,12 +936,14 @@ static void nvme_tcp_data_ready(struct sock *sk)
>   
>   	trace_sk_data_ready(sk);
>   
> -	read_lock_bh(&sk->sk_callback_lock);
> -	queue = sk->sk_user_data;
> +	rcu_read_lock_bh();

Now I understand your comment from a previous patch.
Can you explain why this convention is needed?

I would prefer to have it as a separate patch with an
explanation of why it is needed.
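For what it's worth, my mental model of the rcu_assign_pointer() /
rcu_dereference_sk_user_data() pairing is a release-publish plus an
ordered consume of sk_user_data, so a reader racing with queue setup
sees either NULL or a fully initialized queue, never a half-built one.
A hedged userspace sketch of that pattern using C11 atomics (names
here are illustrative stand-ins, not the driver's code; the kernel
uses the cheaper dependency ordering where this sketch uses acquire):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative stand-in for struct nvme_tcp_queue. */
struct queue {
	void (*data_ready)(void);
	int rd_enabled;
};

/* Stand-in for sk->sk_user_data. */
static _Atomic(struct queue *) sk_user_data;
static int calls;

/* Stand-in for the saved tls data_ready callback. */
static void orig_data_ready(void)
{
	calls++;
}

/* Writer side: fully initialize the queue, then publish it with
 * release semantics -- what rcu_assign_pointer() provides. */
static void publish(struct queue *q)
{
	q->data_ready = orig_data_ready;
	q->rd_enabled = 1;
	atomic_store_explicit(&sk_user_data, q, memory_order_release);
}

/* Reader side: ordered load, as rcu_dereference_sk_user_data()
 * provides.  A racing reader sees either NULL or a queue whose
 * fields are all initialized; a plain unordered load would not
 * guarantee that. */
static void data_ready(void)
{
	struct queue *q = atomic_load_explicit(&sk_user_data,
					       memory_order_acquire);
	if (q && q->data_ready)
		q->data_ready();
}
```

Calling data_ready() before publish() is a harmless no-op (NULL
pointer observed); after publish() it invokes the saved callback.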

> +	queue = rcu_dereference_sk_user_data(sk);
> +	if (queue->data_ready)
> +		queue->data_ready(sk);

Is the tls data_ready call sync or async? Just asking for general knowledge.


>   	if (likely(queue && queue->rd_enabled) &&
>   	    !test_bit(NVME_TCP_Q_POLLING, &queue->flags))
>   		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
> -	read_unlock_bh(&sk->sk_callback_lock);
> +	rcu_read_unlock_bh();
>   }
>   
>   static void nvme_tcp_write_space(struct sock *sk)
