[PATCHv2] nvme-tcp: strict pdu pacing to avoid send stalls on TLS
Hannes Reinecke
hare at suse.de
Thu Apr 18 03:23:16 PDT 2024
On 4/18/24 12:18, Hannes Reinecke wrote:
> TLS requires strict PDU pacing via MSG_EOR to signal the end
> of a record and trigger encryption. If we do not set MSG_EOR
> at the end of a sequence, the record is not closed, encryption
> does not start, and we end up with a send stall as the message
> will never be passed on to the TCP layer.
> So do not check the queue status when TLS is enabled, but
> rather make the MSG_MORE setting depend on the current
> request only.
>
> Signed-off-by: Hannes Reinecke <hare at kernel.org>
> ---
> drivers/nvme/host/tcp.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 2b821cbbdf1f..aef1bb8d2f2b 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -369,12 +369,17 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
> } while (ret > 0);
> }
>
> -static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
> +static inline bool nvme_tcp_queue_has_pending(struct nvme_tcp_queue *queue)
> {
> return !list_empty(&queue->send_list) ||
> !llist_empty(&queue->req_list);
> }
>
> +static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
> +{
> + return !nvme_tcp_tls_enabled(queue) && nvme_tcp_queue_has_pending(queue);
> +}
> +
> static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
> bool sync, bool last)
> {
> @@ -395,7 +400,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
> mutex_unlock(&queue->send_mutex);
> }
>
> - if (last && nvme_tcp_queue_more(queue))
> + if (last && nvme_tcp_queue_has_pending(queue))
> queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
> }
>
Bah. Missed the preliminary patch. Forget this one, I'll resend.
Cheers,
Hannes