[PATCH] nvme-tcp: strict pdu pacing to avoid send stalls on TLS
Hannes Reinecke
hare at suse.de
Thu Apr 18 02:05:54 PDT 2024
On 4/18/24 10:01, Sagi Grimberg wrote:
>
>
> On 17/04/2024 18:39, Hannes Reinecke wrote:
>> TLS requires strict PDU pacing via MSG_EOR to signal the end of a
>> record and trigger the subsequent encryption. If we do not set
>> MSG_EOR at the end of a sequence, the record is never closed,
>> encryption never starts, and we end up with a send stall because the
>> message is never passed on to the TCP layer.
>> So do not check the queue status when deciding whether MSG_MORE
>> should be set; make it depend on the current command only.
>
> How about making nvme_tcp_queue_more() take nvme_tcp_tls() into
> account, so that we preserve the existing behavior without TLS?
>
> i.e. something like:
> --
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 0ba62fc647b3..bbffc67f8a1e 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -360,12 +360,18 @@ static inline void nvme_tcp_send_all(struct nvme_tcp_queue *queue)
>          } while (ret > 0);
>  }
>  
> -static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
> +static inline bool nvme_tcp_queue_has_pending(struct nvme_tcp_queue *queue)
>  {
>          return !list_empty(&queue->send_list) ||
>                  !llist_empty(&queue->req_list);
>  }
>  
> +static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
> +{
> +        return !nvme_tcp_tls(queue->ctrl) &&
> +                nvme_tcp_queue_has_pending(queue);
> +}
> +
>  static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
>                  bool sync, bool last)
>  {
> @@ -386,7 +392,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
>                  mutex_unlock(&queue->send_mutex);
>          }
>  
> -        if (last && nvme_tcp_queue_more(queue))
> +        if (last && nvme_tcp_queue_has_pending(queue))
>                  queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
>  }
> 
> --
>
Would work as well, I guess.
I'll give it a go.
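To make the pacing issue from the patch description above concrete, here
is a minimal userspace sketch. It is illustrative only: pdu_msg_flags()
and its parameters are made-up names, not driver code. It models the rule
both variants implement: on a TLS queue the MSG_MORE/MSG_EOR choice must
follow the current command alone, while a plain TCP queue may keep
MSG_MORE set as long as other requests are pending.
--
/*
 * Illustrative sketch only -- pdu_msg_flags() and its arguments are
 * made-up names, not the nvme-tcp driver code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>         /* MSG_MORE, MSG_EOR */

static int pdu_msg_flags(bool tls, bool cmd_has_more_data,
                         bool queue_has_pending)
{
        /* More fragments of the current command follow: keep batching. */
        if (cmd_has_more_data)
                return MSG_MORE;
        /*
         * Last fragment of the command.  Without TLS it is fine to keep
         * MSG_MORE set while other requests are pending; with TLS the
         * record must be closed with MSG_EOR, otherwise encryption never
         * starts and the send stalls.
         */
        if (!tls && queue_has_pending)
                return MSG_MORE;
        return MSG_EOR;
}

int main(void)
{
        /* TLS queue, last fragment, other requests pending -> MSG_EOR. */
        printf("tls: %s\n", pdu_msg_flags(true, false, true) == MSG_EOR ?
               "MSG_EOR" : "MSG_MORE");
        /* Plain TCP, same situation -> MSG_MORE (batching preserved). */
        printf("tcp: %s\n", pdu_msg_flags(false, false, true) == MSG_EOR ?
               "MSG_EOR" : "MSG_MORE");
        return 0;
}
--
With the TLS-aware nvme_tcp_queue_more() from the diff above, existing
call sites keep exactly this behavior without special-casing TLS at each
of them.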
Cheers,
Hannes