[PATCH] nvme-tcp: fix a segmentation fault during io parsing error
Grupi, Elad
Elad.Grupi at dell.com
Thu Mar 18 08:31:51 GMT 2021
The patch is ready in a new thread:
http://lists.infradead.org/pipermail/linux-nvme/2021-March/023824.html
Elad
-----Original Message-----
From: Grupi, Elad
Sent: Tuesday, 16 March 2021 17:46
To: Sagi Grimberg; linux-nvme at lists.infradead.org
Subject: RE: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
Right. I will address the comment below and send a new patch.
-----Original Message-----
From: Sagi Grimberg <sagi at grimberg.me>
Sent: Tuesday, 16 March 2021 8:21
To: Grupi, Elad; linux-nvme at lists.infradead.org
Subject: Re: [PATCH] nvme-tcp: fix a segmentation fault during io parsing error
> From: Elad Grupi <elad.grupi at dell.com>
>
> In case there is an I/O that contains inline data and it hits the
> parsing error flow, the command response will free the command and
> iov before the remaining data has been cleared from the socket
> buffer.
> Fix this by delaying the command response until the receive flow
> has completed.
>
> Signed-off-by: Elad Grupi <elad.grupi at dell.com>
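To make the failure mode concrete, here is a minimal userspace C model of the race the commit message describes. Every name in it (fake_cmd, complete_response, recv_inline_data) is made up for illustration; the real code paths are nvmet_tcp_queue_response() and nvmet_tcp_try_recv_data() in drivers/nvme/target/tcp.c.
--
/*
 * Userspace model of the use-after-free: the response path frees the
 * iovec that maps the inline-data buffer while the receive path still
 * needs it to drain the socket.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

struct fake_cmd {
	struct iovec *iov;	/* maps the inline-data buffer */
};

/* response path: completes the command early on a parsing error */
static void complete_response(struct fake_cmd *cmd)
{
	free(cmd->iov->iov_base);
	free(cmd->iov);
	cmd->iov = NULL;	/* the model nulls it; the buggy kernel kept a stale pointer */
}

/* receive path: inline data is still queued on the socket */
static void recv_inline_data(struct fake_cmd *cmd, const char *data)
{
	if (!cmd->iov) {
		fprintf(stderr, "iov already freed: the kernel faulted here\n");
		return;
	}
	memcpy(cmd->iov->iov_base, data,
	       strnlen(data, cmd->iov->iov_len));
}

int main(void)
{
	struct fake_cmd cmd;

	cmd.iov = calloc(1, sizeof(*cmd.iov));
	cmd.iov->iov_base = malloc(16);
	cmd.iov->iov_len = 16;

	complete_response(&cmd);		/* error path completes first... */
	recv_inline_data(&cmd, "inline data");	/* ...then recv runs anyway */
	return 0;
}
--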
Hey Elad,
I just realized that this patch was left unaddressed.
> ---
>  drivers/nvme/target/tcp.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index d535080b781f..dea94da4c9ba 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -146,6 +146,7 @@ static struct workqueue_struct *nvmet_tcp_wq;
>  static struct nvmet_fabrics_ops nvmet_tcp_ops;
>  static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c);
>  static void nvmet_tcp_finish_cmd(struct nvmet_tcp_cmd *cmd);
> +static void nvmet_tcp_queue_response(struct nvmet_req *req);
>
>  static inline u16 nvmet_tcp_cmd_tag(struct nvmet_tcp_queue *queue,
>  		struct nvmet_tcp_cmd *cmd)
> @@ -476,7 +477,11 @@ static struct nvmet_tcp_cmd *nvmet_tcp_fetch_cmd(struct nvmet_tcp_queue *queue)
>  		nvmet_setup_c2h_data_pdu(queue->snd_cmd);
>  	else if (nvmet_tcp_need_data_in(queue->snd_cmd))
>  		nvmet_setup_r2t_pdu(queue->snd_cmd);
> -	else
> +	else if (nvmet_tcp_has_data_in(queue->snd_cmd) &&
> +		 nvmet_tcp_has_inline_data(queue->snd_cmd)) {
> +		nvmet_tcp_queue_response(&queue->snd_cmd->req);
> +		queue->snd_cmd = NULL;
Perhaps, instead of rotating the command on the list, don't queue it in queue_response at all, but only once you have finished reading the garbage?
Something like the following:
--
@@ -537,6 +537,12 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 		container_of(req, struct nvmet_tcp_cmd, req);
 	struct nvmet_tcp_queue *queue = cmd->queue;

+	if (unlikely((cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
+		     nvmet_tcp_has_inline_data(cmd))) {
+		/* fail the cmd when we finish processing the inline data */
+		return;
+	}
+
 	llist_add(&cmd->lentry, &queue->resp_list);
 	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
 }
@@ -1115,9 +1121,11 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
 	}
 	nvmet_tcp_unmap_pdu_iovec(cmd);

-	if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) &&
-	    cmd->rbytes_done == cmd->req.transfer_len) {
-		cmd->req.execute(&cmd->req);
+	if (cmd->rbytes_done == cmd->req.transfer_len) {
+		if (cmd->flags & NVMET_TCP_F_INIT_FAILED)
+			nvmet_tcp_queue_response(&cmd->req);
+		else
+			cmd->req.execute(&cmd->req);
 	}

 	nvmet_prepare_receive_pdu(queue);
--
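That ordering is the crux of the suggestion: with NVMET_TCP_F_INIT_FAILED set and inline data pending, nvmet_tcp_queue_response() becomes a no-op, and the failed command is only queued for a response from nvmet_tcp_try_recv_data() once rbytes_done == transfer_len, i.e. after the socket has been fully drained into the still-valid iov. Condensed into a timeline (a reading of the two hunks above, not code from the thread):
--
/*
 * 1. nvmet_req_init() fails; NVMET_TCP_F_INIT_FAILED is set
 * 2. nvmet_tcp_queue_response() sees INIT_FAILED + inline data: returns
 * 3. nvmet_tcp_try_recv_data() drains the inline data into cmd->iov
 * 4. rbytes_done == transfer_len: nvmet_tcp_queue_response() is called
 *    again and now queues the error response
 * 5. the command and its iov are freed only after the socket drain
 */
--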