[PATCH] nvmet-tcp: add bounds checks in nvmet_tcp_build_pdu_iovec

Guenter Roeck linux at roeck-us.net
Wed Feb 25 12:06:56 PST 2026


Hi,

On Wed, Jan 28, 2026 at 09:41:07AM +0900, YunJe Shin wrote:
> nvmet_tcp_build_pdu_iovec() could walk past cmd->req.sg when a PDU
> length or offset exceeds sg_cnt and then use bogus sg->length/offset
> values, leading to _copy_to_iter() GPF/KASAN. Guard sg_idx, remaining
> entries, and sg->length/offset before building the bvec.
> 
> Fixes: 872d26a391da ("nvmet-tcp: add NVMe over TCP target driver")
> Signed-off-by: YunJe Shin <ioerts at kookmin.ac.kr>
> Reviewed-by: Sagi Grimberg <sagi at grimberg.me>
> Reviewed-by: Joonkyo Jung <joonkyoj at yonsei.ac.kr>
> ---
>  drivers/nvme/target/tcp.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 15416ff0eac4..1a62b405d8e6 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -349,11 +349,14 @@ static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
>  	cmd->req.sg = NULL;
>  }
>  
> +static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue);
> +
>  static void nvmet_tcp_build_pdu_iovec(struct nvmet_tcp_cmd *cmd)
>  {
>  	struct bio_vec *iov = cmd->iov;
>  	struct scatterlist *sg;
>  	u32 length, offset, sg_offset;
> +	unsigned int sg_remaining;
>  	int nr_pages;
>  
>  	length = cmd->pdu_len;
> @@ -361,9 +364,22 @@ static void nvmet_tcp_build_pdu_iovec(struct nvmet_tcp_cmd *cmd)
>  	offset = cmd->rbytes_done;
>  	cmd->sg_idx = offset / PAGE_SIZE;
>  	sg_offset = offset % PAGE_SIZE;
> +	if (!cmd->req.sg_cnt || cmd->sg_idx >= cmd->req.sg_cnt) {
> +		nvmet_tcp_fatal_error(cmd->queue);
> +		return;
> +	}
>  	sg = &cmd->req.sg[cmd->sg_idx];
> +	sg_remaining = cmd->req.sg_cnt - cmd->sg_idx;
>  
>  	while (length) {
> +		if (!sg_remaining) {
> +			nvmet_tcp_fatal_error(cmd->queue);
> +			return;
> +		}
> +		if (!sg->length || sg->length <= sg_offset) {
> +			nvmet_tcp_fatal_error(cmd->queue);
> +			return;
> +		}

An experimental AI agent provided the following review feedback.

 If we return early here (or in the other bounds checks added below),
 cmd->recv_msg.msg_iter is left uninitialized because we skip the
 iov_iter_bvec() call at the end of the function.

 Since nvmet_tcp_build_pdu_iovec() returns void, callers are unaware
 of the failure. For example, in nvmet_tcp_handle_h2c_data_pdu():

	nvmet_tcp_build_pdu_iovec(cmd);
	queue->cmd = cmd;
	queue->rcv_state = NVMET_TCP_RECV_DATA;

 Even though nvmet_tcp_fatal_error() correctly sets rcv_state to
 NVMET_TCP_RECV_ERR, the caller immediately overwrites it with
 NVMET_TCP_RECV_DATA.

 Does this cause the state machine to proceed and attempt to receive
 data using the uninitialized cmd->recv_msg.msg_iter? Should this
 function return an error code so callers can handle the failure?

I do not claim to understand the code, and I did not try to follow
the execution sequence. However, I do see that at least one of the
callers of nvmet_tcp_build_pdu_iovec() does unconditionally overwrite
queue->rcv_state, as indicated above. Please let me know if the AI
feedback is reasonable or if it is missing some context.

Thanks,
Guenter
