[PATCH v20 06/20] nvme-tcp: Add DDP data-path

Aurelien Aptel aaptel at nvidia.com
Wed Nov 29 05:55:29 PST 2023


Sagi Grimberg <sagi at grimberg.me> writes:
>> +static void nvme_tcp_complete_request(struct request *rq,
>> +                                   __le16 status,
>> +                                   union nvme_result result,
>> +                                   __u16 command_id)
>> +{
>> +#ifdef CONFIG_ULP_DDP
>> +     struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
>> +
>> +     if (req->offloaded) {
>> +             req->ddp_status = status;
>
> unless this is really a ddp_status, don't name it as such. afaict
> it is the nvme status, so let's stay consistent with the naming.
>
> btw, for making the code simpler we can promote the request
> status/result capture out of CONFIG_ULP_DDP to the general logic
> and then I think the code will look slightly simpler.
>
> This will be consistent with what we do in nvme-rdma and PI.

Ok, we will rename status to nvme_status and move it and the result
capture out of the ifdef, along the lines of the sketch below.
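
Roughly (a sketch against the v20 code, not the final patch; the
nvme_tcp_teardown_ddp() call and its signature are assumptions about
how this series wires up the deferred completion):

static void nvme_tcp_complete_request(struct request *rq,
				      __le16 status,
				      union nvme_result result,
				      __u16 command_id)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);

	/*
	 * Capture status/result unconditionally, as nvme-rdma does for
	 * PI, so the offload path only differs in deferring the actual
	 * completion.
	 */
	req->nvme_status = status;
	req->result = result;

#ifdef CONFIG_ULP_DDP
	if (req->offloaded) {
		/* Completion happens once DDP teardown finishes. */
		nvme_tcp_teardown_ddp(req->queue, rq);
		return;
	}
#endif

	if (!nvme_try_complete_req(rq, status, result))
		nvme_complete_rq(rq);
}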

>> @@ -1283,6 +1378,9 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>>       else
>>               msg.msg_flags |= MSG_EOR;
>>
>> +     if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
>> +             nvme_tcp_setup_ddp(queue, blk_mq_rq_from_pdu(req));
>> +
>
> We keep coming back to this. Why isn't setup done at setup time?

Sorry, this is a leftover from previous tests; we will move the
nvme_tcp_setup_ddp() call to request setup time, as we agreed last
time [1] (see the sketch after the link).

1: https://lore.kernel.org/all/ef66595c-95cd-94c4-7f51-d3d7683a188a@grimberg.me/
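
i.e. something like this in the request setup path (sketch only;
assuming nvme_tcp_setup_cmd_pdu() is the right spot and that
nvme_tcp_setup_ddp() is safe to call before the first send):

static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
					   struct request *rq)
{
	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
	struct nvme_tcp_queue *queue = req->queue;
	...
	/*
	 * Set up DDP once when the command is set up, rather than on
	 * every pass through the send path.
	 */
	if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags))
		nvme_tcp_setup_ddp(queue, rq);

	return 0;
}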


