[PATCH v12 08/26] nvme-tcp: Add DDP data-path

Aurelien Aptel aaptel at nvidia.com
Thu Aug 17 06:28:02 PDT 2023


Sagi Grimberg <sagi at grimberg.me> writes:
>>>> @@ -1308,6 +1407,15 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>>>>        else
>>>>                msg.msg_flags |= MSG_EOR;
>>>>
>>>> +     if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags)) {
>>>> +             ret = nvme_tcp_setup_ddp(queue, pdu->cmd.common.command_id,
>>>> +                                      blk_mq_rq_from_pdu(req));
>>>> +             WARN_ONCE(ret, "ddp setup failed (queue 0x%x, cid 0x%x, ret=%d)",
>>>> +                       nvme_tcp_queue_id(queue),
>>>> +                       pdu->cmd.common.command_id,
>>>> +                       ret);
>>>> +     }
>>>
>>> Any reason why this is done here when sending the command pdu and not
>>> in setup time?
>>
>> We wish to interact with the HW from the same CPU per queue, hence we
>> are calling setup_ddp() after queue->io_cpu == raw_smp_processor_id()
>> was checked in nvme_tcp_queue_request().
>
> That is very fragile. You cannot depend on this micro-optimization being
> in the code. Is this related to a hidden steering rule you are adding
> to the hw?

We use a steering rule to redirect the connection's packets into the
offload engine. The same rule also keeps the nvme-tcp connection
aligned with a specific core.
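
For reference, the affinity we rely on comes from the send path in
nvme_tcp_queue_request(); a simplified sketch, paraphrased from
mainline (exact conditions vary between kernel versions):

	if (queue->io_cpu == raw_smp_processor_id() &&
	    sync && empty && mutex_trylock(&queue->send_mutex)) {
		/* inline send: we are already running on queue->io_cpu */
		nvme_tcp_send_all(queue);
		mutex_unlock(&queue->send_mutex);
	}

	if (last && nvme_tcp_queue_more(queue))
		/* otherwise io_work runs, queued on queue->io_cpu */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);

Since nvme_tcp_try_send_cmd_pdu() is only reached from one of these two
paths, calling setup_ddp() there keeps the HW interaction on the
queue's CPU.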

> Which reminds me, in the control patch, you are passing io_cpu, this is
> also a dependency that should be avoided, you should use the same
> mechanism as arfs to learn where the socket is being reaped.

We can use queue->sock->sk->sk_incoming_cpu instead of queue->io_cpu,
as is already done in the nvme-tcp target.
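
Roughly what that looks like on the target side today (simplified from
drivers/nvme/target/tcp.c):

	/* nvmet-tcp takes the CPU from the socket itself */
	static inline int queue_cpu(struct nvmet_tcp_queue *queue)
	{
		return queue->sock->sk->sk_incoming_cpu;
	}

	/* io_work is then queued on that CPU */
	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);

On the host side the offload registration would similarly read
queue->sock->sk->sk_incoming_cpu at offload setup time rather than
taking io_cpu from the caller.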
