[PATCH v12 08/26] nvme-tcp: Add DDP data-path

Sagi Grimberg sagi at grimberg.me
Mon Aug 14 12:01:14 PDT 2023


>>> @@ -1308,6 +1407,15 @@ static int nvme_tcp_try_send_cmd_pdu(struct nvme_tcp_request *req)
>>>        else
>>>                msg.msg_flags |= MSG_EOR;
>>>
>>> +     if (test_bit(NVME_TCP_Q_OFF_DDP, &queue->flags)) {
>>> +             ret = nvme_tcp_setup_ddp(queue, pdu->cmd.common.command_id,
>>> +                                      blk_mq_rq_from_pdu(req));
>>> +             WARN_ONCE(ret, "ddp setup failed (queue 0x%x, cid 0x%x, ret=%d)",
>>> +                       nvme_tcp_queue_id(queue),
>>> +                       pdu->cmd.common.command_id,
>>> +                       ret);
>>> +     }
>>
>> Any reason why this is done here when sending the command pdu and not
>> at setup time?
> 
> We wish to interact with the HW from the same CPU per queue, hence we
> call setup_ddp() only after queue->io_cpu == raw_smp_processor_id()
> has been checked in nvme_tcp_queue_request().
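
For context, the path being relied on looks roughly like this in the
current driver (paraphrased, comments mine), and it is what provides the
CPU affinity today:

static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
		bool sync, bool last)
{
	struct nvme_tcp_queue *queue = req->queue;
	bool empty;

	empty = llist_add(&req->lentry, &queue->req_list) &&
		list_empty(&queue->send_list) && !queue->request;

	/*
	 * Send inline (and hence reach nvme_tcp_try_send_cmd_pdu and the
	 * setup_ddp() call above) only when we are already running on
	 * queue->io_cpu; otherwise punt to io_work, which is queued on
	 * that same CPU.
	 */
	if (queue->io_cpu == raw_smp_processor_id() &&
	    sync && empty && mutex_trylock(&queue->send_mutex)) {
		nvme_tcp_send_all(queue);
		mutex_unlock(&queue->send_mutex);
	} else {
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
}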

That is very fragile. You cannot depend on this micro-optimization
staying in the code. Is this related to a hidden steering rule you are
adding to the hw?

Which reminds me: in the control patch you are passing io_cpu. That is
also a dependency that should be avoided; you should use the same
mechanism as aRFS to learn where the socket is being reaped.
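
To spell out what I mean: with aRFS the consumer of the socket records
the flow, and a device implementing ndo_rx_flow_steer learns the reaping
CPU from rps_sock_flow_table instead of being told explicitly. A minimal
sketch, where the helper name and call site are only an illustration:

#include <net/sock.h>	/* sock_rps_record_flow() */

/* Illustration only: helper name and where it gets called are made up. */
static inline void nvme_tcp_record_flow(struct nvme_tcp_queue *queue)
{
	/*
	 * Records this socket's flow hash against the CPU that is
	 * currently reaping it. An aRFS-capable NIC (ndo_rx_flow_steer)
	 * then steers the flow to that CPU on its own, so no io_cpu has
	 * to be passed down through the offload configuration.
	 */
	sock_rps_record_flow(queue->sock->sk);
}

Calling something like this from the receive side (e.g. once per io_work
iteration) would let the device follow wherever the socket is actually
being consumed.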


