[PATCH v15 05/20] nvme-tcp: Add DDP offload control path
Aurelien Aptel
aaptel at nvidia.com
Mon Sep 18 11:30:30 PDT 2023
Sagi Grimberg <sagi at grimberg.me> writes:
>> +static int nvme_tcp_offload_socket(struct nvme_tcp_queue *queue)
>> +{
>> + struct ulp_ddp_config config = {.type = ULP_DDP_NVME};
>> + int ret;
>> +
>> + config.nvmeotcp.pfv = NVME_TCP_PFV_1_0;
>> + config.nvmeotcp.cpda = 0;
>> + config.nvmeotcp.dgst =
>> + queue->hdr_digest ? NVME_TCP_HDR_DIGEST_ENABLE : 0;
>> + config.nvmeotcp.dgst |=
>> + queue->data_digest ? NVME_TCP_DATA_DIGEST_ENABLE : 0;
>> + config.nvmeotcp.queue_size = queue->ctrl->ctrl.sqsize + 1;
>> + config.nvmeotcp.queue_id = nvme_tcp_queue_id(queue);
>> + config.nvmeotcp.io_cpu = queue->sock->sk->sk_incoming_cpu;
>
> Please hide io_cpu inside the interface. There is no reason for
> the ulp to assign this. btw, is sk_incoming_cpu stable at this
> point?
We will move the assignment of io_cpu into the interface.
As you suggested, we followed aRFS (and the NVMe/TCP target), which use
sk->sk_incoming_cpu.
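
Roughly, the idea is for the ulp_ddp layer to default the offload CPU
from the socket instead of having the ULP fill it in. A sketch only
(helper and field names here are illustrative, not the actual v15 code):

    /* Sketch: default io_cpu inside the ulp_ddp interface, following
     * aRFS / the NVMe/TCP target by steering offload work to the CPU
     * already handling incoming traffic for this socket.
     */
    static void ulp_ddp_set_io_cpu(struct sock *sk,
                                   struct ulp_ddp_config *config)
    {
            config->io_cpu = READ_ONCE(sk->sk_incoming_cpu);
    }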