[PATCH v24 01/20] net: Introduce direct data placement tcp offload

Sagi Grimberg sagi at grimberg.me
Tue Apr 30 04:54:13 PDT 2024



On 29/04/2024 14:35, Aurelien Aptel wrote:
> Sagi Grimberg <sagi at grimberg.me> writes:
>> This is not simply a steering rule that can be overwritten at any point?
> No, unlike steering rules, the offload resources cannot be moved to a
> different queue.
>
> In order to move it we will need to re-create the queue and the
> resources assigned to it.  We will consider improving the HW/FW/SW
> to allow this in future versions.

Well, you cannot rely on the fact that the application will be pinned to a
specific cpu core. That may be the case by accident, but you must not and
cannot assume it.

Even today, nvme-tcp has an option to run from an unbound wq context,
where queue->io_cpu is set to WORK_CPU_UNBOUND. What are you going
to do there?
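
For reference, this is roughly what the host driver does today (my
paraphrase of drivers/nvme/host/tcp.c, not a verbatim copy):

	/* nvme_tcp_alloc_queue(): with the wq_unbound modparam there is
	 * no meaningful per-queue cpu at all...
	 */
	if (wq_unbound)
		queue->io_cpu = WORK_CPU_UNBOUND;
	else
		nvme_tcp_set_queue_io_cpu(queue);

	/* ...and io_work simply runs wherever the workqueue decides */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);

So there is already a supported configuration where queue->io_cpu carries
no affinity information whatsoever.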

nvme-tcp may handle the rx side directly from .data_ready() in the future;
what will the offload do in that case?
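
To make that second point concrete, a data_ready-driven rx path would look
something like this (hypothetical sketch, not code from the tree; locking
and error handling omitted):

static void nvme_tcp_data_ready(struct sock *sk)
{
	struct nvme_tcp_queue *queue = sk->sk_user_data;

	/* hypothetically consume the data right here, in softirq
	 * context, on whichever cpu the NIC delivered the packets to,
	 * instead of doing queue_work_on(queue->io_cpu, ...)
	 */
	if (queue && queue->rd_enabled)
		nvme_tcp_try_recv(queue);
}

In that model rx runs on whatever cpu the flow happens to be steered to,
and the offload has no fixed cpu it can bind its resources to.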

>
>> I was simply referring to the fact that you set config->io_cpu from
>> sk->sk_incoming_cpu and then you pass sk (and config) to .sk_add,
>> so why does this assignment need to exist here and not below the
>> interface down at the driver?
> You're correct, it doesn't need to exist *if* we use sk->sk_incoming_cpu,
> which, at the time it is used, is the wrong value.
> The right value for cfg->io_cpu is nvme_queue->io_cpu.

io_cpu may or may not mean anything. You cannot rely on it, nor dictate it.
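
Just to make sure we are talking about the same thing, the flow I was
referring to looks roughly like this (pseudo-code of the series from
memory; ddp_ops stands for whatever ops struct the netdev exposes, and the
exact names may be off):

	struct ulp_ddp_config config = {};

	/* the assignment in question: the ulp picks a cpu up front
	 * (sk->sk_incoming_cpu today, queue->io_cpu per your suggestion)
	 * and bakes it into the offload config...
	 */
	config.io_cpu = queue->io_cpu;

	/* ...and then both the socket and the config are handed to the
	 * device driver through .sk_add()
	 */
	ret = ddp_ops->sk_add(netdev, queue->sock->sk, &config);

Whatever cpu ends up in config.io_cpu, it is a hint at best; the driver
cannot treat it as a stable binding.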

>
> So either:
> - we do that and thus keep cfg->io_cpu.
> - or we remove cfg->io_cpu, and we offload the socket from
>    nvme_tcp_io_work() where the io_cpu is implicitly going to be
>    the current CPU.
What do you mean by offloading the socket from nvme_tcp_io_work()? I do not
understand what that means.



