[PATCH v15 06/20] nvme-tcp: Add DDP data-path

Sagi Grimberg sagi at grimberg.me
Wed Sep 20 03:11:47 PDT 2023


>> Can you please explain why? sk_incoming_cpu is updated from the network
>> recv path while you are arguing that the timing matters before you even
>> send the pdu. I don't understand why should that matter.
> 
> Sorry, the original answer was misleading.
> The problem is not about the timing but about which CPU the code is
> running on.  If we move setup_ddp() earlier as you suggested, it can
> end up running on the wrong CPU.

Please define wrong CPU.

> Calling setup_ddp() in nvme_tcp_setup_cmd_pdu() will not guarantee we
> are running on queue->io_cpu.
> It's only during nvme_tcp_queue_request() that we either know we are
> running on queue->io_cpu, or dispatch the request to run on queue->io_cpu.

But sk_incoming_cpu is updated with the CPU that is reading the
socket, so in fact it should converge to the io_cpu - shouldn't it?
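
The update itself happens in the protocol receive handlers (e.g.
tcp_v4_rcv()); roughly, from include/net/sock.h (paraphrased from the
mainline tree, may differ slightly across versions):

    static inline void sk_incoming_cpu_update(struct sock *sk)
    {
            int cpu = raw_smp_processor_id();

            /* Record the CPU currently processing this socket's receive
             * path; users read it back as a steering hint.
             */
            if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu))
                    WRITE_ONCE(sk->sk_incoming_cpu, cpu);
    }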

Can you please provide a concrete explanation of the performance
degradation?

> As it is only a performance optimization for the unlikely case, we can
> move it to nvme_tcp_setup_cmd_pdu() as you suggested and reconsider it
> in the future if needed.

Would still like to understand this case.


