[PATCH v2] nvme/tcp: Add support to set the tcp worker cpu affinity

Li Feng lifeng1519 at gmail.com
Mon Apr 17 00:50:46 PDT 2023



> On Apr 17, 2023, at 3:37 PM, Ming Lei <ming.lei at redhat.com> wrote:
> 
> On Thu, Apr 13, 2023 at 09:29:41PM +0800, Li Feng wrote:
>> The default worker affinity policy uses all online CPUs, e.g. from 0
>> to N-1. However, some of those CPUs may be busy with other jobs, and then
>> nvme-tcp performance suffers.
> 
> Can you explain in detail how nvme-tcp performs worse in this situation?
> 
> If some of the CPUs are known to be busy, you can submit the nvme-tcp IO
> jobs on other, non-busy CPUs via taskset, or the scheduler is supposed to
> choose proper CPUs for you. And usually an nvme-tcp device should be
> saturated with a limited IO depth or number of jobs/CPUs.
> 
> 
> Thanks, 
> Ming
> 

Taskset can't work on nvme-tcp io-queues, because the worker CPU is decided at the nvme-tcp 'connect'
stage, not at the IO submission stage. Assume there is only one io-queue: its worker is bound to CPU0,
no matter which CPUs the IO jobs run on.
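
For reference, here is a condensed sketch of the current mainline behaviour
(paraphrased from drivers/nvme/host/tcp.c; the exact code differs between
kernel versions): queue->io_cpu is chosen once from cpu_online_mask when the
queue is set up during connect, and every later submission punts the actual
TCP send to that fixed CPU via queue_work_on(), so the affinity of the
submitting task has no effect on where the network work runs.

/*
 * Condensed sketch, not a verbatim copy: details differ across kernel
 * versions, but the mechanism is the same.
 */
static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
{
	int n = nvme_tcp_queue_id(queue) - 1;

	/* Picked round-robin over all online CPUs, once per queue, at
	 * connect time; never re-evaluated afterwards. */
	queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
}

static void nvme_tcp_queue_request(struct nvme_tcp_request *req)
{
	struct nvme_tcp_queue *queue = req->queue;

	list_add_tail(&req->entry, &queue->send_list);

	/* The TCP send is done by io_work, which always runs on
	 * queue->io_cpu -- not on the CPU the submitter is bound to. */
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}

That is why this patch adds a way to restrict which CPUs io_cpu may be picked
from, instead of always walking cpu_online_mask.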




