[PATCH v2] nvme/tcp: Add support to set the tcp worker cpu affinity
Li Feng
fengli at smartx.com
Sun Apr 16 20:31:08 PDT 2023
> On Apr 16, 2023, at 5:06 AM, David Laight <David.Laight at ACULAB.COM> wrote:
>
> From: Li Feng
>> Sent: 14 April 2023 10:35
>>>
>>> On 4/13/23 15:29, Li Feng wrote:
>>>> The default worker affinity policy is to use all online CPUs, i.e. CPU 0
>>>> through N-1. However, when some of those CPUs are busy with other jobs,
>>>> nvme-tcp performance suffers.
>>>>
>>>> This patch adds a module parameter to set the CPU affinity of the nvme-tcp
>>>> socket worker threads. The parameter is a comma-separated list of CPU
>>>> numbers. The list is parsed and the resulting cpumask is used to set the
>>>> affinity of the socket worker threads. If the list is empty or parsing
>>>> fails, the default affinity is used.
>>>>
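For reference, the parsing described above boils down to something like the
following sketch. cpulist_parse() and module_param_string() are the standard
in-kernel helpers for this; the parameter name wq_affinity_list is
illustrative, not necessarily what the patch uses:

#include <linux/module.h>
#include <linux/cpumask.h>

static char wq_affinity_list[256];
module_param_string(wq_affinity_list, wq_affinity_list,
		    sizeof(wq_affinity_list), 0444);
MODULE_PARM_DESC(wq_affinity_list,
		 "CPU list (e.g. \"0-3,8\") for socket worker affinity");

static struct cpumask wq_cpumask;

static void wq_affinity_init(void)
{
	/* Fall back to all online CPUs on an empty or invalid list. */
	if (!wq_affinity_list[0] ||
	    cpulist_parse(wq_affinity_list, &wq_cpumask) ||
	    cpumask_empty(&wq_cpumask))
		cpumask_copy(&wq_cpumask, cpu_online_mask);
}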
> ...
>>> I am not in favour of this.
>>> NVMe-over-Fabrics has _virtual_ queues, which really have no
>>> relationship to the underlying hardware.
>>> So trying to be clever here by tying queues to CPUs sort of works if
>>> you have one subsystem to talk to, but if you have several, where each
>>> exposes a _different_ number of queues, you end up with quite a
>>> suboptimal setting (i.e. you rely on the resulting CPU sets to overlap,
>>> but there is no guarantee that they do).
>>
>> Thanks for your comment.
>> The current io-queue/CPU mapping method is not optimal.
>> It is simplistic: it just walks the online CPUs from 0 upwards, and it is
>> not configurable (see the sketch below).
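For context, the existing behaviour in nvme_tcp_set_queue_io_cpu() amounts to
roughly the following: each queue's io_work is pinned to the n-th online CPU,
wrapping around, with no way to exclude busy CPUs or prefer a NUMA node
(simplified sketch, not the verbatim driver code):

	/* n is the queue's index within its queue type */
	queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
	...
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);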
>
> Module parameters suck, and passing the buck to the user
> when you can't decide how to do something isn't a good idea either.
>
> If the system is busy, pinning threads to CPUs is very hard to
> get right.
>
> It can be better to set the threads to run at the lowest RT
> priority - so they have priority over all 'normal' threads -
> and also have a very sticky (but not fixed) cpu affinity, so
> that all such threads tend to get spread out by the scheduler.
> This all works best if the number of RT threads isn't greater
> than the number of physical CPUs.
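For reference, the "lowest RT priority" part of this suggestion maps onto an
existing kernel helper; a minimal sketch, where worker_task is a hypothetical
task_struct pointer for the socket worker thread:

#include <linux/sched.h>

/* Run the worker at SCHED_FIFO priority 1 (the lowest RT priority), so it
 * preempts normal tasks while leaving CPU placement to the scheduler. */
static void make_worker_low_rt(struct task_struct *worker_task)
{
	sched_set_fifo_low(worker_task);
}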
>
> David
>
Hi David,
RT priority can't solve the cross-NUMA access issue.
If the user doesn't know how to configure this affinity, they can simply keep
the default.
Cross-NUMA access is not an obvious issue on x86_64, but it is a significant
one on aarch64 with multiple NUMA nodes (see the sketch below).
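A minimal sketch of the kind of NUMA-aware placement meant here, using the
real helpers dev_to_node() and cpumask_of_node(), with a hypothetical 'dev'
standing in for the transport's underlying device:

#include <linux/device.h>
#include <linux/numa.h>
#include <linux/topology.h>
#include <linux/cpumask.h>

/* Prefer CPUs on the device's NUMA node; fall back to all online CPUs
 * when the node is unknown or has no CPUs. */
static const struct cpumask *pick_worker_cpus(struct device *dev)
{
	int node = dev_to_node(dev);

	if (node != NUMA_NO_NODE && !cpumask_empty(cpumask_of_node(node)))
		return cpumask_of_node(node);
	return cpu_online_mask;
}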
Thanks.