[PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity
Sagi Grimberg
sagi at grimberg.me
Mon Apr 17 06:45:37 PDT 2023
Hey Li,
> The default worker affinity policy is to use all online cpus, e.g. from 0
> to N-1. However, some cpus are busy with other jobs, and then nvme-tcp
> suffers poor performance.
>
> This patch adds a module parameter to set the cpu affinity for the nvme-tcp
> socket worker threads. The parameter is a comma separated list of CPU
> numbers. The list is parsed and the resulting cpumask is used to set the
> affinity of the socket worker threads. If the list is empty or the
> parsing fails, the default affinity is used.
I can see how this may benefit a specific set of workloads, but I have a
few issues with this.
- This is exposing a user interface for something that is really
internal to the driver.
- This is something that can be misleading and could be tricky to get
right; my concern is that this would only benefit a very niche case.
- If the setting should exist, it should not be global.
- I prefer not to introduce new modparams.
- I'd prefer to find a way to support your use-case without introducing
a config knob for it.
- It is not backed by performance measurements, and more importantly
does not cover any potential regressions in key metrics (bw/iops/lat),
or the lack thereof.