[PATCH 2/4] nvme-tcp: align I/O cpu with blk-mq mapping

Sagi Grimberg sagi at grimberg.me
Wed Jul 3 07:19:39 PDT 2024



On 03/07/2024 16:50, Hannes Reinecke wrote:
> When 'wq_unbound' is selected we should select the
> first CPU from a given blk-mq hctx mapping to queue
> the tcp workqueue item. With this we can instruct the
> workqueue code to keep the I/O affinity and avoid
> a performance penalty.
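
For reference, the description above amounts to something like the sketch
below; the helper name and the use of the default hctx map are my own,
not necessarily what the patch actually implements:

static int nvme_tcp_first_mapped_cpu(struct blk_mq_tag_set *set, int qid)
{
        struct blk_mq_queue_map *map = &set->map[HCTX_TYPE_DEFAULT];
        int cpu;

        /* pick the first online CPU that blk-mq maps onto hctx 'qid' */
        for_each_online_cpu(cpu)
                if (map->mq_map[cpu] == qid)
                        return cpu;

        /* empty mapping: leave the placement to the workqueue */
        return WORK_CPU_UNBOUND;
}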

wq_unbound is designed to keep io_cpu UNBOUND; my recollection is that
the person who introduced it was trying to keep the io_cpu on a specific
NUMA node, or on a subset of CPUs within a NUMA node. So he uses that
and tinkers with the wq cpumask via sysfs.
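
IOW, today the wq_unbound path deliberately leaves the placement to the
workqueue itself, roughly like this (simplified and from memory, the
exact code may differ):

        if (wq_unbound)
                /* placement governed by the wq cpumask set via sysfs */
                queue->io_cpu = WORK_CPU_UNBOUND;
        else
                queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask,
                                                  -1, false);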

I don't see why you are tying this to wq_unbound in the first place.

>
> One should switch to the 'cpu' workqueue affinity scope
> to get the full advantage of this by issuing:
>
> echo cpu > /sys/devices/virtual/workqueue/nvme_tcp_wq_*/affinity_scope

Please quantify the improvement.


