[PATCH] nvme-tcp: align I/O cpu with blk-mq mapping
From: Christoph Hellwig <hch at lst.de>
Date: Tue Jun 18 22:30:15 PDT 2024
On Tue, Jun 18, 2024 at 02:03:45PM +0200, Hannes Reinecke wrote:
> Add a new module parameter 'wq_affinity' to spread the I/O
> over all cpus within the blk-mq hctx mapping for the queue.
> This avoids bouncing I/O between cpus when we have less
> hardware queues than cpus.
What is the benefit when setting it? What is the downside? Why do you
think it needs to be conditional?
> + }
> if (wq_unbound)
> queue->io_cpu = WORK_CPU_UNBOUND;
> + else if (wq_affinity) {
Missing curly braces. But this also means the wq_unbound option
is incompatible with your new one.
> + } else
> queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
Overly long line here.