[PATCH] nvme-tcp: align I/O cpu with blk-mq mapping
Sagi Grimberg
sagi at grimberg.me
Wed Jun 19 01:57:58 PDT 2024
On 19/06/2024 8:30, Christoph Hellwig wrote:
> On Tue, Jun 18, 2024 at 02:03:45PM +0200, Hannes Reinecke wrote:
>> Add a new module parameter 'wq_affinity' to spread the I/O
>> over all cpus within the blk-mq hctx mapping for the queue.
>> This avoids bouncing I/O between cpus when we have less
>> hardware queues than cpus.
> What is the benefit when setting it? What is the downside? Why do you
> think it needs to be conditional?
Not entirely sure.
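If I understand the description correctly, the intent is something like the
sketch below. This is just one possible reading of it, not the actual patch;
the extra hctx argument, the per-controller io_cpu_rotor counter, and the
function shape are made up for illustration:

/*
 * One reading of "spread the I/O over all cpus within the blk-mq hctx
 * mapping": pick io_cpu from the cpus that blk-mq mapped to this
 * queue's hctx, instead of one fixed cpu per queue.
 */
static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue,
				      struct blk_mq_hw_ctx *hctx)
{
	unsigned int nr = cpumask_weight(hctx->cpumask);
	unsigned int sel, i = 0, cpu;

	if (!nr) {
		queue->io_cpu = WORK_CPU_UNBOUND;
		return;
	}

	/* rotate inside the hctx cpumask (rotor is hypothetical) */
	sel = queue->ctrl->io_cpu_rotor++ % nr;
	for_each_cpu(cpu, hctx->cpumask)
		if (i++ == sel)
			break;
	queue->io_cpu = cpu;
}

Is that roughly what the patch is doing, or is it something else?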
Hannes,
Can you please show what the hctx<->cpu mappings look like before and after
this patch? Please use different queue counts so we can see the pattern.
Can you also share performance comparisons for low and high queue counts?
I'm not entirely clear on what exactly "spreads the I/O over all cpus" means.
Every nvme-tcp queue has an io context that does the network sends and
receives (unless the send is done directly from .queue_rq() context when
possible). Does this mean that a queue's io context will now run from
multiple cpus at the same time?
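For reference, this is roughly how the io context is driven today
(simplified; the wrapper name is mine, the driver calls queue_work_on()
directly from .queue_rq(), the socket callbacks, etc.):

/* everything that needs the io context kicks one work item, always on
 * the single cpu recorded in queue->io_cpu at connect time */
static inline void nvme_tcp_kick_io_work(struct nvme_tcp_queue *queue)
{
	queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
}

So today the io_work for a queue always runs on one cpu, which is why I'm
asking what actually changes here.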