[PATCH blk-next 1/2] blk-mq-rdma: Delete not-used multi-queue RDMA map queue code
Sagi Grimberg
sagi at grimberg.me
Fri Oct 2 16:20:35 EDT 2020
>> Yes, basically usage of managed affinity caused people to report
>> regressions: not being able to change irq affinity from procfs.
>
> Well, why would they change it? The whole point of the infrastructure
> is that there is a single sane affinity setting for a given setup. Now
> that setting needed some refinement from the original series (e.g. the
> current series about only using housekeeping cpus if cpu isolation is
> in use). But allowing random users to modify affinity is just a recipe
> for a trainwreck.
Well, allowing people to mangle irq affinity settings seems to be a hard
requirement from past discussions.
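
For context, managed affinity is something a driver opts into when it
allocates its vectors; once PCI_IRQ_AFFINITY is set, the core spreads
the vectors itself and rejects userspace writes to
/proc/irq/<N>/smp_affinity for them. A minimal sketch (the
my_drv_setup_irqs() helper and the vector counts are made up for
illustration):

#include <linux/interrupt.h>
#include <linux/pci.h>

static int my_drv_setup_irqs(struct pci_dev *pdev, unsigned int max_vecs)
{
	/* Keep one pre-vector (e.g. an admin queue) out of the spread. */
	struct irq_affinity affd = { .pre_vectors = 1 };

	/*
	 * PCI_IRQ_AFFINITY makes the core spread the remaining vectors
	 * across CPUs and marks them managed: userspace writes to
	 * /proc/irq/<N>/smp_affinity are then rejected.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 2, max_vecs,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}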
> So I think we need to bring this back ASAP, as doing affinity right
> out of the box is an absolute requirement for sane performance without
> all the benchmarketing deep magic.
Well, it's hard to argue that custom irq affinity settings are useless
to everyone and hence should be prevented. I'd expect irq settings to
have a sane default that works out of the box; if someone wants to
change them, they can, but with no guarantee of optimal performance.
But IIRC supporting that had some dependencies on drivers and more
infrastructure to handle dynamic changes...
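
FWIW, the mapping helper this patch removes (block/blk-mq-rdma.c) is
small; roughly, it steered each queue to the CPUs in its completion
vector's managed-affinity mask, falling back to the default spread when
the device didn't report one:

int blk_mq_rdma_map_queues(struct blk_mq_queue_map *map,
		struct ib_device *dev, int first_vec)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < map->nr_queues; queue++) {
		/* Affinity mask of the vector backing this queue. */
		mask = ib_get_vector_affinity(dev, first_vec + queue);
		if (!mask)
			goto fallback;

		/* Map every CPU in the mask to this hw queue. */
		for_each_cpu(cpu, mask)
			map->mq_map[cpu] = map->queue_offset + queue;
	}
	return 0;

fallback:
	return blk_mq_map_queues(map);
}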