[PATCH v9 09/13] isolation: Introduce io_queue isolcpus type
Waiman Long
longman at redhat.com
Wed Apr 1 12:05:21 PDT 2026
On 4/1/26 8:49 AM, Sebastian Andrzej Siewior wrote:
> On 2026-03-30 18:10:43 [-0400], Aaron Tomlin wrote:
>> From: Daniel Wagner <wagi at kernel.org>
>>
>> Multiqueue drivers spread I/O queues across all CPUs for optimal
>> performance. However, these drivers are not aware of CPU isolation
>> requirements and will distribute queues without considering the isolcpus
>> configuration.
>>
>> Introduce a new isolcpus mask that allows users to define which CPUs
>> should have I/O queues assigned. This is similar to managed_irq, but
>> intended for drivers that do not use the managed IRQ infrastructure.
> I sat down and documented the behaviour of managed_irq at
> https://lore.kernel.org/all/20260401110232.ET5RxZfl@linutronix.de/
>
> Could we please clarify whether we want to keep managed_irq and add this
> on top, or whether managed_irq could be used instead? This adds another
> bit. If networking folks jump in on managed_irq, would they need to
> duplicate this with their own net sub-flag?
Yes, I would very much prefer to reuse an existing HK cpumask such as
managed_irq for this purpose, if possible, rather than adding another
cpumask that we need to manage. Note that we are in the process of
making these housekeeping cpumasks modifiable at run time in the near
future.
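
For illustration, the reuse being suggested would look roughly like the
following in a driver's queue-mapping path. This is a sketch only: the
helper name sketch_queue_cpus() and the fallback policy are hypothetical
and not part of any posted patch; it simply shows a driver consuming the
existing managed_irq housekeeping mask instead of a new io_queue one.

```c
#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/*
 * Sketch: restrict the set of CPUs eligible for I/O queue assignment
 * to the housekeeping CPUs of the managed_irq type, rather than
 * introducing a separate housekeeping mask for io_queue.
 *
 * sketch_queue_cpus() is a hypothetical helper for illustration only.
 */
static void sketch_queue_cpus(struct cpumask *qmask)
{
	const struct cpumask *hk = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

	/* Only spread queues across non-isolated (housekeeping) CPUs. */
	cpumask_and(qmask, cpu_possible_mask, hk);

	/* If isolation would leave no eligible CPU, fall back to all. */
	if (cpumask_empty(qmask))
		cpumask_copy(qmask, cpu_possible_mask);
}
```

On the command line, the patch series presumably extends the isolcpus
boot parameter with a new flag (exact syntax inferred from the subject
line, e.g. isolcpus=io_queue,2-7), whereas reusing the existing mask
would keep users on isolcpus=managed_irq,<cpulist>.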
Cheers,
Longman
More information about the Linux-nvme mailing list