[PATCH v10 13/13] docs: add io_queue flag to isolcpus

Ming Lei tom.leiming at gmail.com
Sat Apr 11 05:52:00 PDT 2026


On Fri, Apr 10, 2026 at 03:31:22PM -0400, Aaron Tomlin wrote:
> On Fri, Apr 10, 2026 at 10:44:15AM +0800, Ming Lei wrote:
> > For unmanaged interrupts, users can set the IRQ affinity to housekeeping
> > CPUs from /proc or the kernel command line.
> > 
> > Why are unmanaged interrupts involved in this patchset?
> 
> Thank you for your continued engagement and for ultimately supporting the
> progression of this series.
> 
> To clarify the handling of unmanaged interrupts: while it is entirely true
> that an administrator could manually configure affinity via the
> "irqaffinity=" boot parameter or via procfs after the fact, this series
> actively addresses unmanaged interrupts as well.
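> (For a given unmanaged interrupt, that manual step would look like
> "echo 0-3 > /proc/irq/<N>/smp_affinity_list", where 0-3 stands in for the
> housekeeping CPUs and <N> is a placeholder IRQ number; the point is that
> this is manual and per-IRQ rather than enforced by the core.)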
> 
> > > CPUs, thereby breaking isolation. By applying the constraint via io_queue
> > > at the block layer, we restrict the hardware queue count and map the
> > > isolated CPUs to the housekeeping queues, ensuring isolation is maintained
> > > regardless of whether the driver uses managed interrupts.
> > > 
> > > Does the above help?
> > 
> > As I mentioned, managed irq already covers it:
> > 
> > - typically the application submits IO from a housekeeping CPU, which is
> >   mapped to one hardware queue whose effective interrupt affinity excludes
> >   isolated CPUs if possible.
> > 
> > I'd suggest sharing some real problems you have found instead of
> > something imaginary.
> 
> If we trace how mpi3mr sets up its ISRs, it relies heavily on the core
> grouping logic:
> 
> mpi3mr_setup_isr
> {
>   unsigned int irq_flags = PCI_IRQ_MSIX
> 
>   struct irq_affinity desc = { .pre_vectors = 1, .post_vectors = 1, }
> 
>   pci_alloc_irq_vectors_affinity(mrioc->pdev, min_vec,
>                                  max_vectors, irq_flags, &desc)
>   {
>     if (flags & PCI_IRQ_MSIX) {
>       // affd != NULL
>       __pci_enable_msix_range(dev, NULL, min_vecs, max_vecs, affd, flags)
>       {
> 
>         for (;;) {
> 
>           msix_capability_init(dev, entries, nvec, affd)
>           {
>             msix_setup_interrupts(dev, entries, nvec, affd)
>             {
>               // affd
>               irq_create_affinity_masks(nvec, affd)
>               {
>                 for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
>                   unsigned int nr_masks, this_vecs = affd->set_size[i]
>                   struct cpumask *result = group_cpus_evenly(this_vecs,
>                                                              &nr_masks)
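>                   // note: group_cpus_evenly() distributes across all
>                   // possible CPUs; nothing at this level consults the
>                   // isolcpus/housekeeping mask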
>                   if (!result) {
>                     kfree(masks)
>                     return NULL
>                   }
> 
>                   for (int j = 0; j < nr_masks; j++)
>                     cpumask_copy(&masks[curvec + j].mask, &result[j])
>                   kfree(result)
> 
>                   curvec += nr_masks
>                   usedvecs += nr_masks
>                 }
>               }
>             }
>           }
>         }
>       }
>     }
>   }
> }
> 
> The critical issue lies in the invocation of group_cpus_evenly(). Without
> this patchset, the core logic lacks the constraints needed to respect CPU
> isolation. It is entirely possible, and indeed happens in practice, for an
> isolated CPU to be assigned to a CPU mask group, as the sketch below
> illustrates.
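> 
> As a minimal userspace model of the even split (my own sketch, not kernel
> code): with 8 CPUs and isolcpus=4-7, four groups of two still cover CPUs
> 4-7, because nothing filters the input mask.
> 
>   #include <stdio.h>
> 
>   /* Toy model of group_cpus_evenly(): split CPUs 0..ncpus-1 into
>    * numgrps contiguous groups, with no isolation policy consulted. */
>   static void toy_group_evenly(int ncpus, int numgrps)
>   {
>           for (int g = 0; g < numgrps; g++) {
>                   int lo = g * ncpus / numgrps;
>                   int hi = (g + 1) * ncpus / numgrps - 1;
>                   printf("group %d: cpus %d-%d\n", g, lo, hi);
>           }
>   }
> 
>   int main(void)
>   {
>           /* 8 CPUs, 4 groups, isolcpus=4-7: groups 2 and 3 land
>            * entirely on the isolated CPUs */
>           toy_group_evenly(8, 4);
>           return 0;
>   }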

Is this a bug report? No, because it doesn't show any trouble from the
user's viewpoint.

Sebastian explains and shows how "isolcpus=managed_irq" works perfectly at
the following link:

https://lore.kernel.org/all/20260401110232.ET5RxZfl@linutronix.de/

You have reviewed it...

What matters is that IO won't interrupt isolated CPUs.
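
For reference, the enforcement lives in irq_do_set_affinity(): for managed
IRQs with "isolcpus=managed_irq", the effective affinity is narrowed to the
housekeeping CPUs unless that would leave no online CPU. Paraphrased from
memory, not the exact source:

  if (irqd_affinity_is_managed(data) &&
      housekeeping_enabled(HK_TYPE_MANAGED_IRQ)) {
          const struct cpumask *hk =
                  housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

          /* drop isolated CPUs from the requested mask */
          cpumask_and(&tmp_mask, mask, hk);
          if (cpumask_intersects(&tmp_mask, cpu_online_mask))
                  mask = &tmp_mask;
          /* else keep the full mask rather than leave the IRQ dead */
  }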

> 
> The newer implementation of irq_create_affinity_masks() introduced by this
> series resolves this. It considers the new CPU mask added to the IRQ
> affinity descriptor. When group_mask_cpus_evenly() is called, this mask is
> evaluated [1], guaranteeing that isolated CPUs are entirely excluded from
> the mask groups.
> 
> [1]: https://lore.kernel.org/lkml/20260401222312.772334-8-atomlin@atomlin.com/
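> 
> Conceptually, the patched call site in irq_create_affinity_masks() becomes
> something like the following (a sketch of the idea only; the exact field
> names and layout in the series may differ):
> 
>   /* restrict grouping to the cpumask carried in the affinity
>    * descriptor; "mask" as a struct irq_affinity field is assumed
>    * here for illustration */
>   const struct cpumask *cpu_mask = affd->mask ? affd->mask
>                                               : cpu_possible_mask;
>   struct cpumask *result = group_mask_cpus_evenly(this_vecs, cpu_mask,
>                                                   &nr_masks);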

Not at all.

The isolated CPUs are still included in each group's CPU mask; please see
patch 9:

https://lore.kernel.org/linux-block/20260401222312.772334-1-atomlin@atomlin.com/T/#m59df0689ef144f5361535ce59c9ed5923d6e21d5



Thanks, 
Ming


