[PATCH V3] nvme-pci: allow unmanaged interrupts
Ming Lei
ming.lei at redhat.com
Tue Jul 2 03:41:12 PDT 2024
From: Keith Busch <kbusch at kernel.org>
People _really_ want to control their interrupt affinity in some
cases, such as OpenShift with a Performance Profile, where each
irq's affinity is completely specified from userspace. It turns out
that 'isolcpus=managed_irqs' isn't enough.

Add a module parameter to allow unmanaged interrupts, as some SCSI
drivers already do.
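For illustration, with managed_irqs=0 an administrator can steer each
vector from userspace through the usual procfs interface. A minimal
sketch (pin_irq.c is a hypothetical helper, not part of this patch;
the irq number is assumed to have been looked up in /proc/interrupts,
and the kernel rejects this write for a managed irq):

/*
 * pin_irq.c - pin one (unmanaged) nvme vector to a cpu list from
 * userspace, e.g. "./pin_irq 78 2-3". Needs root privileges.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	FILE *f;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <irq> <cpulist>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity_list",
			argv[1]);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", argv[2]);
	if (fclose(f)) {	/* procfs reports a rejected write on close */
		perror(path);
		return 1;
	}
	return 0;
}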
Cc: Marcelo Tosatti <mtosatti at redhat.com>
Signed-off-by: Keith Busch <kbusch at kernel.org>
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
v2->v3:
- rebase on for-next
- add OpenShift use case
v1->v2: skip the AFFINITY vector allocation if the parameter is
provided, instead of trying to make the vector code handle all
post_vectors.
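(For reference, the irq_affinity descriptor the v1->v2 note refers to
is the existing one at the top of nvme_setup_irqs() - quoted here from
pci.c for context, not part of this diff; in the unmanaged path it is
only used to size the queue sets via nvme_calc_irq_sets():)

	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* vector 0 is kept for the admin queue */
		.calc_sets	= nvme_calc_irq_sets,
		.priv		= dev,
	};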
drivers/nvme/host/pci.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 5d8035218de9..a39c99c9b64d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -63,6 +63,11 @@ MODULE_PARM_DESC(sgl_threshold,
 		"Use SGLs when average request segment size is larger or equal to "
 		"this size. Use 0 to disable SGLs.");
 
+static bool managed_irqs = true;
+module_param(managed_irqs, bool, 0444);
+MODULE_PARM_DESC(managed_irqs,
+		"set to false for user controlled irq affinity");
+
 #define NVME_PCI_MIN_QUEUE_SIZE 2
 #define NVME_PCI_MAX_QUEUE_SIZE 4095
 static int io_queue_depth_set(const char *val, const struct kernel_param *kp);
@@ -456,7 +461,7 @@ static void nvme_pci_map_queues(struct blk_mq_tag_set *set)
 			 * affinity), so use the regular blk-mq cpu mapping
 			 */
 			map->queue_offset = qoff;
-			if (i != HCTX_TYPE_POLL && offset)
+			if (managed_irqs && i != HCTX_TYPE_POLL && offset)
 				blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
 			else
 				blk_mq_map_queues(map);
@@ -2226,6 +2231,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 	};
 	unsigned int irq_queues, poll_queues;
 	unsigned int flags = PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY;
+	int ret;
 
 	/*
 	 * Poll queues don't need interrupts, but we need at least one I/O queue
@@ -2251,8 +2257,16 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
 	irq_queues += (nr_io_queues - poll_queues);
 	if (dev->ctrl.quirks & NVME_QUIRK_BROKEN_MSI)
 		flags &= ~PCI_IRQ_MSI;
-	return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues, flags,
+
+	if (managed_irqs)
+		return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues, flags,
 			      &affd);
+
+	flags &= ~PCI_IRQ_AFFINITY;
+	ret = pci_alloc_irq_vectors(pdev, 1, irq_queues, flags);
+	if (ret > 0)
+		nvme_calc_irq_sets(&affd, ret - 1);
+	return ret;
 }
 
 static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
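Usage note: the parameter is registered with 0444 permissions, so it
is read-only through sysfs and has to be set at load time, e.g.
"modprobe nvme managed_irqs=0" or nvme.managed_irqs=0 on the kernel
command line. The unmanaged path still calls nvme_calc_irq_sets() with
the granted vectors minus the admin vector, so the I/O queue sets are
sized from what was actually allocated.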
--
2.44.0