[PATCHv2] nvme-pci: allow unmanaged interrupts
Keith Busch
kbusch at meta.com
Fri May 10 10:46:45 PDT 2024
From: Keith Busch <kbusch at kernel.org>
Some people _really_ want to control their interrupt affinity,
preferring to sacrifice storage performance for scheduling
predictability on some other subset of CPUs.
Signed-off-by: Keith Busch <kbusch at kernel.org>
---
Sorry for the rapid-fire v2, and I know some are still against this; I'm
just getting v2 out because v1 breaks a different use case.
And as far as acceptance goes, this doesn't look like it carries any
long-term maintenance overhead. It's an opt-in feature, and you're on
your own if you turn it on.
v1->v2: skip the AFFINITY vector allocation if the parameter is
provided instead of trying to make the vector code handle all post_vectors.
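For anyone wanting to try this out, a rough usage sketch of the opt-in
path (the parameter name matches this patch; the IRQ number below is
purely hypothetical — look up the real ones in /proc/interrupts):

```sh
# Load nvme with managed affinity disabled (parameter added by this patch).
# 0444 permissions mean it is load-time only, not writable at runtime.
modprobe nvme managed_irqs=0

# Find the controller's queue vectors; numbers vary per system.
grep nvme /proc/interrupts

# Pin one nvme I/O queue interrupt to CPUs 0-3 (irq 45 is an example).
echo f > /proc/irq/45/smp_affinity
```

With managed_irqs=0 the vectors are plain (unmanaged), so the usual
smp_affinity interface applies and the scheduler-sensitive CPUs can be
kept clear of storage interrupts.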
drivers/nvme/host/pci.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8e0bb9692685d..def1a295284bb 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -63,6 +63,11 @@ MODULE_PARM_DESC(sgl_threshold,
"Use SGLs when average request segment size is larger or equal to "
"this size. Use 0 to disable SGLs.");
+static bool managed_irqs = true;
+module_param(managed_irqs, bool, 0444);
+MODULE_PARM_DESC(managed_irqs,
+ "set to false for user controlled irq affinity");
+
#define NVME_PCI_MIN_QUEUE_SIZE 2
#define NVME_PCI_MAX_QUEUE_SIZE 4095
static int io_queue_depth_set(const char *val, const struct kernel_param *kp);
@@ -456,7 +461,7 @@ static void nvme_pci_map_queues(struct blk_mq_tag_set *set)
* affinity), so use the regular blk-mq cpu mapping
*/
map->queue_offset = qoff;
- if (i != HCTX_TYPE_POLL && offset)
+ if (managed_irqs && i != HCTX_TYPE_POLL && offset)
blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
else
blk_mq_map_queues(map);
@@ -2218,6 +2223,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
.priv = dev,
};
unsigned int irq_queues, poll_queues;
+ int ret;
/*
* Poll queues don't need interrupts, but we need at least one I/O queue
@@ -2241,8 +2247,15 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
irq_queues = 1;
if (!(dev->ctrl.quirks & NVME_QUIRK_SINGLE_VECTOR))
irq_queues += (nr_io_queues - poll_queues);
- return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
+
+ if (managed_irqs)
+ return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
+
+ ret = pci_alloc_irq_vectors(pdev, 1, irq_queues, PCI_IRQ_ALL_TYPES);
+ if (ret > 0)
+ nvme_calc_irq_sets(&affd, ret - 1);
+ return ret;
}
static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
--
2.43.0