[PATCHv2] nvme-pci: allow unmanaged interrupts
Ming Lei
ming.lei at redhat.com
Fri May 10 16:47:26 PDT 2024
On Fri, May 10, 2024 at 10:46:45AM -0700, Keith Busch wrote:
> From: Keith Busch <kbusch at kernel.org>
>
> Some people _really_ want to control their interrupt affinity,
> preferring to sacrifice storage performance for scheduling
> predictability on some other subset of CPUs.
>
> Signed-off-by: Keith Busch <kbusch at kernel.org>
> ---
> Sorry for the rapid-fire v2, and I know some are still against this; I'm
> just getting v2 out because v1 breaks a different use case.
>
> And as far as acceptance goes, this doesn't look like it carries any
> long-term maintenance overhead. It's an opt-in feature, and you're on
> your own if you turn it on.
>
> v1->v2: skip the AFFINITY vector allocation if the parameter is
> provided, instead of trying to make the vector code handle all
> post_vectors.
>
> drivers/nvme/host/pci.c | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 8e0bb9692685d..def1a295284bb 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -63,6 +63,11 @@ MODULE_PARM_DESC(sgl_threshold,
> "Use SGLs when average request segment size is larger or equal to "
> "this size. Use 0 to disable SGLs.");
>
> +static bool managed_irqs = true;
> +module_param(managed_irqs, bool, 0444);
> +MODULE_PARM_DESC(managed_irqs,
> + "set to false for user controlled irq affinity");
> +
> #define NVME_PCI_MIN_QUEUE_SIZE 2
> #define NVME_PCI_MAX_QUEUE_SIZE 4095
> static int io_queue_depth_set(const char *val, const struct kernel_param *kp);
> @@ -456,7 +461,7 @@ static void nvme_pci_map_queues(struct blk_mq_tag_set *set)
> * affinity), so use the regular blk-mq cpu mapping
> */
> map->queue_offset = qoff;
> - if (i != HCTX_TYPE_POLL && offset)
> + if (managed_irqs && i != HCTX_TYPE_POLL && offset)
> blk_mq_pci_map_queues(map, to_pci_dev(dev->dev), offset);
> else
> blk_mq_map_queues(map);
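The hunk that actually skips the affinity allocation isn't quoted above,
but from the v1->v2 note I read it as something along these lines (a
minimal sketch only; the function name nvme_alloc_io_irqs and the exact
flag handling are my assumptions, not the real patch):

#include <linux/interrupt.h>
#include <linux/pci.h>

static bool managed_irqs = true;	/* module parameter from the hunk above */

static int nvme_alloc_io_irqs(struct pci_dev *pdev, unsigned int irq_queues,
			      struct irq_affinity *affd)
{
	/*
	 * With managed_irqs=0, drop PCI_IRQ_AFFINITY so no managed
	 * (kernel-pinned) vectors are created and userspace keeps
	 * control via /proc/irq/<N>/smp_affinity.
	 */
	if (!managed_irqs)
		return pci_alloc_irq_vectors(pdev, 1, irq_queues,
					     PCI_IRQ_ALL_TYPES);

	return pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
					      PCI_IRQ_ALL_TYPES |
					      PCI_IRQ_AFFINITY, affd);
}

So the expected usage is to load nvme with managed_irqs=0 and then pin
each vector by hand through /proc/irq/<N>/smp_affinity.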
Now the queue mapping is built without any input from the irq affinity
that is set up from userspace, and performance could be pretty bad.
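To make that concrete: with managed_irqs=0 every map type now goes
through blk_mq_map_queues(), which just spreads CPUs over the hw queues
by index and knows nothing about where userspace later points each
queue's vector. Roughly (a simplified illustration, not the real
implementation):

/*
 * Simplified illustration of the generic fallback mapping: CPUs are
 * assigned to hw queues purely round-robin, with no knowledge of each
 * queue's irq affinity.
 */
static void example_default_map(unsigned int *mq_map, unsigned int nr_cpus,
				unsigned int nr_queues)
{
	unsigned int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++)
		mq_map[cpu] = cpu % nr_queues;
}

So a CPU can easily end up submitting on a queue whose interrupt the
user has pinned somewhere far away, and every completion then means a
remote interrupt plus cross-CPU wakeups.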
Is there any benefit to using unmanaged irqs in this way?
Thanks,
Ming