[PATCH 1/6] blk-mq: introduce blk_mq_hctx_map_queues
Bjorn Helgaas
helgaas at kernel.org
Fri Sep 13 09:26:54 PDT 2024
On Fri, Sep 13, 2024 at 09:41:59AM +0200, Daniel Wagner wrote:
> From: Ming Lei <ming.lei at redhat.com>
>
> blk_mq_pci_map_queues and blk_mq_virtio_map_queues will create a CPU to
> hardware queue mapping based on affinity information. These two
> functions share code which only differs in how the affinity information
> is retrieved. Also, the hisi_sas driver open codes the same loop.
>
> Thus introduce a new helper function for creating these mappings which
> takes a callback function for fetching the affinity mask. Also
> introduce common helper functions for PCI and virtio devices to
> retrieve affinity masks.
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index e3a49f66982d..84f9c16b813b 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -6370,6 +6370,26 @@ int pci_set_vga_state(struct pci_dev *dev, bool decode,
>  	return 0;
>  }
>
> +#ifdef CONFIG_BLK_MQ_PCI
> +/**
> + * pci_get_blk_mq_affinity - get affinity mask for a queue of a PCI device
> + * @dev_data: Pointer to struct pci_dev.
> + * @offset: Offset to use for the PCI IRQ vector
> + * @queue: Queue index
> + *
> + * This function returns the affinity mask for a queue of a PCI device.
> + * It is usually used as a callback for blk_mq_hctx_map_queues().
> + */
> +const struct cpumask *pci_get_blk_mq_affinity(void *dev_data, int offset,
> +					      int queue)
> +{
> +	struct pci_dev *pdev = dev_data;
> +
> +	return pci_irq_get_affinity(pdev, offset + queue);
> +}
> +EXPORT_SYMBOL_GPL(pci_get_blk_mq_affinity);
> +#endif
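
For context, the generic helper described in the commit message
presumably ends up looking like the existing blk_mq_pci_map_queues()
loop with the affinity lookup replaced by the callback. A minimal
sketch, assuming the callback signature implied by
pci_get_blk_mq_affinity() above (not copied from the actual patch
body):

#include <linux/blk-mq.h>
#include <linux/cpumask.h>

/* Assumed callback type; the real patch may name/shape this differently. */
typedef const struct cpumask *(get_queue_affinity_fn)(void *dev_data,
						      int offset, int queue);

void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap, void *dev_data,
			    int offset, get_queue_affinity_fn *get_affinity)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		/* Bus/driver specific lookup, e.g. pci_irq_get_affinity(). */
		mask = get_affinity(dev_data, offset, queue);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}
	return;

fallback:
	/* blk_mq_clear_mq_map() is the internal block/blk-mq.h helper. */
	WARN_ON_ONCE(qmap->nr_queues > 1);
	blk_mq_clear_mq_map(qmap);
}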
IMO this doesn't really fit well in drivers/pci since it doesn't add
any PCI-specific knowledge or require any PCI core internals, and the
parameters are blk-specific. I don't object to the code, but it seems
like it could go somewhere in block/?
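
To illustrate: apart from the device handle, everything the caller
passes is a blk-mq concept. A driver's ->map_queues() would end up
doing something like the following (driver name and fields are made up
for illustration):

static void foo_map_queues(struct blk_mq_tag_set *set)
{
	struct foo_ctrl *ctrl = set->driver_data;	/* hypothetical driver state */

	blk_mq_hctx_map_queues(&set->map[HCTX_TYPE_DEFAULT],
			       ctrl->pdev, 0, pci_get_blk_mq_affinity);
}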
Bjorn