[PATCH v5 4/8] blk-mq: introduce blk_mq_map_hw_queues
John Garry
john.g.garry at oracle.com
Thu Nov 21 01:08:02 PST 2024
On 15/11/2024 16:37, Daniel Wagner wrote:
> blk_mq_pci_map_queues and blk_mq_virtio_map_queues will create a CPU to
> hardware queue mapping based on affinity information. These two functions
> share common code and differ only in how the affinity information is
> retrieved. They are also located in the block subsystem, where they don't
> really fit: they are virtio and pci subsystem specific.
>
> Thus introduce a generic mapping function which uses the
> irq_get_affinity callback from bus_type.
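For context, the driver-side conversion this enables is mechanical. A
minimal sketch of a PCI driver call site (the 'set', 'pdev' and 'offset'
names are illustrative, following typical blk_mq_tag_set usage, not
taken from this patch):

	/* before: PCI-specific helper */
	blk_mq_pci_map_queues(&set->map[HCTX_TYPE_DEFAULT], pdev, offset);

	/* after: generic helper; affinity comes via dev->bus->irq_get_affinity */
	blk_mq_map_hw_queues(&set->map[HCTX_TYPE_DEFAULT], &pdev->dev, offset);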
>
> Original idea from Ming Lei <ming.lei at redhat.com>
>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> Reviewed-by: Hannes Reinecke <hare at suse.de>
> Signed-off-by: Daniel Wagner <wagi at kernel.org>
Just a couple of styling queries/comments, below:
Reviewed-by: John Garry <john.g.garry at oracle.com>
> ---
> block/blk-mq-cpumap.c | 37 +++++++++++++++++++++++++++++++++++++
> include/linux/blk-mq.h | 2 ++
> 2 files changed, 39 insertions(+)
>
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 9638b25fd52124f0173e968ebdca5f1fe0b42ad9..0b65ffa5a183cc8e6697df4a16748eff15bfa8b3 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -11,6 +11,7 @@
> #include <linux/smp.h>
> #include <linux/cpu.h>
> #include <linux/group_cpus.h>
> +#include <linux/device/bus.h>
>
> #include "blk.h"
> #include "blk-mq.h"
> @@ -54,3 +55,39 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
>
> return NUMA_NO_NODE;
> }
> +
> +/**
> + * blk_mq_map_hw_queues - Create CPU to hardware queue mapping
> + * @qmap: CPU to hardware queue map.
> + * @dev: The device to map queues.
> + * @offset: Queue offset to use for the device.
supernit: maybe no '.'
> + *
> + * Create a CPU to hardware queue mapping in @qmap. The struct bus_type
> + * irq_get_affinity callback will be used to retrieve the affinity.
> + */
> +void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
> + struct device *dev, unsigned int offset)
> +
> +{
> + const struct cpumask *mask;
> + unsigned int queue, cpu;
> +
> + if (!dev->bus->irq_get_affinity)
> + goto fallback;
> +
> + for (queue = 0; queue < qmap->nr_queues; queue++) {
> + mask = dev->bus->irq_get_affinity(dev, queue + offset);
> + if (!mask)
> + goto fallback;
> +
> + for_each_cpu(cpu, mask)
> + qmap->mq_map[cpu] = qmap->queue_offset + queue;
> + }
> +
> + return;
> +
> +fallback:
> + WARN_ON_ONCE(qmap->nr_queues > 1);
> + blk_mq_clear_mq_map(qmap);
> +}
> +EXPORT_SYMBOL_GPL(blk_mq_map_hw_queues);
is there still a blank line at the bottom of the file?
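To make the mechanism concrete: a bus opts in by implementing the
irq_get_affinity callback in its struct bus_type. A sketch of roughly
what the PCI side of this series does (exact naming may differ from the
actual patches):

	static const struct cpumask *
	pci_device_irq_get_affinity(struct device *dev, unsigned int irq_vec)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		/* reuse the existing PCI affinity lookup */
		return pci_irq_get_affinity(pdev, irq_vec);
	}

wired up via '.irq_get_affinity = pci_device_irq_get_affinity,' in
pci_bus_type. With that in place, blk_mq_map_hw_queues() above walks
each hardware queue, asks the bus for the affinity mask of the matching
IRQ vector, and falls back to blk_mq_clear_mq_map(), which points every
CPU at the first queue, when no affinity information is available.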
> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> index 2035fad3131fb60781957095ce8a3a941dd104be..05f544a9ed873d2f96d72c18e124c94146f6943f 100644
> --- a/include/linux/blk-mq.h
> +++ b/include/linux/blk-mq.h
> @@ -923,6 +923,8 @@ void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
> void blk_freeze_queue_start_non_owner(struct request_queue *q);
>
> void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
> +void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
> + struct device *dev, unsigned int offset);
> void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
>
> void blk_mq_quiesce_queue_nowait(struct request_queue *q);
>