[PATCH v6 8/9] blk-mq: use hk cpus only when isolcpus=io_queue is enabled
Ming Lei
ming.lei at redhat.com
Thu May 8 19:38:32 PDT 2025
On Thu, Apr 24, 2025 at 08:19:47PM +0200, Daniel Wagner wrote:
> When isolcpus=io_queue is enabled, all hardware queues should run on
> the housekeeping CPUs only. Thus ignore the affinity mask provided by
> the driver. Also we can't use blk_mq_map_queues because it maps all
> CPUs to the first hctx unless a CPU already matches an hctx's
> affinity, e.g. 8 CPUs with an isolcpus=io_queue,2-3,6-7 config:
>
> queue mapping for /dev/nvme0n1
> hctx0: default 2 3 4 6 7
> hctx1: default 5
> hctx2: default 0
> hctx3: default 1
>
> PCI name is 00:05.0: nvme0n1
> irq 57 affinity 0-1 effective 1 is_managed:0 nvme0q0
> irq 58 affinity 4 effective 4 is_managed:1 nvme0q1
> irq 59 affinity 5 effective 5 is_managed:1 nvme0q2
> irq 60 affinity 0 effective 0 is_managed:1 nvme0q3
> irq 61 affinity 1 effective 1 is_managed:1 nvme0q4
>
> whereas with blk_mq_map_hk_queues we get:
>
> queue mapping for /dev/nvme0n1
> hctx0: default 2 4
> hctx1: default 3 5
> hctx2: default 0 6
> hctx3: default 1 7
>
> PCI name is 00:05.0: nvme0n1
> irq 56 affinity 0-1 effective 1 is_managed:0 nvme0q0
> irq 61 affinity 4 effective 4 is_managed:1 nvme0q1
> irq 62 affinity 5 effective 5 is_managed:1 nvme0q2
> irq 63 affinity 0 effective 0 is_managed:1 nvme0q3
> irq 64 affinity 1 effective 1 is_managed:1 nvme0q4
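
For readers not following the whole series: the idea of the new mapping
is that the housekeeping CPUs are spread evenly over the hctxs first,
and the isolated CPUs are then distributed round-robin on top, so every
hctx keeps at least one housekeeping CPU. A minimal userspace model of
that idea (not the kernel code; the real grouping follows the irq
affinity, so the exact cpu->hctx pairs above differ):

	#include <stdio.h>
	#include <stdbool.h>

	#define NR_CPUS		8
	#define NR_QUEUES	4

	/* models isolcpus=io_queue,2-3,6-7 */
	static bool cpu_is_isolated(int cpu)
	{
		return cpu == 2 || cpu == 3 || cpu == 6 || cpu == 7;
	}

	int main(void)
	{
		int map[NR_CPUS], q, cpu;

		/* 1) spread the housekeeping CPUs evenly over the queues */
		q = 0;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (!cpu_is_isolated(cpu))
				map[cpu] = q++ % NR_QUEUES;

		/* 2) distribute the isolated CPUs round-robin on top */
		q = 0;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (cpu_is_isolated(cpu))
				map[cpu] = q++ % NR_QUEUES;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			printf("cpu%d -> hctx%d (%s)\n", cpu, map[cpu],
			       cpu_is_isolated(cpu) ? "isolated" : "housekeeping");
		return 0;
	}

Every hctx ends up with exactly one housekeeping CPU here, which is the
invariant the patch needs: no hctx is served by isolated CPUs only.
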
>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> Reviewed-by: Hannes Reinecke <hare at suse.de>
> Signed-off-by: Daniel Wagner <wagi at kernel.org>
> ---
> block/blk-mq-cpumap.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 67 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 6e6b3e989a5676186b5a31296a1b94b7602f1542..2d678d1db2b5196fc2b2ce5678fdb0cb6bad26e0 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -22,8 +22,8 @@ static unsigned int blk_mq_num_queues(const struct cpumask *mask,
> {
> unsigned int num;
>
> - if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
> - mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
> + if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
> + mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
Here both types can be considered for figuring out nr_hw_queues, so an
isolcpus=managed_irq setup keeps working too:

	if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
		mask = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
	else if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
		mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
>
> num = cpumask_weight(mask);
> return min_not_zero(num, max_queues);
> @@ -61,11 +61,73 @@ unsigned int blk_mq_num_online_queues(unsigned int max_queues)
> }
> EXPORT_SYMBOL_GPL(blk_mq_num_online_queues);
>
> +/*
> + * blk_mq_map_hk_queues - Create housekeeping CPU to hardware queue mapping
> + * @qmap: CPU to hardware queue map
> + *
> + * Create a housekeeping CPU to hardware queue mapping in @qmap. If the
> + * isolcpus feature is enabled and blk_mq_map_hk_queues returns true,
> + * @qmap contains a valid configuration honoring the io_queue
> + * configuration. If the isolcpus feature is disabled, this function
> + * returns false.
> + */
> +static bool blk_mq_map_hk_queues(struct blk_mq_queue_map *qmap)
> +{
> + struct cpumask *hk_masks;
> + cpumask_var_t isol_mask;
> + unsigned int queue, cpu, nr_masks;
> +
> + if (!housekeeping_enabled(HK_TYPE_IO_QUEUE))
> + return false;
It could be more readable to move the above check to the caller.
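
Something like the following (untested sketch; assuming the caller is
blk_mq_map_queues() and the bool return is kept for the allocation
failure case):

	/* in blk_mq_map_queues(), before the normal spreading logic */
	if (housekeeping_enabled(HK_TYPE_IO_QUEUE) &&
	    blk_mq_map_hk_queues(qmap))
		return;
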
Thanks,
Ming