[PATCH 1/2] blk-mq: don't deactivate hctx if the device doesn't use managed irq

Ming Lei <ming.lei@redhat.com>
Tue Jun 29 00:49:50 PDT 2021


hctx is deactivated when all CPUs in hctx->cpumask become offline: all
requests originating from this hctx are drained and new allocations are
moved to an active hctx. This is done to avoid inflight IO while the
managed irq is shut down.
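
The "moving new allocations to an active hctx" part is handled in the tag
allocator: once BLK_MQ_S_INACTIVE is set, blk_mq_get_tag() gives back any
tag it just grabbed so the caller retries on an active hctx. From memory,
the mainline check looks roughly like this (sketch for context only, not
part of this patch):

	/*
	 * After a tag has been found: give it back and fail the
	 * allocation if the hctx went inactive, so that the caller
	 * retries on an active hctx.
	 */
	if (unlikely(test_bit(BLK_MQ_S_INACTIVE, &data->hctx->state))) {
		blk_mq_put_tag(tags, data->ctx, tag + tag_offset);
		return BLK_MQ_NO_TAG;
	}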

Some drivers (nvme fc, rdma, tcp, loop) don't use managed irqs, so they
don't need to deactivate hctx. They are also the only users of
blk_mq_alloc_request_hctx(), which is used for connecting io queues, and
their requirement is that the connect request can be submitted via one
specified hctx even though every CPU in that hctx's cpumask may have
become offline.
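
With the new flag, such a driver only has to advertise it when setting up
its tag set. The driver-side change is left to the next patch of this
series, but would look roughly like the following nvme-loop sketch (field
names quoted from memory, illustrative only):

	/*
	 * Sketch: mark the io queue tag set as not backed by managed
	 * irqs, so blk-mq never needs to deactivate its hctxs.
	 */
	ctrl->tag_set.ops = &nvme_loop_mq_ops;
	ctrl->tag_set.nr_hw_queues = ctrl->ctrl.queue_count - 1;
	ctrl->tag_set.flags = BLK_MQ_F_SHOULD_MERGE |
			      BLK_MQ_F_NOT_USE_MANAGED_IRQ;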

Address the requirement for nvme fc/rdma/loop, so the reported kernel
panic on the following line in blk_mq_alloc_request_hctx() can be fixed.
When every CPU in hctx->cpumask is offline, cpumask_first_and() returns
nr_cpu_ids and the per-cpu ctx lookup on this line runs past the valid
CPU range:

	data.ctx = __blk_mq_get_ctx(q, cpu)

Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: Wen Xiong <wenxiong@us.ibm.com>
Cc: John Garry <john.garry@huawei.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c         | 6 +++++-
 include/linux/blk-mq.h | 1 +
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index df5dc3b756f5..74632f50d969 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -494,7 +494,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	data.hctx = q->queue_hw_ctx[hctx_idx];
 	if (!blk_mq_hw_queue_mapped(data.hctx))
 		goto out_queue_exit;
-	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
+	cpu = cpumask_first(data.hctx->cpumask);
 	data.ctx = __blk_mq_get_ctx(q, cpu);
 
 	if (!q->elevator)
@@ -2570,6 +2570,10 @@ static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
 	    !blk_mq_last_cpu_in_hctx(cpu, hctx))
 		return 0;
 
+	/* Controller doesn't use managed IRQ, no need to deactivate hctx */
+	if (hctx->flags & BLK_MQ_F_NOT_USE_MANAGED_IRQ)
+		return 0;
+
 	/*
 	 * Prevent new request from being allocated on the current hctx.
 	 *
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 21140132a30d..600c5dd1a069 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -403,6 +403,7 @@ enum {
 	 */
 	BLK_MQ_F_STACKING	= 1 << 2,
 	BLK_MQ_F_TAG_HCTX_SHARED = 1 << 3,
+	BLK_MQ_F_NOT_USE_MANAGED_IRQ = 1 << 4,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
 	BLK_MQ_F_NO_SCHED	= 1 << 6,
 	BLK_MQ_F_ALLOC_POLICY_START_BIT = 8,
-- 
2.31.1



