[PATCH 5/6] blk-mq: Fix queue freeze deadlock

Bart Van Assche <Bart.VanAssche at sandisk.com>
Wed Jan 4 23:33:22 PST 2017


On Wed, 2017-01-04 at 17:41 -0500, Keith Busch wrote:
> +static void blk_mq_abandon_stopped_requests(struct request_queue *q)
> +{
> +	int i;
> +	struct request *rq, *next;
> +	struct blk_mq_hw_ctx *hctx;
> +	LIST_HEAD(rq_list);
> +
> +	blk_mq_sync_queue(q);
> +
> +	spin_lock(&q->requeue_lock);
> +	list_for_each_entry_safe(rq, next, &q->requeue_list, queuelist) {
> +		struct blk_mq_ctx *ctx;
> +
> +		ctx = rq->mq_ctx;
> +		hctx = blk_mq_map_queue(q, ctx->cpu);
> +		if (blk_mq_hctx_stopped(hctx)) {
> +			list_del_init(&rq->queuelist);
> +
> +			spin_lock(&hctx->lock);
> +			list_add_tail(&rq->queuelist, &rq_list);
> +			spin_unlock(&hctx->lock);
> +		}
> +	}
> +
> +	queue_for_each_hw_ctx(q, hctx, i) {
> +		if (!blk_mq_hctx_stopped(hctx))
> +			continue;
> +
> +		flush_busy_ctxs(hctx, &rq_list);
> +
> +		spin_lock(&hctx->lock);
> +		if (!list_empty(&hctx->dispatch))
> +			list_splice_init(&hctx->dispatch, &rq_list);
> +		spin_unlock(&hctx->lock);
> +	}
> +	spin_unlock(&q->requeue_lock);
> +
> +	while (!list_empty(&rq_list)) {
> +		rq = list_first_entry(&rq_list, struct request, queuelist);
> +		list_del_init(&rq->queuelist);
> +		rq->errors = -EAGAIN;
> +		blk_mq_end_request(rq, rq->errors);
> +	}
> +}

Hello Keith,

This patch adds a second code path to the blk-mq core for running queues and
hence will make the blk-mq core harder to maintain. Have you considered
implementing this functionality by introducing a new "fail all requests"
flag for hctx queues, such that blk_mq_abandon_stopped_requests() can reuse
the existing mechanism for running a queue?
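
Something along the lines of the untested sketch below is what I have in
mind. Note that the BLK_MQ_S_FAIL_REQUESTS flag name is made up for the
sake of illustration, and the exact ordering and hook points are
assumptions, not a tested patch:

static void blk_mq_abandon_stopped_requests(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	/*
	 * BLK_MQ_S_FAIL_REQUESTS is a hypothetical new hctx state flag
	 * that tells the dispatch code to fail requests instead of
	 * passing them to ->queue_rq().
	 */
	queue_for_each_hw_ctx(q, hctx, i)
		set_bit(BLK_MQ_S_FAIL_REQUESTS, &hctx->state);

	/* Move requeued requests back onto the software queues. */
	blk_mq_kick_requeue_list(q);
	flush_delayed_work(&q->requeue_work);

	/* Reuse the existing run-queue machinery to drain everything. */
	blk_mq_start_stopped_hw_queues(q, false);
	blk_mq_run_hw_queues(q, false);

	queue_for_each_hw_ctx(q, hctx, i)
		clear_bit(BLK_MQ_S_FAIL_REQUESTS, &hctx->state);
}

The only other change would be a check in the dispatch loop, e.g. in
__blk_mq_run_hw_queue() after a request has been taken off the dispatch
list and before ->queue_rq() is invoked:

	if (test_bit(BLK_MQ_S_FAIL_REQUESTS, &hctx->state)) {
		rq->errors = -EAGAIN;
		blk_mq_end_request(rq, rq->errors);
		continue;
	}

That way all requests, including those on the requeue list, the per-ctx
software queues and the hctx dispatch lists, are failed through the same
code path that is used for dispatching them, and no new iteration or
locking scheme is needed.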

Bart.
