[PATCH v4 2/5] block: wire-up support for passthrough plugging
Ming Lei
ming.lei at redhat.com
Thu May 5 07:21:15 PDT 2022
On Thu, May 05, 2022 at 11:36:13AM +0530, Kanchan Joshi wrote:
> From: Jens Axboe <axboe at kernel.dk>
>
> Add support for plugging in the passthrough path. When plugging is
> enabled, requests are added to a plug instead of being dispatched to
> the driver right away. When the plug is finished, the whole batch is
> dispatched via ->queue_rqs, which turns out to be more efficient than
> dispatching one request at a time via ->queue_rq.
>
> Signed-off-by: Jens Axboe <axboe at kernel.dk>
> Reviewed-by: Christoph Hellwig <hch at lst.de>
> ---
> block/blk-mq.c | 73 +++++++++++++++++++++++++++-----------------------
> 1 file changed, 39 insertions(+), 34 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 84d749511f55..2cf011b57cf9 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2340,6 +2340,40 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> blk_mq_hctx_mark_pending(hctx, ctx);
> }
>
> +/*
> + * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
> + * queues. This is important for md arrays to benefit from merging
> + * requests.
> + */
> +static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
> +{
> + if (plug->multiple_queues)
> + return BLK_MAX_REQUEST_COUNT * 2;
> + return BLK_MAX_REQUEST_COUNT;
> +}
> +
> +static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
> +{
> + struct request *last = rq_list_peek(&plug->mq_list);
> +
> + if (!plug->rq_count) {
> + trace_block_plug(rq->q);
> + } else if (plug->rq_count >= blk_plug_max_rq_count(plug) ||
> + (!blk_queue_nomerges(rq->q) &&
> + blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
> + blk_mq_flush_plug_list(plug, false);
> + trace_block_plug(rq->q);
> + }
> +
> + if (!plug->multiple_queues && last && last->q != rq->q)
> + plug->multiple_queues = true;
> + if (!plug->has_elevator && (rq->rq_flags & RQF_ELV))
> + plug->has_elevator = true;
> + rq->rq_next = NULL;
> + rq_list_add(&plug->mq_list, rq);
> + plug->rq_count++;
> +}
> +
> /**
> * blk_mq_request_bypass_insert - Insert a request at dispatch list.
> * @rq: Pointer to request to be inserted.
> @@ -2353,7 +2387,12 @@ void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
> bool run_queue)
> {
> struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> + struct blk_plug *plug = current->plug;
>
> + if (plug) {
> + blk_add_rq_to_plug(plug, rq);
> + return;
> + }
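For context, the plugged path above only comes into play when the
submitter is already running under a plug, roughly like this (just an
illustration of the long-standing blk_start_plug()/blk_finish_plug()
usage, not code from this series; submit_pt_requests() is a made-up
placeholder for the caller's submission loop):

	struct blk_plug plug;

	blk_start_plug(&plug);
	submit_pt_requests();	/* each request now lands on plug->mq_list */
	blk_finish_plug(&plug);	/* flushes the batch, via ->queue_rqs when possible */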
This approach looks a bit fragile.

blk_mq_request_bypass_insert() is also called when dispatching I/O
requests, for example from blk_insert_cloned_request(); with this change
such a request may end up being inserted into the scheduler from
blk_mq_flush_plug_list(), which is exactly what a bypass insert is meant
to avoid.
Another issue is in blk_execute_rq(): the request may still be sitting
on the plug list when we start polling for its completion, so it is
never dispatched and the wait hangs forever.
Just wondering: why not add the passthrough request to the plug
explicitly in blk_execute_rq_nowait()?
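Something along these lines, as a rough and untested sketch only
(assuming the current blk_execute_rq_nowait() prototype and the
blk_add_rq_to_plug() helper added by this patch, with the
blk_mq_request_bypass_insert() hunk above dropped; the rest of the body
is just carried over from the existing function):

void blk_execute_rq_nowait(struct request *rq, bool at_head,
			   rq_end_io_fn *done)
{
	WARN_ON(irqs_disabled());
	WARN_ON(!blk_rq_is_passthrough(rq));

	rq->end_io = done;

	blk_account_io_start(rq);

	/*
	 * Plug only here, where we know this is a passthrough request
	 * submitted from process context; dispatch-time callers of
	 * blk_mq_request_bypass_insert() stay untouched.
	 */
	if (current->plug)
		blk_add_rq_to_plug(current->plug, rq);
	else
		blk_mq_sched_insert_request(rq, at_head, true, false);
}

That would limit plugging to the nowait passthrough submission path and
leave the dispatch-time bypass-insert callers alone.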
Thanks,
Ming