[PATCH 1/3] block: introduce blk_queue_nr_active()
Ewan Milne
emilne at redhat.com
Wed Sep 27 06:50:38 PDT 2023
I think it is unfortunately necessary to compute the sum from all the hctxs:
in the general case there could be threads on other CPUs submitting I/O
through another hctx, and that could significantly affect the result. But I
can do some more tests and see how it looks.
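
For the nvme-mpath case I have in mind, the caller would just compare the
per-path totals, along these lines. This is an illustrative sketch only, not
code from this series: the helper name, the missing locking / path-state
checks, and the surrounding structure are made up, and only
blk_mq_queue_nr_active() comes from the patch.

/*
 * Sketch only: pick the sibling path whose request_queue currently has
 * the fewest active requests according to the new helper.  Real code
 * would need rcu_read_lock() and path-state checks.
 */
static struct nvme_ns *least_busy_path(struct nvme_ns_head *head)
{
        struct nvme_ns *ns, *best = NULL;
        unsigned int lowest = UINT_MAX;

        list_for_each_entry_rcu(ns, &head->list, siblings) {
                unsigned int depth = blk_mq_queue_nr_active(ns->queue);

                if (depth < lowest) {
                        lowest = depth;
                        best = ns;
                }
        }
        return best;
}

The reason for the cumulative sum is that this comparison should reflect I/O
queued from all CPUs on each path, not just whatever happens to have gone
through the submitting CPU's hctx.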
-Ewan
On Wed, Sep 27, 2023 at 6:56 AM Sagi Grimberg <sagi at grimberg.me> wrote:
>
>
>
> On 9/25/23 19:31, Ewan D. Milne wrote:
> > Returns a count of the total number of active requests
> > in a queue. For non-shared tags (the usual case) this is
> > the sum of nr_active from all of the hctxs.
> >
> > Signed-off-by: Ewan D. Milne <emilne at redhat.com>
> > ---
> > block/blk-mq.h | 5 -----
> > include/linux/blk-mq.h | 33 ++++++++++++++++++++++++++-------
> > 2 files changed, 26 insertions(+), 12 deletions(-)
> >
> > diff --git a/block/blk-mq.h b/block/blk-mq.h
> > index 1743857e0b01..fbc65eefa017 100644
> > --- a/block/blk-mq.h
> > +++ b/block/blk-mq.h
> > @@ -214,11 +214,6 @@ static inline bool blk_mq_tag_is_reserved(struct blk_mq_tags *tags,
> > return tag < tags->nr_reserved_tags;
> > }
> >
> > -static inline bool blk_mq_is_shared_tags(unsigned int flags)
> > -{
> > - return flags & BLK_MQ_F_TAG_HCTX_SHARED;
> > -}
> > -
> > static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data *data)
> > {
> > if (data->rq_flags & RQF_SCHED_TAGS)
> > diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> > index 01e8c31db665..c921ae5236ab 100644
> > --- a/include/linux/blk-mq.h
> > +++ b/include/linux/blk-mq.h
> > @@ -716,6 +716,32 @@ int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
> >
> > bool blk_mq_queue_inflight(struct request_queue *q);
> >
> > +#define queue_for_each_hw_ctx(q, hctx, i) \
> > + xa_for_each(&(q)->hctx_table, (i), (hctx))
> > +
> > +#define hctx_for_each_ctx(hctx, ctx, i) \
> > + for ((i) = 0; (i) < (hctx)->nr_ctx && \
> > + ({ ctx = (hctx)->ctxs[(i)]; 1; }); (i)++)
> > +
> > +static inline bool blk_mq_is_shared_tags(unsigned int flags)
> > +{
> > + return flags & BLK_MQ_F_TAG_HCTX_SHARED;
> > +}
> > +
> > +static inline unsigned int blk_mq_queue_nr_active(struct request_queue *q)
> > +{
> > + unsigned int nr_active = 0;
> > + struct blk_mq_hw_ctx *hctx;
> > + unsigned long i;
> > +
> > + queue_for_each_hw_ctx(q, hctx, i) {
> > + if (unlikely(blk_mq_is_shared_tags(hctx->flags)))
> > + return atomic_read(&q->nr_active_requests_shared_tags);
> > + nr_active += atomic_read(&hctx->nr_active);
> > + }
>
> I think that for the purposes of nvme-mpath you should probably be
> interested in the hctx mapped to the running cpu, and not the
> cumulative active requests.
>
> As for BLK_MQ_F_TAG_HCTX_SHARED, this is dead code until scsi/null makes
> any use of it... but seems fine in theory.
>