[PATCH 2/5] blk-mq: rename hctx_lock & hctx_unlock

Ming Lei ming.lei at redhat.com
Mon Nov 22 16:08:31 PST 2021


On Mon, Nov 22, 2021 at 03:50:14PM +0200, Sagi Grimberg wrote:
> 
> 
> On 11/22/21 3:20 PM, Ming Lei wrote:
> > On Mon, Nov 22, 2021 at 09:53:53AM +0200, Sagi Grimberg wrote:
> > > 
> > > > -static inline void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
> > > > -	__releases(hctx->srcu)
> > > > +static inline void queue_unlock(struct request_queue *q, bool blocking,
> > > > +		int srcu_idx)
> > > > +	__releases(q->srcu)
> > > >    {
> > > > -	if (!(hctx->flags & BLK_MQ_F_BLOCKING))
> > > > +	if (!blocking)
> > > >    		rcu_read_unlock();
> > > >    	else
> > > > -		srcu_read_unlock(hctx->queue->srcu, srcu_idx);
> > > > +		srcu_read_unlock(q->srcu, srcu_idx);
> > > 
> > > Maybe instead of passing blocking bool just look at srcu_idx?
> > > 
> > > 	if (srcu_idx < 0)
> > > 		rcu_read_unlock();
> > > 	else
> > > 		srcu_read_unlock(q->srcu, srcu_idx);
> > 
> > This way requires initializing srcu_idx in each caller.
> 
> Then look at q->has_srcu that Bart suggested?

Bart just suggested renaming q->alloc_srcu to q->has_srcu.

> 
> > 
> > > 
> > > Or look if the queue has srcu allocated?
> > > 
> > > 	if (!q->srcu)
> > > 		rcu_read_unlock();
> > > 	else
> > > 		srcu_read_unlock(q->srcu, srcu_idx);
> > 
> > This way is worse since reading q->srcu may involve an extra cacheline fetch.
> > 
> > hctx->flags is always hot, so it is basically zero cost to check it.
> 
> Yeah, but the interface is awkward in that the caller tells the
> routine how it should lock/unlock...

If the two helpers are just blk-mq internal, I think it is fine to keep
them this way, with a comment.

If drivers need the two helpers exported, they would mostly be used in the
slow path, and then it would be fine to refine the interface.


Thanks,
Ming
