[PATCH 2/5] blk-mq: rename hctx_lock & hctx_unlock
Sagi Grimberg
sagi at grimberg.me
Mon Nov 22 05:50:14 PST 2021
On 11/22/21 3:20 PM, Ming Lei wrote:
> On Mon, Nov 22, 2021 at 09:53:53AM +0200, Sagi Grimberg wrote:
>>
>>> -static inline void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
>>> -	__releases(hctx->srcu)
>>> +static inline void queue_unlock(struct request_queue *q, bool blocking,
>>> +				int srcu_idx)
>>> +	__releases(q->srcu)
>>>  {
>>> -	if (!(hctx->flags & BLK_MQ_F_BLOCKING))
>>> +	if (!blocking)
>>>  		rcu_read_unlock();
>>>  	else
>>> -		srcu_read_unlock(hctx->queue->srcu, srcu_idx);
>>> +		srcu_read_unlock(q->srcu, srcu_idx);
>>
>> Maybe instead of passing the blocking bool, just look at srcu_idx?
>>
>> if (srcu_idx < 0)
>> 	rcu_read_unlock();
>> else
>> 	srcu_read_unlock(q->srcu, srcu_idx);
>
> This way needs srcu_idx to be initialized in each caller.
Then look at the q->has_srcu flag that Bart suggested?
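
Something like this, perhaps (just a sketch, assuming a has_srcu flag
is added to struct request_queue on a hot cacheline; none of that is
in this patch):

static inline void queue_unlock(struct request_queue *q, int srcu_idx)
	__releases(q->srcu)
{
	/* has_srcu is a hypothetical flag set when q->srcu is allocated */
	if (!q->has_srcu)
		rcu_read_unlock();
	else
		srcu_read_unlock(q->srcu, srcu_idx);
}

That would drop the blocking argument from the interface entirely.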
>
>>
>> Or check whether the queue has srcu allocated?
>>
>> if (!q->srcu)
>> 	rcu_read_unlock();
>> else
>> 	srcu_read_unlock(q->srcu, srcu_idx);
>
> This way is worse since reading q->srcu may involve an extra cacheline fetch.
>
> hctx->flags is always hot, so it is basically zero cost to check it.
Yeah, but the interface is awkward in that the caller has to tell the
routine how it should lock/unlock...
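
To illustrate (hypothetical call site, and assuming queue_lock()
mirrors the queue_unlock() signature quoted above):

	bool blocking = hctx->flags & BLK_MQ_F_BLOCKING;
	int srcu_idx;

	/* the caller computes the mode and must carry it to the unlock */
	queue_lock(hctx->queue, blocking, &srcu_idx);
	...
	queue_unlock(hctx->queue, blocking, srcu_idx);

Every caller ends up knowing and repeating the locking mode instead of
the helpers deciding it for themselves.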