[PATCH 0/3] improve nvme quiesce time for large amount of namespaces

Chao Leng lengchao at huawei.com
Mon Oct 10 01:46:21 PDT 2022



On 2022/8/2 21:38, Christoph Hellwig wrote:
> On Sun, Jul 31, 2022 at 01:23:36PM +0300, Sagi Grimberg wrote:
>> But maybe we can avoid that, and because we allocate
>> the connect_q ourselves, and fully know that it should
>> not be apart of the tagset quiesce, perhaps we can introduce
>> a new interface like:
>> --
>> static inline int nvme_ctrl_init_connect_q(struct nvme_ctrl *ctrl)
>> {
>> 	ctrl->connect_q = blk_mq_init_queue_self_quiesce(ctrl->tagset);
>> 	if (IS_ERR(ctrl->connect_q))
>> 		return PTR_ERR(ctrl->connect_q);
>> 	return 0;
>> }
>> --
>>
>> And then blk_mq_quiesce_tagset can simply look into a per request-queue
>> self_quiesce flag and skip as needed.
> 
> I'd just make that a queue flag set after allocation to keep the
> interface simple, but otherwise this seems like the right thing
> to do.
The current code uses NVME_NS_STOPPED to avoid unpaired quiesce/unquiesce calls.
If we switch to blk_mq_quiesce_tagset, that mechanism no longer works.
Reviewing the code, only PCI cannot guarantee that quiesce/unquiesce are paired.
So one option is to use blk_mq_quiesce_tagset only for fabrics, not for PCI.
Do you think that is acceptable?
If so, I will try to send a patch set.
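
For the flag-based variant Christoph suggests, a rough sketch could look like the
following (the flag name, helpers, and exact signatures here are my assumptions,
not a final interface): mark the connect queue right after allocation, and have
blk_mq_quiesce_tagset skip any queue carrying the flag.

/*
 * Sketch only -- QUEUE_FLAG_SKIP_TAGSET_QUIESCE and the helper
 * blk_queue_skip_tagset_quiesce() are assumed names.
 */
static inline int nvme_ctrl_init_connect_q(struct nvme_ctrl *ctrl)
{
	ctrl->connect_q = blk_mq_init_queue(ctrl->tagset);
	if (IS_ERR(ctrl->connect_q))
		return PTR_ERR(ctrl->connect_q);
	/* Keep the connect queue out of tagset-wide quiesce. */
	blk_queue_flag_set(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, ctrl->connect_q);
	return 0;
}

void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
{
	struct request_queue *q;

	mutex_lock(&set->tag_list_lock);
	list_for_each_entry(q, &set->tag_list, tag_set_list) {
		/* Skip queues that opted out, e.g. the connect queue. */
		if (!blk_queue_skip_tagset_quiesce(q))
			blk_mq_quiesce_queue_nowait(q);
	}
	/* Wait once for all queues instead of once per namespace. */
	blk_mq_wait_quiesce_done(set);
	mutex_unlock(&set->tag_list_lock);
}

This keeps the per-queue interface unchanged and pays the RCU grace-period
wait once per tagset rather than once per namespace, which is the point of
the series.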


