[PATCH] nvme-fabrics: fix crash for no IO queues

Keith Busch kbusch at kernel.org
Tue Mar 16 02:02:29 GMT 2021


On Tue, Mar 16, 2021 at 09:23:21AM +0800, Chao Leng wrote:
> 
> 
> On 2021/3/16 1:08, Sagi Grimberg wrote:
> > 
> > > > > A crash happens when a set features (NVME_FEAT_NUM_QUEUES) command
> > > > > times out during an nvme over rdma (RoCE) reconnection; the cause is
> > > > > use of a queue that was never allocated.
> > > > > 
> > > > > If a queue is not live, queue requests should not be allowed on it.
> > > > 
> > > > Can you describe exactly the scenario here? What is the state
> > > > here? LIVE? or DELETING?
> > > If setting the feature (NVME_FEAT_NUM_QUEUES) fails due to a timeout,
> > > or the target returns 0 I/O queues, nvme_set_queue_count will return 0,
> > > and then the reconnection will continue and succeed. The state of the
> > > controller is LIVE. Requests will still be delivered via ->queue_rq(),
> > > and then the crash happens.
> > 
> > Thinking about this again, we should absolutely fail the reconnection
> > when we are unable to set any I/O queues; it is just wrong to
> > keep this controller alive...
> Keith thinks keeping the controller alive for diagnosis is better.
> This is the patch that fails the connection:
> https://lore.kernel.org/linux-nvme/20210223072602.3196-1-lengchao@huawei.com/
> 
> Now we have 2 choices:
> 1. Fail the connection when unable to set any I/O queues.
> 2. Do not allow queue requests when the queue is not live.

Okay, so there are different views on how to handle this. I personally find
in-band administration of a misbehaving device a good thing to have, but I
won't 'nak' it if the consensus from the people using this is the other way.
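To make the trade-off concrete, option 2 boils down to a guard at the top of
the driver's ->queue_rq() handler, so I/O fails fast while the controller
stays LIVE for admin commands. A minimal sketch only: the function name
nvme_fabrics_queue_rq, the NVME_FABRICS_Q_LIVE flag, and the queue struct
layout are illustrative, not the actual rdma driver's:

static blk_status_t nvme_fabrics_queue_rq(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	struct nvme_fabrics_queue *queue = hctx->driver_data;
	struct request *rq = bd->rq;

	/*
	 * The queue was never allocated/connected (e.g. the target granted
	 * zero I/O queues on reconnect): fail the request up front instead
	 * of dereferencing uninitialized queue state.
	 */
	if (!test_bit(NVME_FABRICS_Q_LIVE, &queue->flags))
		return nvmf_fail_nonready_command(queue->ctrl, rq);

	/* ... normal submission path ... */
	return BLK_STS_OK;
}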
 
> From a service continuity perspective, I think it is better to fail
> the connection when unable to set any I/O queues.
> Diagnosis is less important; I prefer service continuity, because if the
> reconnection fails and a new reconnection is then attempted, it is
> possible to recover.
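For comparison, option 1 is a check in the I/O queue setup step of the
reconnect sequence. A sketch of the shape such a check could take, assuming
a setup path like the fabrics drivers' (nvme_fabrics_configure_io_queues is
a placeholder name; this is not the patch linked above):

static int nvme_fabrics_configure_io_queues(struct nvme_ctrl *ctrl)
{
	int nr_io_queues = num_online_cpus();
	int ret;

	ret = nvme_set_queue_count(ctrl, &nr_io_queues);
	if (ret)
		return ret;

	/*
	 * A timed-out set features command or a target that grants zero
	 * I/O queues both leave nr_io_queues at 0 here; fail this attempt
	 * so the reconnect machinery tears down and retries later.
	 */
	if (nr_io_queues == 0) {
		dev_err(ctrl->device, "unable to set any I/O queues\n");
		return -ENOMEM;
	}

	/* ... allocate and connect nr_io_queues I/O queues ... */
	return 0;
}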


