nvmf/rdma host crash during heavy load and keep alive recovery

Steve Wise swise at opengridcomputing.com
Wed Sep 21 14:20:37 PDT 2016


> > >
> > > Oh.  Actually we'll probably need to take care of the connect_q just
> > > about anywhere we do anything to the other queues..
> >
> > Why should we?
> >
> > We control the IOs on the connect_q (we only submit connect to it) and
> > we only submit to it if our queue is established.
> >
> > I still don't see how this explains why Steve is seeing bogus
> > queue/hctx mappings...
> 
> I don't think I'm seeing bogus mappings necessarily.  I think my debug
> code uncovered (to me at least) that connect_q hctxs use the same
> nvme_rdma_queues as the ioq hctxs.  And I thought that was not a valid
> configuration, but apparently it's normal.  So I still don't know how or
> why a pending request gets run on an nvme_rdma_queue that has blown away
> its rdma qp and cm_id.  It _could_ be due to bogus queue/hctx mappings,
> but I haven't proven it.  I'm not sure how to prove it (or how to debug
> this issue further)...

I added debug code to save off the two blk_mq_hw_ctx pointers that get
associated with each nvme_rdma_queue.  This lets me assert that the hctx
passed into nvme_rdma_queue_rq() is not bogus.  And indeed the hctx passed
in during the crash is the correct hctx, so we know the problem isn't a
bogus hctx being used.

The hctx.state has BLK_MQ_S_TAG_ACTIVE set and _not_ BLK_MQ_S_STOPPED.  The
ns->queue->queue_flags has the QUEUE_FLAG_STOPPED bit set.  So the blk_mq
queue is active while the nvme queue is STOPPED.  I don't know how it gets
into this state...

Steve.



