nvmf/rdma host crash during heavy load and keep alive recovery

Steve Wise swise at opengridcomputing.com
Mon Sep 19 08:38:46 PDT 2016


> >> This stack is creating hctx queues for the namespace created for this
> >> target device.
> >>
> >> Sagi,
> >>
> >> Should nvme_rdma_error_recovery_work() be stopping the hctx queues for
> >> ctrl->ctrl.connect_q too?
> >
> > Oh.  Actually we'll probably need to take care of the connect_q just
> > about anywhere we do anything to the other queues..
> 
> Why should we?
> 
> We control the IOs on the connect_q (we only submit connect to it) and
> we only submit to it if our queue is established.
> 
> I still don't see how this explains why Steve is seeing bogus
> queue/hctx mappings...

I don't think I'm necessarily seeing bogus mappings.  I think my debug code
uncovered (to me at least) that the connect_q hctxs use the same
nvme_rdma_queues as the ioq hctxs.  I thought that was not a valid
configuration, but apparently it's normal.  So I still don't know how/why a
pending request gets run on an nvme_rdma_queue that has already torn down its
rdma qp and cm_id.  It _could_ be due to bogus queue/hctx mappings, but I
haven't proven it.  I'm not sure how to prove it (or how to further debug this
issue)...

More information about the Linux-nvme mailing list