nvme/rdma initiator stuck on reboot

Steve Wise swise at opengridcomputing.com
Thu Aug 18 12:11:00 PDT 2016


> > > Btw, in that case the patch is not actually correct, as even a workqueue
> > > with a higher concurrency level MAY deadlock under enough memory
> > > pressure.  We'll need separate workqueues to handle this case, I think.
> > >
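A quick sketch of what I think the "separate workqueues" idea would look like,
just for illustration (the queue name and flags are my guesses, not actual
driver code):

#include <linux/errno.h>
#include <linux/workqueue.h>

/* Hypothetical: give controller deletion its own WQ_MEM_RECLAIM queue so its
 * rescuer thread cannot be starved by reconnect work under memory pressure. */
static struct workqueue_struct *nvme_rdma_delete_wq;

static int nvme_rdma_alloc_delete_wq(void)
{
	nvme_rdma_delete_wq = alloc_workqueue("nvme_rdma_delete_wq",
					      WQ_MEM_RECLAIM, 0);
	if (!nvme_rdma_delete_wq)
		return -ENOMEM;
	return 0;
}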
> > > > Yes?  And the reconnect worker was never completing?  Why is that?
> > > > Here are a few tidbits about iWARP connections: address resolution ==
> > > > neighbor discovery.  So if the neighbor is unreachable, it will take a
> > > > few seconds for the OS to give up and fail the resolution.  If the
> > > > neigh entry is valid and the peer becomes unreachable during connection
> > > > setup, it might take 60 seconds or so for a connect operation to give
> > > > up and fail.  So this is probably slowing the reconnect thread down.
> > > > But shouldn't the reconnect thread notice that a delete is trying to
> > > > happen and bail out?
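For reference, a stripped-down illustration of where those delays come from
(the timeout value and function name below are made up; this is not the
driver's actual code):

#include <rdma/rdma_cm.h>

/* Kick off address resolution (== neighbor discovery for iWARP).  The call is
 * asynchronous: if the neighbor never answers, RDMA_CM_EVENT_ADDR_ERROR is
 * only delivered to the cm_id's event handler after timeout_ms expires.
 * Route resolution and rdma_connect() are then driven from that handler, each
 * with its own worst-case delay, so one failed reconnect attempt can occupy
 * its work item for many seconds. */
static int sketch_start_connect(struct rdma_cm_id *cm_id,
				struct sockaddr *dst_addr)
{
	return rdma_resolve_addr(cm_id, NULL, dst_addr, 2000 /* ms */);
}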
> > >
> > > I think we should aim for a state machine that can detect this, but
> > > we'll have to see if that will end up in synchronization overkill.
> >
> > Looking at the state machine I don't see why the reconnect thread would
> > get stuck continually rescheduling once the controller was deleted.
> > Changing from RECONNECTING to DELETING will be done by
> > nvme_change_ctrl_state().  So once that happens, in __nvme_rdma_del_ctrl(),
> > the thread running the reconnect logic should stop rescheduling, thanks to
> > this check in the failure path of nvme_rdma_reconnect_ctrl_work():
> >
> > ...
> > requeue:
> >         /* Make sure we are not resetting/deleting */
> >         if (ctrl->ctrl.state == NVME_CTRL_RECONNECTING) {
> >                 dev_info(ctrl->ctrl.device,
> >                         "Failed reconnect attempt, requeueing...\n");
> >                 queue_delayed_work(nvme_rdma_wq, &ctrl->reconnect_work,
> >                                         ctrl->reconnect_delay * HZ);
> >         }
> > ...
> >
> > So something isn't happening like I think it is, I guess.
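For context, the delete path looks roughly like this (paraphrasing from
memory, the exact upstream code may differ).  The state change to DELETING is
what the requeue check above is supposed to notice:

static int __nvme_rdma_del_ctrl(struct nvme_rdma_ctrl *ctrl)
{
	/* Flip RECONNECTING (or LIVE) -> DELETING; fails if a delete or reset
	 * is already in progress. */
	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING))
		return -EBUSY;

	/* The delete work now sits on the same nvme_rdma_wq as the reconnect
	 * work of every other controller, waiting for a free worker. */
	if (!queue_work(nvme_rdma_wq, &ctrl->delete_work))
		return -EBUSY;

	return 0;
}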
> 
> 
> I see what happens.  Assume the 10 controllers are reconnecting and failing,
> thus they reschedule each time.  I then run a script to delete all 10 devices
> sequentially.  Like this:
> 
> for i in $(seq 1 10); do nvme disconnect -d nvme${i}n1; done
> 
> The first device, nvme1n1, gets a disconnect/delete command and changes the
> controller state from RECONNECTING to DELETING, and then schedules
> nvme_rdma_del_ctrl_work(), but that is stuck behind the 9 others continually
> reconnecting, failing, and rescheduling.  I'm not sure why the delete never
> gets run, though.  I would think that once it is scheduled, it would get
> executed before the reconnect works that just keep rescheduling themselves.
> Maybe we need some round-robin mode for our workq?  And because the first
> delete is stuck, none of the subsequent delete commands get executed.
> Note: if I run each disconnect command in the background, then they all get
> cleaned up OK.  Like this:
> 
> for i in $(seq 1 10); do nvme disconnect -d nvme${i}n1 & done
> 
> 

Experimenting more: running the 'nvme disconnect' commands in the background
doesn't really avoid things getting stuck.

BTW: I'm running these with the single-threaded workq to understand the
deadlock (well, trying to understand it...).
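In case it matters, by "single threaded workq" I mean replacing the driver's
workqueue allocation with something along these lines (a local hack for the
experiment, not a proposed change):

	nvme_rdma_wq = create_singlethread_workqueue("nvme_rdma_wq");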