crash on device removal

Steve Wise swise at opengridcomputing.com
Tue Jul 12 15:17:03 PDT 2016


> On Tue, 2016-07-12 at 16:09 -0500, Steve Wise wrote:
> > > On Tue, 2016-07-12 at 11:34 -0500, Steve Wise wrote:
> > > > Hey Christoph,
> > > >
> > > > I see a crash when shutting down a nvme host node via 'reboot' that has
> > > > 1 target device attached.  The shutdown causes iw_cxgb4 to be removed,
> > > > which triggers the device removal logic in the nvmf rdma transport.  The
> > > > crash is here:
> > > >
> > > > (gdb) list *nvme_rdma_free_qe+0x18
> > > > 0x1e8 is in nvme_rdma_free_qe (drivers/nvme/host/rdma.c:196).
> > > > 191     }
> > > > 192
> > > > 193     static void nvme_rdma_free_qe(struct ib_device *ibdev, struct
> > > > nvme_rdma_qe *qe,
> > > > 194                     size_t capsule_size, enum dma_data_direction dir)
> > > > 195     {
> > > > 196             ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
> > > > 197             kfree(qe->data);
> > > > 198     }
> > > > 199
> > > > 200     static int nvme_rdma_alloc_qe(struct ib_device *ibdev, struct
> > > > nvme_rdma_qe *qe,
> > > >
> > > > Apparently qe is NULL.
> > > >
> > > > Looking at the device removal path, the logic appears correct (see
> > > > nvme_rdma_device_unplug() and the nice function comment :) ).  I'm
> > > > wondering if, concurrently with the host device removal path cleaning
> > > > up queues, the target disconnecting all of its queues (due to the
> > > > first disconnect event from the host) causes some cleanup race on the
> > > > host side.  Then again, since the removal path executes in the cma
> > > > event handler upcall, I don't think another thread would be handling
> > > > a disconnect event.  Maybe the qp async event handler flow?
> > > >
> > > > Thoughts?
> > >
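A debug-only guard would keep the box alive while the root cause is chased
down; a minimal sketch against the rdma.c code quoted above (the
WARN_ON_ONCE guard is illustrative only, not a proposed fix):

    /* Debug-only sketch: trade the oops for a stack trace while the
     * underlying refcounting bug is tracked down. */
    static void nvme_rdma_free_qe(struct ib_device *ibdev,
                    struct nvme_rdma_qe *qe, size_t capsule_size,
                    enum dma_data_direction dir)
    {
            if (WARN_ON_ONCE(!qe || !qe->data))
                    return;
            ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
            kfree(qe->data);
    }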
> > > We actually missed a kref_get in nvme_get_ns_from_disk().
> > >
> > > This should fix it.  Could you help verify?
> > >
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index 4babdf0..b146f52 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -183,6 +183,8 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
> > >  	}
> > >  	spin_unlock(&dev_list_lock);
> > >
> > > +	kref_get(&ns->ctrl->kref);
> > > +
> > >  	return ns;
> > >
> > >  fail_put_ns:
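Any reference taken in nvme_get_ns_from_disk() has to be dropped by the
corresponding release path; a minimal sketch of that counterpart, assuming
the existing nvme_put_ctrl() helper (the nvme_put_ns_from_disk() name here
is hypothetical):

    /* Hypothetical counterpart: drop the ctrl reference taken in
     * nvme_get_ns_from_disk() once the caller is done with the ns. */
    static void nvme_put_ns_from_disk(struct nvme_ns *ns)
    {
            if (!ns)
                    return;
            nvme_put_ctrl(ns->ctrl);  /* pairs with the kref_get() above */
    }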
> >
> > Hey Ming.  This avoids the crash in nvme_rdma_free_qe(), but now I see
> > another crash:
> >
> > [  975.633436] nvme nvme0: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.0.1.14:4420
> > [  978.463636] nvme nvme0: creating 32 I/O queues.
> > [  979.187826] nvme nvme0: new ctrl: NQN "testnqn", addr 10.0.1.14:4420
> > [  987.778287] nvme nvme0: Got rdma device removal event, deleting ctrl
> > [  987.882202] BUG: unable to handle kernel paging request at ffff880e770e01f8
> > [  987.890024] IP: [<ffffffffa03a1a46>] __ib_process_cq+0x46/0xc0 [ib_core]
> >
> > This looks like another problem with freeing the tag sets before stopping
> > the QP.  I thought we fixed that once and for all, but perhaps there is
> > some other path we missed. :(
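The ordering constraint is the usual one for RDMA verbs consumers: quiesce
and drain the QP so no completion can touch queue memory before that memory
is freed.  A rough sketch of the pattern, not the actual nvme-rdma teardown
code (ib_drain_qp() is available since 4.6):

    /* Pattern sketch: stop traffic, flush outstanding work requests,
     * and only then free anything the completion handler dereferences
     * (tag sets, qe buffers). */
    static void queue_teardown_order(struct nvme_rdma_queue *queue)
    {
            rdma_disconnect(queue->cm_id);     /* no new traffic */
            ib_drain_qp(queue->cm_id->qp);     /* wait out posted WRs */
            /* safe to free tag sets / queue buffers from here on */
    }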
> 
> Sorry, the previous patch was wrong.
> Here is the right one.
> 
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 1ad47c5..f13e3a6 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -845,6 +845,7 @@ static ssize_t nvmf_dev_write(struct file *file, const char __user *ubuf,
>  		goto out_unlock;
>  	}
> 
> +	kref_get(&ctrl->kref);
>  	seq_file->private = ctrl;
> 
>  out_unlock:
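This get pairs with the put done when /dev/nvme-fabrics is closed; roughly,
the release side looks like this (a from-memory sketch, not verbatim
fabrics.c):

    /* Release-side sketch: drop the reference stashed in
     * seq_file->private by nvmf_dev_write(); without the kref_get
     * above, this put underflows the ctrl refcount. */
    static int nvmf_dev_release(struct inode *inode, struct file *file)
    {
            struct seq_file *seq_file = file->private_data;
            struct nvme_ctrl *ctrl = seq_file->private;

            if (ctrl)
                    nvme_put_ctrl(ctrl);
            return single_release(inode, file);
    }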

I still see the ib_poll_cq crash with this patch.  I'm adding new debug to
figure out the flush issue...
