nvme-fabrics: crash at nvme connect-all
Steve Wise
swise at opengridcomputing.com
Thu Jun 9 06:36:49 PDT 2016
> > Steve, did you see this before? I'm wondering if we need some sort
> > of logic to handle resource limitations in iWARP (global MR pool...)
>
> Haven't seen this. Does 'cat /sys/kernel/debug/iw_cxgb4/blah/stats' show
> anything interesting? Where/why is it crashing?
>
So this is the failure:
[ 703.239462] rdma_rw_init_mrs: failed to allocated 128 MRs
[ 703.239498] failed to init MR pool ret= -12
[ 703.239541] nvmet_rdma: failed to create_qp ret= -12
[ 703.239582] nvmet_rdma: nvmet_rdma_alloc_queue: creating RDMA queue failed (-12).
ret=-12 is -ENOMEM, so the MR pool setup is failing to allocate its MRs.
Not sure why it would fail, though. I would think my setup would be
allocating even more MRs, given I have 16 cores on both the host and the
target. The debugfs "stats" file I mentioned above should show us
something if we're running out of adapter resources for MR or PBL records.
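For reference, something like this should dump those counters on your
setup (the per-device directory name under iw_cxgb4 depends on the PCI
device, hence the glob; the ibv_devinfo part assumes you have libibverbs
installed):

    # Dump the iw_cxgb4 resource counters for each adapter.
    for f in /sys/kernel/debug/iw_cxgb4/*/stats; do
        echo "== $f =="
        cat "$f"
    done

    # Device-wide MR limits, for comparison (ibv_devinfo -v prints
    # max_mr and max_mr_size among the device attributes).
    ibv_devinfo -v | grep -i max_mr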
Can you please turn on c4iw_debug and send me the debug output?

    echo 1 > /sys/module/iw_cxgb4/parameters/c4iw_debug
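Once it's on, reproduce the failure and grab the kernel log; roughly
this (the connect-all arguments are just an example, use whatever you
normally run):

    # From the host, reproduce the failure, e.g.:
    #   nvme connect-all -t rdma -a <target-ip> -s 4420
    # then capture the kernel log on the target:
    dmesg > c4iw_debug.out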
Thanks,
Steve.