nvmf/rdma host crash during heavy load and keep alive recovery

Steve Wise swise at opengridcomputing.com
Mon Aug 1 14:38:31 PDT 2016


> > On Fri, Jul 29, 2016 at 04:40:40PM -0500, Steve Wise wrote:
> > > Running many fio jobs on 10 NVMF/RDMA ram disks, and bringing down and
> back
> > up
> > > the interfaces in a loop uncovers this crash.  I'm not sure if this has
been
> > > reported/fixed?  I'm using the for-linus branch of linux-block + sagi's 5
> > > patches on the host.
> > >
> > > What this test tickles is keep-alive recovery in the presence of heavy
> > > raw/direct IO.  Before the crash there are lots of these logged, which is
> > > probably expected:
> >
> > With what fixes does this happen?  This looks pretty similar to an
> > issue you reported before.
> 
> As I said, I'm using the for-linus branch of the linux-block repo
> (git://git.kernel.dk/linux-block) + sagi's 5 recent patches.   So I should be
> using the latest and greatest, I think.  This problem was originally seen on
> nvmf-all.3 as well.  Perhaps I have reported this previously.  But now I'm
> trying to fix it :)
> 

I do have two different problem reports internally at Chelsio that both show the
same signature.  I found the other one :)  For the 2nd problem report, there was
no ifup/down to induce keep-alive recovery.  It just loads up 10 ram disks on a
64 core host/target pair in a similar manner, and after a while lots of
nvme_rdma_post_send() errors are logged (probably due to a connection death) and
then the crash.   I'm still gathering info on that one, but it appears the qp
again was freed somehow and then attempts to post to it cause the crash...

Steve


More information about the Linux-nvme mailing list