[bug report] NVMe/IB: reset_controller needs more than 1min

Yi Zhang yi.zhang at redhat.com
Thu Dec 16 08:32:15 PST 2021


On Thu, Dec 16, 2021 at 9:21 PM Max Gurtovoy <mgurtovoy at nvidia.com> wrote:
>
>
> On 12/16/2021 4:18 AM, Yi Zhang wrote:
> > On Wed, Dec 15, 2021 at 8:10 PM Max Gurtovoy <mgurtovoy at nvidia.com> wrote:
> >>
> >> On 12/15/2021 3:15 AM, Yi Zhang wrote:
> >>> On Tue, Dec 14, 2021 at 8:01 PM Max Gurtovoy <mgurtovoy at nvidia.com> wrote:
> >>>> On 12/14/2021 12:39 PM, Sagi Grimberg wrote:
> >>>>>>>> Hi Sagi
> >>>>>>>> It is still reproducible with the change; here is the log:
> >>>>>>>>
> >>>>>>>> # time nvme reset /dev/nvme0
> >>>>>>>>
> >>>>>>>> real    0m12.973s
> >>>>>>>> user    0m0.000s
> >>>>>>>> sys     0m0.006s
> >>>>>>>> # time nvme reset /dev/nvme0
> >>>>>>>>
> >>>>>>>> real    1m15.606s
> >>>>>>>> user    0m0.000s
> >>>>>>>> sys     0m0.007s
> >>>>>>> Does it speed up if you use fewer queues (i.e. connect with -i 4)?
> >>>>>> Yes, with -i 4, it is a stable 1.3s:
> >>>>>> # time nvme reset /dev/nvme0
> >>>>> So it appears that destroying a qp takes a long time on
> >>>>> IB for some reason...
> >>>>>
> >>>>>> real 0m1.225s
> >>>>>> user 0m0.000s
> >>>>>> sys 0m0.007s
> >>>>>>
> >>>>>>>> # dmesg | grep nvme
> >>>>>>>> [  900.634877] nvme nvme0: resetting controller
> >>>>>>>> [  909.026958] nvme nvme0: creating 40 I/O queues.
> >>>>>>>> [  913.604297] nvme nvme0: mapped 40/0/0 default/read/poll queues.
> >>>>>>>> [  917.600993] nvme nvme0: resetting controller
> >>>>>>>> [  988.562230] nvme nvme0: I/O 2 QID 0 timeout
> >>>>>>>> [  988.567607] nvme nvme0: Property Set error: 881, offset 0x14
> >>>>>>>> [  988.608181] nvme nvme0: creating 40 I/O queues.
> >>>>>>>> [  993.203495] nvme nvme0: mapped 40/0/0 default/read/poll queues.
> >>>>>>>>
> >>>>>>>> BTW, this issue cannot be reproduced in my NVMe/RoCE environment.
> >>>>>>> Then I think that we need the rdma folks to help here...
> >>>>> Max?
> >>>> It took me 12s to reset a controller with 63 I/O queues on 5.16-rc3+.
> >>>>
> >>>> Can you try to reproduce it with the latest versions, please?
> >>>>
> >>>> Or give the exact scenario?
> >>> Yeah, both target and client are using Mellanox Technologies MT27700
> >>> Family [ConnectX-4] adapters. Could you try stressing "nvme reset
> >>> /dev/nvme0"? The first reset takes 12s, and the issue can always be
> >>> reproduced on the second reset operation.
> >> I created a target with 1 namespace backed by null_blk and connected to
> >> it from the same server over a loopback RDMA connection using the
> >> ConnectX-4 adapter.
> > Could you share your loop.json file so I can try it in my environment?
>
> {
>    "hosts": [],
>    "ports": [
>      {
>        "addr": {
>          "adrfam": "ipv4",
>          "traddr": "<ip>",
>          "treq": "not specified",
>          "trsvcid": "4420",
>          "trtype": "rdma"
>        },
>        "portid": 1,
>        "referrals": [],
>        "subsystems": [
>          "testsubsystem_0"
>        ]
>      }
>    ],
>    "subsystems": [
>      {
>        "allowed_hosts": [],
>        "attr": {
>          "allow_any_host": "1",
>          "cntlid_max": "65519",
>          "cntlid_min": "1",
>          "model": "Linux",
>          "serial": "3d83c78b76623f1d",
>          "version": "1.3"
>        },
>        "namespaces": [
>          {
>            "device": {
>              "nguid": "5b722b05-e9b6-542d-ba80-62010b57775d",
>              "path": "/dev/nullb0",
>              "uuid": "26ffc8ce-73b4-321d-9685-7d7a9872c460"
>            },
>            "enable": 1,
>            "nsid": 1
>          }
>        ],
>        "nqn": "testsubsystem_0"
>      }
>    ]
> }

Thanks, I reproduced it with your json file on one server over a
loopback RDMA connection.
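
Target-side setup, for completeness. This is a rough sketch of what I
did, assuming null_blk is built as a module so /dev/nullb0 exists and
that nvmetcli is installed; I pointed "traddr" in the json at my local
IP first:

# modprobe null_blk
# nvmetcli restore loop.json
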
# time nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testsubsystem_0

real 0m4.557s
user 0m0.000s
sys 0m0.005s
# time nvme reset /dev/nvme0

real 0m13.176s
user 0m0.000s
sys 0m0.007s
# time nvme reset /dev/nvme0

real 1m16.577s
user 0m0.002s
sys 0m0.005s
# time nvme disconnect -n testsubsystem_0

NQN:testsubsystem_0 disconnected 1 controller(s)

real 1m11.541s
user 0m0.000s
sys 0m0.187s
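
For the record, the stress itself is nothing fancier than a shell loop
over the reset command, along these lines (the loop count is
arbitrary):

# for i in $(seq 1 100); do time nvme reset /dev/nvme0; done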

# dmesg
[96600.362827] nvmet: creating nvm controller 1 for subsystem
testsubsystem_0 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4d10-8030-b7c04f393432.
[96600.363038] nvme nvme0: creating 40 I/O queues.
[96604.905514] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[96604.909161] nvme nvme0: new ctrl: NQN "testsubsystem_0", addr
172.31.0.202:4420
[96614.270825] nvme nvme0: Removing ctrl: NQN "testsubsystem_0"
[96659.268006] nvmet: creating nvm controller 1 for subsystem
testsubsystem_0 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4d10-8030-b7c04f393432.
[96659.268215] nvme nvme0: creating 40 I/O queues.
[96663.801929] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[96663.805589] nvme nvme0: new ctrl: NQN "testsubsystem_0", addr
172.31.0.202:4420
[96673.130986] nvme nvme0: resetting controller
[96681.761992] nvmet: creating nvm controller 1 for subsystem
testsubsystem_0 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4d10-8030-b7c04f393432.
[96681.762133] nvme nvme0: creating 40 I/O queues.
[96686.302544] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[96688.850272] nvme nvme0: resetting controller
[96697.336217] nvmet: ctrl 1 keep-alive timer (5 seconds) expired!
[96697.361231] nvmet: ctrl 1 fatal error occurred!
[96760.824363] nvme nvme0: I/O 25 QID 0 timeout
[96760.847531] nvme nvme0: Property Set error: 881, offset 0x14
[96760.885731] nvmet: creating nvm controller 1 for subsystem
testsubsystem_0 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0056-4d10-8030-b7c04f393432.
[96760.885879] nvme nvme0: creating 40 I/O queues.
[96765.423099] nvme nvme0: mapped 40/0/0 default/read/poll queues.
[96808.112730] nvme nvme0: Removing ctrl: NQN "testsubsystem_0"
[96816.632485] nvmet: ctrl 1 keep-alive timer (5 seconds) expired!
[96816.657537] nvmet: ctrl 1 fatal error occurred!
[96879.608632] nvme nvme0: I/O 12 QID 0 timeout
[96879.632104] nvme nvme0: Property Set error: 881, offset 0x14
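
If I decode that second reset correctly: the host's Property Set to
offset 0x14 (the CC register, i.e. the shutdown/enable write) never
completes because the target's 5-second keep-alive expires during the
slow queue teardown and kills the controller, so the host only gives
up after the admin command timeout (60s by default), which matches the
~71s wall time. For reference, that property can be read back on a
fabrics controller with nvme-cli (flag names per recent nvme-cli;
adjust if yours differ):

# nvme get-property /dev/nvme0 --offset=0x14 --human-readable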
>
> >
> > And can you try it with two servers that both have CX-4? This should
> > be easier to reproduce it.
>
> I did this experiment. I only have a setup with 12 cores, so I created
> 12 nvmf queues.
>
> The reset took 4 seconds. The test did 100 loops of "nvme reset".
>
> I saw that you also complained about the disconnect flow, so I assume
> the root cause is the same.

Yeah, I think so.
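
One way to confirm Sagi's QP-destroy theory would be to histogram the
destroy path with bpftrace while the reset runs. A rough sketch,
assuming the kernel exports ib_destroy_qp_user (check /proc/kallsyms
first):

# bpftrace -e 'kprobe:ib_destroy_qp_user { @t[tid] = nsecs; }
    kretprobe:ib_destroy_qp_user /@t[tid]/ {
        @destroy_us = hist((nsecs - @t[tid]) / 1000);
        delete(@t[tid]); }'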

>
> My disconnect took 2 seconds.
>
> My FW version is 12.28.2006.

Yeah, mine is the same as yours.
# cat /sys/devices/pci0000\:00/0000\:00\:02.0/0000\:04\:00.0/infiniband/mlx5_0/fw_ver
12.28.2006
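
Side note: ibv_devinfo reports the same value, which may be handier
than the sysfs path:

# ibv_devinfo -d mlx5_0 | grep fw_ver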


>
> >> I ran a loop with the "nvme reset" command and it took 4-5 secs to
> >> reset each time.
> >>
> >>
> >
>


-- 
Best Regards,
  Yi Zhang



