I/O Errors due to keepalive timeouts with NVMf RDMA

Sagi Grimberg sagi at grimberg.me
Mon Jul 10 04:41:28 PDT 2017


> Host:
> [353698.784927] nvme nvme0: creating 44 I/O queues.
> [353699.572467] nvme nvme0: new ctrl: NQN
> "nqn.2014-08.org.nvmexpress:NVMf:uuid:c36f2c23-354d-416c-95de-f2b8ec353a82",
> addr 1.1.1.2:4420
> [353960.804750] nvme nvme0: SEND for CQE 0xffff88011c0cca58 failed with status
> transport retry counter exceeded (12)

Exhausted retries, wow... That is really strange...

The host sent the keep-alive and it never made it to the target (or
the ack never made it back); the HCA retried 7+ times and gave up.
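
For the record, that retry budget is something the host asks for at
connect time: the RC transport retry count in the RDMA connect
parameters. Once the HCA burns through it without seeing an ack, the
send completes with IB_WC_RETRY_EXC_ERR, which is status 12, exactly
what your log shows. A minimal sketch against the kernel RDMA CM API
(7 is the value the nvme-rdma host requests):

#include <rdma/rdma_cm.h>

static int connect_with_retry_budget(struct rdma_cm_id *id)
{
	struct rdma_conn_param param = { };

	/*
	 * The HCA resends an un-acked packet up to retry_count times
	 * before failing the WR with IB_WC_RETRY_EXC_ERR
	 * ("transport retry counter exceeded", status 12).
	 */
	param.retry_count = 7;
	/* separate budget for receiver-not-ready NAKs */
	param.rnr_retry_count = 7;

	return rdma_connect(id, &param);
}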

Are you running with a switch? Which one? Is the switch experiencing
high ingress load (and possibly dropping packets)? See the sketch
below for one way to check the HCA side.
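
If it helps narrow this down, you can also watch the HCA port error
counters in sysfs on both ends and see if anything climbs while you
run traffic. A rough diagnostic sketch (the device name "mlx5_0" and
port 1 are placeholders, substitute your own):

#include <stdio.h>

static void show_counter(const char *dev, int port, const char *name)
{
	char path[256], buf[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/infiniband/%s/ports/%d/counters/%s",
		 dev, port, name);
	f = fopen(path, "r");
	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%-24s %s", name, buf);
	fclose(f);
}

int main(void)
{
	const char *names[] = {
		"port_xmit_discards",   /* packets dropped on egress */
		"port_rcv_errors",      /* failed/malformed receives */
		"link_error_recovery",  /* link retrained after errors */
		"symbol_error",         /* physical-layer errors */
	};
	unsigned int i;

	for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
		show_counter("mlx5_0", 1, names[i]);
	return 0;
}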

> [353960.840895] nvme nvme0: Reconnecting in 10 seconds...
> [353960.853582] blk_update_request: I/O error, dev nvme0n1, sector 14183280
> [353960.869599] blk_update_request: I/O error, dev nvme0n1, sector 32251848
> [353960.869601] blk_update_request: I/O error, dev nvme0n1, sector 3500872
> [353960.869602] blk_update_request: I/O error, dev nvme0n1, sector 3266216
> [353960.869603] blk_update_request: I/O error, dev nvme0n1, sector 12926288
> [353960.869607] blk_update_request: I/O error, dev nvme0n1, sector 27661040
> [353960.869609] blk_update_request: I/O error, dev nvme0n1, sector 32564280
> [353960.869610] blk_update_request: I/O error, dev nvme0n1, sector 12912072
> [353960.869611] blk_update_request: I/O error, dev nvme0n1, sector 16570728
> [353960.869613] blk_update_request: I/O error, dev nvme0n1, sector 33096144
> [353961.036738] nvme0n1: detected capacity change from 68719476736 to
> -67526893324191744
> [353961.055986] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.073360] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090572] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090575] ldm_validate_partition_table(): Disk read failed.
> [353961.090578] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090582] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090585] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090589] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090593] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090598] Buffer I/O error on dev nvme0n1, logical block 3, async page
> read
> [353961.090602] Buffer I/O error on dev nvme0n1, logical block 0, async page
> read
> [353961.090607]  nvme0n1: unable to read partition table
> [353973.021283] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [353973.048717] nvme nvme0: Failed reconnect attempt 1
> [353973.060073] nvme nvme0: Reconnecting in 10 seconds...
> [353983.101337] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [353983.128739] nvme nvme0: Failed reconnect attempt 2
> [353983.140280] nvme nvme0: Reconnecting in 10 seconds...
> [353993.181354] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [353993.208714] nvme nvme0: Failed reconnect attempt 3
> [353993.208716] nvme nvme0: Reconnecting in 10 seconds...
> [354003.229292] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [354003.256712] nvme nvme0: Failed reconnect attempt 4
> [354003.268189] nvme nvme0: Reconnecting in 10 seconds...
> [354013.309211] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [354013.336695] nvme nvme0: Failed reconnect attempt 5
> [354013.348043] nvme nvme0: Reconnecting in 10 seconds...
> [354023.389262] nvme nvme0: rdma_resolve_addr wait failed (-104).
> [354023.416682] nvme nvme0: Failed reconnect attempt 6
> [354023.428021] nvme nvme0: Reconnecting in 10 seconds...

And why aren't you able to reconnect? Note that -104 is -ECONNRESET,
which usually means the target side is actively rejecting the CM
connect rather than simply not answering.

Something smells misconfigured here...
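
For reference, that error comes out of a resolve-then-wait pattern:
the driver kicks off rdma_resolve_addr() and then waits for the CM
event handler to post a status. Roughly like this simplified sketch
(not the exact driver code); a peer reject is what gets reported as
-ECONNRESET (-104):

#include <rdma/rdma_cm.h>
#include <linux/completion.h>
#include <linux/errno.h>

struct queue_ctx {
	struct completion cm_done;
	int cm_error;
};

static int cm_handler(struct rdma_cm_id *id, struct rdma_cm_event *ev)
{
	struct queue_ctx *q = id->context;

	switch (ev->event) {
	case RDMA_CM_EVENT_ADDR_RESOLVED:
		q->cm_error = 0;
		break;
	case RDMA_CM_EVENT_ADDR_ERROR:
		q->cm_error = ev->status;
		break;
	case RDMA_CM_EVENT_REJECTED:
		/* the target refused the connect attempt */
		q->cm_error = -ECONNRESET;
		break;
	default:
		return 0;	/* intermediate event, keep waiting */
	}
	complete(&q->cm_done);
	return 0;
}

static int resolve_addr_and_wait(struct rdma_cm_id *id,
				 struct sockaddr *dst,
				 struct queue_ctx *q)
{
	int ret;

	ret = rdma_resolve_addr(id, NULL, dst, 2000 /* ms */);
	if (ret)
		return ret;
	wait_for_completion(&q->cm_done);
	return q->cm_error;	/* the value printed in the log */
}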


