[PATCH blktests v6 0/2] test queue count changes on reconnect

Daniel Wagner dwagner at suse.de
Thu Apr 6 01:30:48 PDT 2023


The target is allowed to change the number of i/o queues. Test if the
host is able to reconnect in this scenario.
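For reference, the target-side queue count is capped through the nvmet
configfs attribute attr_qid_max. A minimal sketch of changing it while a host
is connected (the configfs path layout is the standard nvmet one;
blktests-subsystem-1 is the default blktests subsystem NQN):

```shell
# Sketch: shrink the target's I/O queue count via configfs while the
# host is connected. Requires the nvmet module loaded and the subsystem
# already configured; paths assume the standard nvmet configfs layout.
cfs=/sys/kernel/config/nvmet/subsystems/blktests-subsystem-1

# Cap the subsystem to a single I/O queue; the host is expected to pick
# up the reduced count when it reconnects.
echo 1 > "${cfs}/attr_qid_max"

# Restore a larger count afterwards.
echo 128 > "${cfs}/attr_qid_max"
```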

I've incorporated Chaitanya's feedback and also optimized the runtime for both
the good and the bad case (rdma started to fail today, something to fix).
Before, the test ran for roughly 25 seconds when everything was okay.

  # nvme_trtype=rdma ./check nvme/048
  nvme/048 (Test queue count changes on reconnect)             [failed]
      runtime  6.488s  ...  6.404s
      --- tests/nvme/048.out      2023-04-06 09:38:29.574194562 +0200
      +++ /home/wagi/work/blktests/results/nodev/nvme/048.out.bad 2023-04-06 10:09:47.692036702 +0200
      @@ -1,3 +1,11 @@
       Running nvme/048
      -NQN:blktests-subsystem-1 disconnected 1 controller(s)
      +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
      +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
      +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
      +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
      +grep: /sys/class/nvme-fabrics/ctl//state: No such file or directory
      ...
      (Run 'diff -u tests/nvme/048.out /home/wagi/work/blktests/results/nodev/nvme/048.out.bad' to see the entire diff)
  # nvme_trtype=tcp ./check nvme/048
  nvme/048 (Test queue count changes on reconnect)             [passed]
      runtime  6.350s  ...  6.227s
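Incidentally, the empty controller name in the grep errors above (the double
slash in .../ctl//state) points at polling the state file of a controller
that has already gone away. A generic polling helper of that shape might look
like the following; _nvme_wait_for_state is a hypothetical name used for
illustration, not the actual blktests helper:

```shell
# Hypothetical helper (not the actual blktests code): poll a
# sysfs-style state file until it contains the wanted state or the
# timeout (in seconds) expires.
_nvme_wait_for_state() {
	local state_file=$1 want=$2 timeout=${3:-5} i

	for ((i = 0; i < timeout * 10; i++)); do
		# A missing file (e.g. the controller was already removed,
		# so the path degenerates to .../ctl//state) just counts
		# as "not there yet".
		grep -q "$want" "$state_file" 2>/dev/null && return 0
		sleep 0.1
	done
	return 1
}
```

Failing early when the controller device cannot be resolved at all would
avoid the repeated grep noise seen in the diff above.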
  
This version is based on my previously posted nvme/047 patches [1].

[1] https://lore.kernel.org/linux-nvme/20230329090202.8351-1-dwagner@suse.de/

v6:
 - moved generic rc bits back into test case
 - added checks to fail early
 - added timeout values parser for connect call
 - reduced timeouts (runtime reduction for good and bad case)
 - fixed shellcheck warnings
v5:
 - moved generic parts to nvme/rc
 - renamed test to 048
 - rebased on top of nvme/047
 - https://lore.kernel.org/linux-nvme/20230405154630.16298-1-dwagner@suse.de/
v4:
 - do not remove ports; instead depend on the host removing the
   controllers, see
   https://lore.kernel.org/linux-nvme/20220927143157.3659-1-dwagner@suse.de/
 - https://lore.kernel.org/linux-nvme/20220927143719.4214-1-dwagner@suse.de/
v3:
 - Added comment why at least 2 CPUs are needed for the test
 - Fixed shell quoting in _set_nvmet_attr_qid_max
 - https://lore.kernel.org/linux-nvme/20220913065758.134668-1-dwagner@suse.de/
v2:
 - detect if attr_qid_max is available
 - https://lore.kernel.org/linux-block/20220831153506.28234-1-dwagner@suse.de/
v1:
 - https://lore.kernel.org/linux-block/20220831120900.13129-1-dwagner@suse.de/
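The timeout argument parsing added in v6 can be sketched as follows. The
option names mirror nvme-cli's connect options, but the helper name and exact
flags here are illustrative, not necessarily what _nvme_connect_subsys() ends
up with:

```shell
# Illustrative sketch of parsing timeout options for a connect helper;
# the option names follow nvme-cli's connect command, everything else
# is an assumption for illustration.
_connect_timeout_args() {
	local args=()

	while [[ $# -gt 0 ]]; do
		case $1 in
		--keep-alive-tmo)
			args+=("--keep-alive-tmo=$2"); shift 2 ;;
		--reconnect-delay)
			args+=("--reconnect-delay=$2"); shift 2 ;;
		--ctrl-loss-tmo)
			args+=("--ctrl-loss-tmo=$2"); shift 2 ;;
		*)
			echo "unknown option: $1" >&2; return 1 ;;
		esac
	done
	echo "${args[*]}"
}
```

Lowering these values for both the good and the bad case is what brings the
runtime down from the earlier ~25 seconds.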

Daniel Wagner (2):
  nvme/rc: Add timeout argument parsing to _nvme_connect_subsys()
  nvme/048: test queue count changes on reconnect

 tests/nvme/048     | 125 +++++++++++++++++++++++++++++++++++++++++++++
 tests/nvme/048.out |   3 ++
 tests/nvme/rc      |  24 +++++++++
 3 files changed, 152 insertions(+)
 create mode 100755 tests/nvme/048
 create mode 100644 tests/nvme/048.out

-- 
2.40.0
