[RFC blktests v1 00/10] Add support to run against real target

Daniel Wagner dwagner at suse.de
Mon Mar 18 02:38:45 PDT 2024


As preparation for the planned discussion during LSFMM on

 - running blktests against real hardware/target [1]

I've played around a bit with this idea. It was fairly simple to get it going,
because all the NVMeOF tests use the common setup/cleanup helpers and allow an
external script to run instead. I wrote a simple Python script which then
forwards the setup/cleanup requests to nvmetcli with Hannes' rpc changes [2].
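
For illustration, the idea boils down to something like the sketch below. This
is not the actual script from this series; the calling convention (setup/cleanup
plus --subsysnqn) and the nvmetcli restore/clear invocations are only stand-ins
for whatever the rpc interface ends up providing:

  #!/usr/bin/env python3
  # Minimal sketch of an external setup/cleanup forwarder for blktests.
  # The calling convention and the nvmetcli restore/clear commands below
  # are placeholders, not the actual rpc API from Hannes' branch.
  import argparse
  import subprocess
  import sys

  def main():
      parser = argparse.ArgumentParser()
      parser.add_argument("action", choices=["setup", "cleanup"])
      parser.add_argument("--subsysnqn", required=True)
      args = parser.parse_args()

      if args.action == "setup":
          # Hypothetical: load a pre-generated target config for this NQN.
          cmd = ["nvmetcli", "restore", f"/etc/nvmet/{args.subsysnqn}.json"]
      else:
          # Tear the target configuration down again.
          cmd = ["nvmetcli", "clear"]

      return subprocess.run(cmd).returncode

  if __name__ == "__main__":
      sys.exit(main())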

Thus, I still run blktests against a Linux soft target over TCP. This already
uncovered an issue with an xfs-formatted disk; the same test passes if the disk
is formatted with btrfs. It seems worthwhile to extend these tests, as this
setup is already able to detect new problems:

  Running nvme/012
  umount: /mnt/blktests: not mounted.
  meta-data=/dev/nvme0n1           isize=512    agcount=4, agsize=327680 blks
           =                       sectsz=512   attr=2, projid32bit=1
           =                       crc=1        finobt=1, sparse=1, rmapbt=1
           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=1
  data     =                       bsize=4096   blocks=1310720, imaxpct=25
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
  log      =internal log           bsize=4096   blocks=16384, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0
  Discarding blocks...Done.
  mount: /mnt/blktests: mount(2) system call failed: Structure needs cleaning.
         dmesg(1) may have more information after failed mount system call.
  
With btrfs:

  Running nvme/012
  umount: /mnt/blktests: not mounted.
  btrfs-progs v6.7
  See https://btrfs.readthedocs.io for more information.
  
  Performing full device TRIM /dev/nvme0n1 (5.00GiB) ...
  NOTE: several default settings have changed in version 5.15, please make sure
        this does not affect your deployments:
        - DUP for metadata (-m dup)
        - enabled no-holes (-O no-holes)
        - enabled free-space-tree (-R free-space-tree)
  
  Label:              (null)
  UUID:               7f0e210f-907b-4b87-9a98-fab0d9d60c56
  Node size:          16384
  Sector size:        4096
  Filesystem size:    5.00GiB
  Block group profiles:
    Data:             single            8.00MiB
    Metadata:         DUP             256.00MiB
    System:           DUP               8.00MiB
  SSD detected:       yes
  Zoned device:       no
  Incompat features:  extref, skinny-metadata, no-holes, free-space-tree
  Runtime features:   free-space-tree
  Checksum:           crc32c
  Number of devices:  1
  Devices:
     ID        SIZE  PATH
      1     5.00GiB  /dev/nvme0n1
  

I am still running a somewhat older kernel version (v6.8-rc3); this might
already be fixed.


  nvme/002 (create many subsystems and test discovery)         [not run]
      nvme_trtype=tcp is not supported in this test
  nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
      runtime  12.696s  ...  12.943s
  nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
      runtime  2.895s  ...  2.765s
  nvme/005 (reset local loopback target)                       [passed]
      runtime  2.961s  ...  2.929s
  nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
      runtime  1.389s  ...  1.324s
  nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
      runtime  1.338s  ...  1.337s
  nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
      runtime  2.797s  ...  2.764s
  nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
      runtime  2.804s  ...  2.775s
  nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
      runtime  21.120s  ...  40.042s
  nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
      runtime  39.702s  ...  40.838s
  nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [failed]
      runtime  157.538s  ...  3.170s
      --- tests/nvme/012.out      2023-11-28 12:59:52.704838920 +0100
      +++ /home/wagi/work/blktests/results/nodev/nvme/012.out.bad 2024-03-18 10:04:38.572222634 +0100
      @@ -1,3 +1,4 @@
       Running nvme/012
      +FAIL: fio verify failed
       disconnected 1 controller(s)
       Test complete
  [...]
  
  
[1] https://lore.kernel.org/linux-nvme/23fhu43orn5yyi6jytsyez3f3d7liocp4cat5gfswtan33m3au@iyxhcwee6wvk/
[2] https://github.com/hreinecke/nvmetcli/tree/rpc

Daniel Wagner (10):
  common/xfs: propagate errors from _xfs_run_fio_verify_io
  nvme/{012,013,035}: check return value of _xfs_run_fio_verify_io
  nvme/rc: use long command line option for nvme
  nvme/{014,015,018,019,020,023,024,026,045,046}: use long command line
    option for nvme
  nvme/rc: connect subsys only support long options
  nvme/rc: remove unused connect options
  nvme/rc: add nqn/uuid args to target setup/cleanup helper
  nvme/031: do not open code target setup/cleanup
  nvme/rc: introduce remote target support
  nvme/030: only run against kernel soft target

 common/xfs     |   9 +++-
 tests/nvme/012 |   4 +-
 tests/nvme/013 |   4 +-
 tests/nvme/014 |   2 +-
 tests/nvme/015 |   2 +-
 tests/nvme/018 |   3 +-
 tests/nvme/019 |   3 +-
 tests/nvme/020 |   3 +-
 tests/nvme/023 |   3 +-
 tests/nvme/024 |   3 +-
 tests/nvme/026 |   3 +-
 tests/nvme/030 |   1 +
 tests/nvme/031 |  10 ++--
 tests/nvme/035 |   4 +-
 tests/nvme/045 |   6 +--
 tests/nvme/046 |   7 +--
 tests/nvme/rc  | 142 ++++++++++++++++++++++++++++++++++---------------
 17 files changed, 141 insertions(+), 68 deletions(-)

-- 
2.44.0



