Sequential read from NVMe/XFS twice as slow on Fedora 42 as on Rocky 9.5

Anton Gavriliuk antosha20xx at gmail.com
Sat May 3 14:04:16 PDT 2025


There are 12 Kioxia CM-7 NVMe SSDs configured as an mdadm RAID 0 array,
formatted with XFS, and mounted at /mnt.
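For reference, an array like this would typically be assembled along these lines. This is a sketch only; the device names, chunk size, and mkfs options are illustrative assumptions, not values taken from the report:

```shell
# Example only: assemble 12 NVMe namespaces into one RAID 0 array.
# Device names and chunk size are assumptions, not the reporter's values.
mdadm --create /dev/md0 --level=0 --raid-devices=12 \
      --chunk=512K /dev/nvme{0..11}n1
mkfs.xfs /dev/md0
mount /dev/md0 /mnt
```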

Exactly the same fio command, run under Fedora 42
(6.14.5-300.fc42.x86_64) and then under Rocky 9.5
(5.14.0-503.40.1.el9_5.x86_64), shows a twofold performance difference.

/mnt/testfile size: 1 TB
server's total DRAM: 192 GB

Fedora 42

[root@localhost ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=49.6GiB/s][r=203k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2465: Sat May  3 17:51:24 2025
  read: IOPS=203k, BW=49.6GiB/s (53.2GB/s)(1487GiB/30001msec)
    slat (usec): min=3, max=1053, avg= 4.60, stdev= 1.76
    clat (usec): min=104, max=4776, avg=310.53, stdev=29.49
     lat (usec): min=110, max=4850, avg=315.13, stdev=29.82

Rocky 9.5

[root@localhost ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=96.0GiB/s][r=393k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=15467: Sun May  4 00:00:39 2025
  read: IOPS=390k, BW=95.3GiB/s (102GB/s)(2860GiB/30001msec)
    slat (nsec): min=1111, max=183816, avg=2117.94, stdev=1412.34
    clat (usec): min=81, max=1086, avg=161.60, stdev=19.67
     lat (usec): min=82, max=1240, avg=163.72, stdev=19.73
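Taking the steady-state bandwidth figures reported by the two runs above, the gap works out to roughly 1.9x, consistent with the "twofold" description (a quick check, not part of the original report):

```python
# Bandwidth figures from the two fio runs above (GiB/s).
fedora_bw = 49.6  # Fedora 42, 6.14.5-300.fc42.x86_64
rocky_bw = 95.3   # Rocky 9.5, 5.14.0-503.40.1.el9_5.x86_64

ratio = rocky_bw / fedora_bw
print(f"Rocky/Fedora bandwidth ratio: {ratio:.2f}x")  # -> 1.92x
```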

Anton



More information about the Linux-nvme mailing list