[bug report] blktests nvme/005 leads to kernel panic on the latest linux-block/for-next

Yi Zhang yi.zhang at redhat.com
Wed Aug 6 08:49:07 PDT 2025


Hello,

I hit the kernel panic below on the latest linux-block/for-next. Please
help check it, and let me know if you need any additional info or
testing. Thanks.

commit: 20c74c073217 (HEAD -> for-next, origin/for-next) Merge branch
'block-6.17' into for-next
reproducer: blktests nvme/005 with the loop or tcp transport
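
For reference, a minimal way to run the reproducer from a blktests
checkout (sketch only; assumes root, a blktests tree, and the nvmet plus
nvme-tcp or nvme-loop modules available — the transport is selected via
the blktests `nvme_trtype` config option):

```shell
# Run as root from the top of a blktests checkout.
# Select the transport the test should use (tcp or loop):
echo 'nvme_trtype=tcp' > config    # or: nvme_trtype=loop

# Run the failing test case; the panic fires on nvmet module
# unload after the test completes.
./check nvme/005
```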

console log:
[  341.092092] loop: module loaded
[  341.246981] run blktests nvme/005 at 2025-08-06 15:32:53
[  341.537716] loop0: detected capacity change from 0 to 2097152
[  341.594066] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[  341.679693] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[  341.931127] nvmet: Created nvm controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[  341.959026] nvme nvme1: creating 80 I/O queues.
[  342.105359] nvme nvme1: mapped 80/0/0 default/read/poll queues.
[  342.256079] nvme nvme1: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420, hostnqn:
nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
[  342.850745] nvmet: Created nvm controller 2 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[  342.858886] nvme nvme1: creating 80 I/O queues.
[  343.254225] nvme nvme1: mapped 80/0/0 default/read/poll queues.
[  343.539107] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
[  343.711101] block nvme1n1: no available path - failing I/O
[  343.711476] block nvme1n1: no available path - failing I/O
[  343.711691] Buffer I/O error on dev nvme1n1, logical block 262143,
async page read
[  348.367529] Unable to handle kernel paging request at virtual
address dfff800000000032
[  348.367589] KASAN: null-ptr-deref in range
[0x0000000000000190-0x0000000000000197]
[  348.367593] Mem abort info:
[  348.367595]   ESR = 0x0000000096000005
[  348.367597]   EC = 0x25: DABT (current EL), IL = 32 bits
[  348.367601]   SET = 0, FnV = 0
[  348.367603]   EA = 0, S1PTW = 0
[  348.367606]   FSC = 0x05: level 1 translation fault
[  348.367608] Data abort info:
[  348.367610]   ISV = 0, ISS = 0x00000005, ISS2 = 0x00000000
[  348.367612]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[  348.367615]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[  348.367618] [dfff800000000032] address between user and kernel address ranges
[  348.367758] Internal error: Oops: 0000000096000005 [#1]  SMP
[  348.444121] Modules linked in: loop nvmet(-) rfkill sunrpc mlx5_ib
ib_uverbs macsec mgag200 acpi_ipmi ib_core ipmi_ssif arm_spe_pmu
i2c_algo_bit mlx5_fwctl fwctl ipmi_devintf ipmi_msghandler arm_cmn
arm_dmc620_pmu vfat fat arm_dsu_pmu cppc_cpufreq fuse xfs mlx5_core
nvme nvme_core mlxfw nvme_keyring ghash_ce tls sbsa_gwdt nvme_auth
hpwdt psample pci_hyperv_intf i2c_designware_platform xgene_hwmon
i2c_designware_core dm_mirror dm_region_hash dm_log dm_mod [last
unloaded: nvmet_tcp]
[  348.486730] CPU: 53 UID: 0 PID: 7580 Comm: modprobe Not tainted
6.16.0+ #3 PREEMPT_{RT,(full)}
[  348.495418] Hardware name: HPE ProLiant RL300 Gen11/ProLiant RL300
Gen11, BIOS 1.60 03/07/2024
[  348.504018] pstate: 00400009 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[  348.510969] pc : kasan_byte_accessible+0xc/0x20
[  348.515492] lr : __kasan_check_byte+0x20/0x70
[  348.519839] sp : ffff8000c07679d0
[  348.523142] x29: ffff8000c07679d0 x28: ffff0800b09d29b0 x27: 0000000000000190
[  348.530269] x26: ffffd1b95babfe28 x25: 0000000000000000 x24: 0000000000000002
[  348.537395] x23: 0000000000000001 x22: 0000000000000000 x21: ffffd1b95b1e89dc
[  348.544521] x20: 0000000000000190 x19: 0000000000000190 x18: 0000000000000000
[  348.551647] x17: ffffd1b922a23714 x16: ffffd1b95bc6e288 x15: ffffd1b95b2e9248
[  348.558773] x14: ffffd1b922a236a8 x13: ffffd1b95d7d5c2c x12: ffff7000180ecf1b
[  348.565900] x11: 1ffff000180ecf1a x10: ffff7000180ecf1a x9 : 0000000000000035
[  348.573026] x8 : ffff07ffdb8e0000 x7 : 0000000000000000 x6 : ffffd1b95babfe28
[  348.580152] x5 : 0000000000000000 x4 : 0000000000000001 x3 : 0000000000000000
[  348.587278] x2 : 0000000000000000 x1 : dfff800000000000 x0 : 0000000000000032
[  348.594405] Call trace:
[  348.596840]  kasan_byte_accessible+0xc/0x20 (P)
[  348.601361]  lock_acquire.part.0+0x5c/0x2b8
[  348.605536]  lock_acquire+0x9c/0x190
[  348.609102]  down_write_nested+0x70/0xc0
[  348.613015]  __simple_recursive_removal+0x80/0x4b8
[  348.617797]  simple_recursive_removal+0x1c/0x30
[  348.622317]  debugfs_remove+0x60/0x90
[  348.625971]  nvmet_debugfs_subsys_free+0x3c/0x60 [nvmet]
[  348.631289]  nvmet_subsys_free+0x50/0x108 [nvmet]
[  348.635995]  nvmet_subsys_put+0x8c/0x100 [nvmet]
[  348.640614]  nvmet_exit_discovery+0x20/0x38 [nvmet]
[  348.645492]  nvmet_exit+0x1c/0x68 [nvmet]
[  348.649502]  __do_sys_delete_module.constprop.0+0x298/0x548
[  348.655065]  __arm64_sys_delete_module+0x38/0x58
[  348.659672]  invoke_syscall.constprop.0+0x78/0x1f0
[  348.664455]  do_el0_svc+0x164/0x1e0
[  348.667933]  el0_svc+0x54/0x180
[  348.671065]  el0t_64_sync_handler+0xa0/0xe8
[  348.675239]  el0t_64_sync+0x1ac/0x1b0
[  348.678892] Code: d65f03c0 d343fc00 d2d00001 f2fbffe1 (38616800)
[  348.684976] ---[ end trace 0000000000000000 ]---
[  348.689583] Kernel panic - not syncing: Oops: Fatal exception
[  348.695319] SMP: stopping secondary CPUs
[  348.699383] Kernel Offset: 0x51b8daeb0000 from 0xffff800080000000
[  348.705465] PHYS_OFFSET: 0x80000000
[  348.708941] CPU features: 0x10000,00002e00,048098a1,0441720b
[  348.714590] Memory Limit: none
[  348.892994] ---[ end Kernel panic - not syncing: Oops: Fatal exception ]---


-- 
Best Regards,
  Yi Zhang