[bug report] blktests nvme/061 hang with rdma transport and siw driver

Bernard Metzler BMT at zurich.ibm.com
Tue Apr 15 08:18:36 PDT 2025



> -----Original Message-----
> From: Shinichiro Kawasaki <shinichiro.kawasaki at wdc.com>
> Sent: Tuesday, April 15, 2025 1:13 PM
> To: linux-nvme at lists.infradead.org; linux-rdma at vger.kernel.org
> Cc: Daniel Wagner <wagi at kernel.org>
> Subject: [EXTERNAL] [bug report] blktests nvme/061 hang with rdma transport
> and siw driver
> 
> Hello all,
> 
> Recently, a new blktests test case, nvme/061, was introduced. It does "test
> fabric target teardown and setup during I/O". When I run this test case
> repeatedly with the rdma transport and the siw driver on kernel v6.15-rc2, a
> KASAN slab-use-after-free happens in __pwq_activate_work() [1], and then the
> test system hangs. The hang is reproduced reliably.
> 
> It looks like the new test case revealed a hidden problem. I observed the
> same hang with the older kernels v6.14 and v6.13, so the problem has existed
> for a while.
> 
> A fix would be appreciated. I'm willing to run tests with debug patches or
> fix candidate patches.
> 


Hi Shinichiro,

I ran 'USE_SIW=1 NVMET_TRTYPES=rdma ./check nvme/061' hundreds of times in
a row without problems so far, with KASAN checks enabled. Did you do
anything differently?

At first glance, this looks to me like a problem in the iw_cm code, where
the handling of a cm_id's work queue entries might be broken.
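
To illustrate what I mean, here is a minimal userspace sketch of the
ordering I suspect. This is NOT the actual iw_cm code; all names (conn,
work_entry, queue_event, put_conn) are made up for illustration. The
pattern: a per-connection pool of work entries is freed on the last
reference drop while one entry is still sitting on the work queue, so the
worker later touches freed memory.

/*
 * Minimal sketch of the suspected ordering; not actual iw_cm code.
 * Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct work_entry {
	void (*fn)(struct work_entry *);	/* what the worker will run */
};

struct conn {
	int refcount;
	struct work_entry *pool;		/* pre-allocated work entries */
};

static struct work_entry *pending;		/* fake one-slot work queue */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_event(struct work_entry *w)
{
	printf("handling event via entry %p\n", (void *)w);
}

/* event handler: grab a pool entry and put it on the queue */
static void queue_event(struct conn *c)
{
	pthread_mutex_lock(&lock);
	c->pool->fn = handle_event;
	pending = c->pool;
	pthread_mutex_unlock(&lock);
}

/* last reference drop frees the pool - even if an entry is still queued */
static void put_conn(struct conn *c)
{
	if (--c->refcount == 0) {
		free(c->pool);
		free(c);
	}
}

/* the "workqueue": runs after the pool was already freed */
static void *worker(void *arg)
{
	(void)arg;
	sleep(1);
	pthread_mutex_lock(&lock);
	if (pending)
		pending->fn(pending);		/* use-after-free */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;
	struct conn *c = calloc(1, sizeof(*c));

	c->refcount = 1;
	c->pool = calloc(1, sizeof(*c->pool));

	pthread_create(&t, NULL, worker, NULL);
	queue_event(c);		/* work is queued ...              */
	put_conn(c);		/* ... but the pool is freed first */
	pthread_join(t, NULL);
	return 0;
}

The allocation and free stacks in your report (alloc_work_entries() and
dealloc_work_entries() via iwcm_deref_id()) point at exactly that kind of
per-cm_id work entry pool.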

Do you have access to real iWARP hardware to test with? I tested with
iw_cxgb4 without problems.

Thanks,
Bernard.
> 
> [1]
> 
> [77516.128920][T163063] run blktests nvme/061 at 2025-04-15 10:50:44
> [77516.243039][T163125] loop0: detected capacity change from 0 to 2097152
> [77516.255638][T163128] nvmet: adding nsid 1 to subsystem blktests-
> subsystem-1
> [77516.271505][T163132] nvmet_rdma: enabling port 0 (10.0.2.15:4420)
> [77516.315953][T163139] nvme nvme1: rdma connection establishment failed (-
> 512)
> [77516.338654][T157654] nvmet: Created nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-
> 4856-b0b3-51e60b8de349.
> [77516.348375][T163139] nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for
> full support of multi-port devices.
> [77516.352460][T163139] nvme nvme1: creating 4 I/O queues.
> [77516.388708][T163139] nvme nvme1: mapped 4/0/0 default/read/poll queues.
> [77516.393808][T163139] nvme nvme1: new ctrl: NQN "blktests-subsystem-1",
> addr 10.0.2.15:4420, hostnqn: nqn.2014-08.org.nvmexpress:uuid:0f01fb42-
> 9f7f-4856-b0b3-51e60b8de349
> [77517.490278][T147091] nvmet_rdma: post_recv cmd failed
> [77517.490278][    C0] nvme nvme1: RECV for CQE 0x0000000033c0c31a failed
> with status WR flushed (5)
> [77517.490287][    C0] nvme nvme1: starting error recovery
> [77517.490357][T163171] nvmet_rdma: received IB QP event: send queue
> drained (5)
> [77517.490539][T147091] nvmet_rdma: sending cmd response failed
> [77517.521129][T147416] nvme nvme1: Reconnecting in 1 seconds...
> [77517.577846][T163189] loop1: detected capacity change from 0 to 2097152
> [77517.588566][T163192] nvmet: adding nsid 1 to subsystem blktests-
> subsystem-1
> [77517.598937][T163196] nvmet_rdma: enabling port 0 (10.0.2.15:4420)
> [77518.536807][T157654] nvmet: Created nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-
> 4856-b0b3-51e60b8de349.
> [77518.544349][T147416] nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for
> full support of multi-port devices.
> [77518.548378][T147416] nvme nvme1: creating 4 I/O queues.
> [77518.599549][T147416] nvme nvme1: mapped 4/0/0 default/read/poll queues.
> [77518.605257][T147416] nvme nvme1: Successfully reconnected (1 attempts)
> [77518.656899][T147416] nvmet_rdma: post_recv cmd failed
> [77518.656913][    C0] nvme nvme1: RECV for CQE 0x0000000069d8d80d failed
> with status WR flushed (5)
> [77518.657190][T147416] nvmet_rdma: sending cmd response failed
> [77518.657576][    C0] nvme nvme1: starting error recovery
> [77518.657642][T163212] nvmet_rdma: received IB QP event: send queue
> drained (5)
> [77518.679562][T147414] nvme nvme1: Reconnecting in 1 seconds...
> [77518.806843][T163230] loop2: detected capacity change from 0 to 2097152
> [77518.824732][T163233] nvmet: adding nsid 1 to subsystem blktests-
> subsystem-1
> [77518.840860][T163237] nvmet_rdma: enabling port 0 (10.0.2.15:4420)
> [77519.690812][T157654] nvmet: Created nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-
> 4856-b0b3-51e60b8de349.
> [77519.698371][T147413] nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for
> full support of multi-port devices.
> [77519.701969][T147413] nvme nvme1: creating 4 I/O queues.
> [77519.756714][T147413] nvme nvme1: mapped 4/0/0 default/read/poll queues.
> [77519.763289][T147413] nvme nvme1: Successfully reconnected (1 attempts)
> [77519.931468][    C0] nvme nvme1: RECV for CQE 0x000000001546cc5d failed
> with status WR flushed (5)
> [77519.931495][T147414] nvmet_rdma: post_recv cmd failed
> [77519.931918][    C0] nvme nvme1: starting error recovery
> [77519.932315][T147414] nvmet_rdma: sending cmd response failed
> [77519.934507][T163096] nvmet_rdma: received IB QP event: QP fatal error
> (1)
> [77519.957634][T147091] nvme nvme1: Reconnecting in 1 seconds...
> [77520.014605][T163271] loop3: detected capacity change from 0 to 2097152
> [77520.036545][T163274] nvmet: adding nsid 1 to subsystem blktests-
> subsystem-1
> [77520.059189][T163278] nvmet_rdma: enabling port 0 (10.0.2.15:4420)
> [77520.970973][T157654] nvmet: Created nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-
> 4856-b0b3-51e60b8de349.
> [77520.979516][T147091] nvme nvme1: Please enable CONFIG_NVME_MULTIPATH for
> full support of multi-port devices.
> [77520.983088][T147091] nvme nvme1: creating 4 I/O queues.
> [77521.051099][T147091] nvme nvme1: mapped 4/0/0 default/read/poll queues.
> [77521.056904][T147091] nvme nvme1: Successfully reconnected (1 attempts)
> [77521.139591][    C3] nvme nvme1: RECV for CQE 0x00000000315e08be failed
> with status WR flushed (5)
> [77521.139635][T146462] nvmet_rdma: post_recv cmd failed
> [77521.139989][    C3] nvme nvme1: starting error recovery
> [77521.140221][T146462] nvmet_rdma: sending cmd response failed
> [77521.142903][T163094] nvmet_rdma: received IB QP event: QP fatal error
> (1)
> [77521.144318][T147091]
> ==================================================================
> [77521.145465][T147091] BUG: KASAN: slab-use-after-free in
> __pwq_activate_work+0x1ff/0x250
> [77521.146488][T147091] Read of size 8 at addr ffff88811f9cf800 by task
> kworker/u16:1/147091
> [77521.147569][T147091]
> [77521.148233][T147091] CPU: 2 UID: 0 PID: 147091 Comm: kworker/u16:1 Not
> tainted 6.15.0-rc2+ #27 PREEMPT(voluntary)
> [77521.148238][T147091] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.16.3-3.fc41 04/01/2014
> [77521.148240][T147091] Workqueue:  0x0 (iw_cm_wq)
> [77521.148248][T147091] Call Trace:
> [77521.148250][T147091]  <TASK>
> [77521.148252][T147091]  dump_stack_lvl+0x6a/0x90
> [77521.148257][T147091]  print_report+0x174/0x554
> [77521.148262][T147091]  ? __virt_addr_valid+0x208/0x430
> [77521.148265][T147091]  ? __pwq_activate_work+0x1ff/0x250
> [77521.148268][T147091]  kasan_report+0xae/0x170
> [77521.148272][T147091]  ? __pwq_activate_work+0x1ff/0x250
> [77521.148275][T147091]  __pwq_activate_work+0x1ff/0x250
> [77521.148278][T147091]  pwq_dec_nr_in_flight+0x8c5/0xfb0
> [77521.148282][T147091]  process_one_work+0xc11/0x1460
> [77521.148286][T147091]  ? __pfx_process_one_work+0x10/0x10
> [77521.148293][T147091]  ? assign_work+0x16c/0x240
> [77521.148296][T147091]  worker_thread+0x5ef/0xfd0
> [77521.148300][T147091]  ? __pfx_worker_thread+0x10/0x10
> [77521.148302][T147091]  kthread+0x3b0/0x770
> [77521.148306][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.148308][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.148312][T147091]  ? _raw_spin_unlock_irq+0x24/0x50
> [77521.148315][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.148317][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.148319][T147091]  ret_from_fork+0x30/0x70
> [77521.148322][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.148324][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.148328][T147091]  </TASK>
> [77521.148329][T147091]
> [77521.170936][T147091] Allocated by task 147416:
> [77521.171538][T147091]  kasan_save_stack+0x2c/0x50
> [77521.172209][T147091]  kasan_save_track+0x10/0x30
> [77521.172894][T147091]  __kasan_kmalloc+0xa6/0xb0
> [77521.173582][T147091]  alloc_work_entries+0xa9/0x260 [iw_cm]
> [77521.174314][T147091]  iw_cm_connect+0x23/0x4a0 [iw_cm]
> [77521.175005][T147091]  rdma_connect_locked+0xbfd/0x1920 [rdma_cm]
> [77521.175770][T147091]  nvme_rdma_cm_handler+0x8e5/0x1b60 [nvme_rdma]
> [77521.176475][T147091]  cma_cm_event_handler+0xae/0x320 [rdma_cm]
> [77521.177132][T147091]  cma_work_handler+0x106/0x1b0 [rdma_cm]
> [77521.177796][T147091]  process_one_work+0x84f/0x1460
> [77521.178410][T147091]  worker_thread+0x5ef/0xfd0
> [77521.179023][T147091]  kthread+0x3b0/0x770
> [77521.179593][T147091]  ret_from_fork+0x30/0x70
> [77521.180163][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.180750][T147091]
> [77521.181177][T147091] Freed by task 147091:
> [77521.181663][T147091]  kasan_save_stack+0x2c/0x50
> [77521.182178][T147091]  kasan_save_track+0x10/0x30
> [77521.182728][T147091]  kasan_save_free_info+0x37/0x60
> [77521.183336][T147091]  __kasan_slab_free+0x4b/0x70
> [77521.183921][T147091]  kfree+0x13a/0x4b0
> [77521.184440][T147091]  dealloc_work_entries+0x125/0x1f0 [iw_cm]
> [77521.185114][T147091]  iwcm_deref_id+0x6f/0xa0 [iw_cm]
> [77521.185691][T147091]  cm_work_handler+0x136/0x1ba0 [iw_cm]
> [77521.186280][T147091]  process_one_work+0x84f/0x1460
> [77521.186844][T147091]  worker_thread+0x5ef/0xfd0
> [77521.187390][T147091]  kthread+0x3b0/0x770
> [77521.187905][T147091]  ret_from_fork+0x30/0x70
> [77521.188452][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.189026][T147091]
> [77521.189431][T147091] Last potentially related work creation:
> [77521.190064][T147091]  kasan_save_stack+0x2c/0x50
> [77521.190628][T147091]  kasan_record_aux_stack+0xa3/0xb0
> [77521.191227][T147091]  __queue_work+0x2ff/0x1390
> [77521.191787][T147091]  queue_work_on+0x67/0xc0
> [77521.192327][T147091]  cm_event_handler+0x46a/0x820 [iw_cm]
> [77521.192966][T147091]  siw_cm_upcall+0x330/0x650 [siw]
> [77521.193552][T147091]  siw_cm_work_handler+0x6b9/0x2b20 [siw]
> [77521.194189][T147091]  process_one_work+0x84f/0x1460
> [77521.194756][T147091]  worker_thread+0x5ef/0xfd0
> [77521.195306][T147091]  kthread+0x3b0/0x770
> [77521.195817][T147091]  ret_from_fork+0x30/0x70
> [77521.196345][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.196900][T147091]
> [77521.197279][T147091] The buggy address belongs to the object at
> ffff88811f9cf800
> [77521.197279][T147091]  which belongs to the cache kmalloc-512 of size 512
> [77521.198740][T147091] The buggy address is located 0 bytes inside of
> [77521.198740][T147091]  freed 512-byte region [ffff88811f9cf800,
> ffff88811f9cfa00)
> [77521.200184][T147091]
> [77521.200589][T147091] The buggy address belongs to the physical page:
> [77521.201294][T147091] page: refcount:0 mapcount:0
> mapping:0000000000000000 index:0x0 pfn:0x11f9cc
> [77521.202199][T147091] head: order:2 mapcount:0 entire_mapcount:0
> nr_pages_mapped:0 pincount:0
> [77521.203086][T147091] flags:
> 0x17ffffc0000040(head|node=0|zone=2|lastcpupid=0x1fffff)
> [77521.203902][T147091] page_type: f5(slab)
> [77521.204444][T147091] raw: 0017ffffc0000040 ffff888100042c80
> ffffea0004899d00 dead000000000002
> [77521.205304][T147091] raw: 0000000000000000 0000000000100010
> 00000000f5000000 0000000000000000
> [77521.206194][T147091] head: 0017ffffc0000040 ffff888100042c80
> ffffea0004899d00 dead000000000002
> [77521.207075][T147091] head: 0000000000000000 0000000000100010
> 00000000f5000000 0000000000000000
> [77521.207949][T147091] head: 0017ffffc0000002 ffffea00047e7301
> 00000000ffffffff 00000000ffffffff
> [77521.208836][T147091] head: ffffffffffffffff 0000000000000000
> 00000000ffffffff 0000000000000004
> [77521.209713][T147091] page dumped because: kasan: bad access detected
> [77521.210439][T147091]
> [77521.210898][T147091] Memory state around the buggy address:
> [77521.211578][T147091]  ffff88811f9cf700: fc fc fc fc fc fc fc fc fc fc fc
> fc fc fc fc fc
> [77521.212430][T147091]  ffff88811f9cf780: fc fc fc fc fc fc fc fc fc fc fc
> fc fc fc fc fc
> [77521.213295][T147091] >ffff88811f9cf800: fa fb fb fb fb fb fb fb fb fb fb
> fb fb fb fb fb
> [77521.214166][T147091]                    ^
> [77521.214706][T147091]  ffff88811f9cf880: fb fb fb fb fb fb fb fb fb fb fb
> fb fb fb fb fb
> [77521.215495][T147091]  ffff88811f9cf900: fb fb fb fb fb fb fb fb fb fb fb
> fb fb fb fb fb
> [77521.216433][T147091]
> ==================================================================
> [77521.217386][T147091] ------------[ cut here ]------------
> [77521.218160][T147091] WARNING: CPU: 2 PID: 147091 at
> kernel/workqueue.c:1678 __pwq_activate_work+0x1f0/0x250
> [77521.219196][T147091] Modules linked in: siw ib_uverbs nvmet_rdma nvmet
> nvme_rdma nvme_fabrics ib_umad dm_service_time rtrs_core rdma_cm nbd iw_cm
> ib_cm ib_core nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set
> nf_tables qrtr sunrpc ppdev 9pnet_virtio 9pnet netfs parport_pc pcspkr
> e1000 i2c_piix4 parport i2c_smbus fuse loop dm_multipath nfnetlink
> vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport
> vsock zram vmw_vmci xfs nvme bochs drm_client_lib nvme_core
> drm_shmem_helper sym53c8xx drm_kms_helper nvme_keyring scsi_transport_spi
> drm floppy nvme_auth serio_raw ata_generic pata_acpi qemu_fw_cfg [last
> unloaded: ib_uverbs]
> [77521.225863][T147091] CPU: 2 UID: 0 PID: 147091 Comm: kworker/u16:1
> Tainted: G    B               6.15.0-rc2+ #27 PREEMPT(voluntary)
> [77521.227138][T147091] Tainted: [B]=BAD_PAGE
> [77521.227846][T147091] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.16.3-3.fc41 04/01/2014
> [77521.228953][T147091] Workqueue:  0x0 (iw_cm_wq)
> [77521.229761][T147091] RIP: 0010:__pwq_activate_work+0x1f0/0x250
> [77521.230650][T147091] Code: 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1
> ea 03 80 3c 02 00 75 64 4d 89 6c 24 50 4c 8b 65 00 49 8d 74 24 60 e9 ca fe
> ff ff <0f> 0b e9 42 fe ff ff 48 89 f7 e8 d1 d7 93 00 e9 2c fe ff ff 48 89
> [77521.233126][T147091] RSP: 0018:ffff88810740fbc0 EFLAGS: 00010046
> [77521.234031][T147091] RAX: 0000000000000001 RBX: ffff88811f9cf800 RCX:
> ffffffffa84fcdc6
> [77521.235155][T147091] RDX: 0000000000000001 RSI: 0000000000000008 RDI:
> ffffffffadc7f4e0
> [77521.236241][T147091] RBP: ffff888134e1cc00 R08: 0000000000000001 R09:
> fffffbfff5b8fe9c
> [77521.237314][T147091] R10: ffffffffadc7f4e7 R11: 0000000000300e30 R12:
> 0000000000000001
> [77521.238386][T147091] R13: ffff888137f0a080 R14: ffff88810016e000 R15:
> 0000000000000007
> [77521.239436][T147091] FS:  0000000000000000(0000)
> GS:ffff88840053f000(0000) knlGS:0000000000000000
> [77521.240562][T147091] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [77521.241532][T147091] CR2: 00007f2c865bf000 CR3: 0000000131d92000 CR4:
> 00000000000006f0
> [77521.242600][T147091] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [77521.243655][T147091] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7:
> 0000000000000400
> [77521.244703][T147091] Call Trace:
> [77521.245400][T147091]  <TASK>
> [77521.246092][T147091]  pwq_dec_nr_in_flight+0x8c5/0xfb0
> [77521.246930][T147091]  process_one_work+0xc11/0x1460
> [77521.247754][T147091]  ? __pfx_process_one_work+0x10/0x10
> [77521.248569][T147091]  ? assign_work+0x16c/0x240
> [77521.249360][T147091]  worker_thread+0x5ef/0xfd0
> [77521.250187][T147091]  ? __pfx_worker_thread+0x10/0x10
> [77521.251021][T147091]  kthread+0x3b0/0x770
> [77521.251768][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.252549][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.253333][T147091]  ? _raw_spin_unlock_irq+0x24/0x50
> [77521.254152][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.254922][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.255686][T147091]  ret_from_fork+0x30/0x70
> [77521.256441][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.257204][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.257964][T147091]  </TASK>
> [77521.258600][T147091] irq event stamp: 0
> [77521.259306][T147091] hardirqs last  enabled at (0): [<0000000000000000>]
> 0x0
> [77521.260245][T147091] hardirqs last disabled at (0): [<ffffffffa84f3b3f>]
> copy_process+0x1f3f/0x87d0
> [77521.261371][T147091] softirqs last  enabled at (0): [<ffffffffa84f3ba4>]
> copy_process+0x1fa4/0x87d0
> [77521.262456][T147091] softirqs last disabled at (0): [<0000000000000000>]
> 0x0
> [77521.263367][T147091] ---[ end trace 0000000000000000 ]---
> [77521.264199][T147091] Oops: general protection fault, probably for non-
> canonical address 0xe049fc4da00047d3: 0000 [#1] SMP KASAN PTI
> [77521.265504][T147091] KASAN: maybe wild-memory-access in range
> [0x0250026d00023e98-0x0250026d00023e9f]
> [77521.266618][T147091] CPU: 2 UID: 0 PID: 147091 Comm: kworker/u16:1
> Tainted: G    B   W           6.15.0-rc2+ #27 PREEMPT(voluntary)
> [77521.267876][T147091] Tainted: [B]=BAD_PAGE, [W]=WARN
> [77521.268667][T147091] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.16.3-3.fc41 04/01/2014
> [77521.269803][T147091] Workqueue:  0x0 (iw_cm_wq)
> [77521.270545][T147091] RIP:
> 0010:__list_del_entry_valid_or_report+0xaf/0x1d0
> [77521.271489][T147091] Code: 48 c1 ea 03 80 3c 02 00 0f 85 0e 01 00 00 48
> 39 5d 00 75 71 48 b8 00 00 00 00 00 fc ff df 49 8d 6c 24 08 48 89 ea 48 c1
> ea 03 <80> 3c 02 00 0f 85 db 00 00 00 49 3b 5c 24 08 0f 85 81 00 00 00 5b
> [77521.273666][T147091] RSP: 0018:ffff88810740fb48 EFLAGS: 00010002
> [77521.274497][T147091] RAX: dffffc0000000000 RBX: ffff88811f9cf808 RCX:
> 0000000000000000
> [77521.275520][T147091] RDX: 004a004da00047d3 RSI: 0000000000000008 RDI:
> ffff88810740fb10
> [77521.276479][T147091] RBP: 0250026d00023e9b R08: 0000000000000000 R09:
> fffffbfff598bf0c
> [77521.277439][T147091] R10: ffffffffacc5f867 R11: 0000000000300e30 R12:
> 0250026d00023e93
> [77521.278380][T147091] R13: 1ffff1102002dc0d R14: ffff88811f9cf808 R15:
> 0250026d00023e8b
> [77521.279336][T147091] FS:  0000000000000000(0000)
> GS:ffff88840053f000(0000) knlGS:0000000000000000
> [77521.280326][T147091] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [77521.281184][T147091] CR2: 00007f2c865bf000 CR3: 0000000131d92000 CR4:
> 00000000000006f0
> [77521.282153][T147091] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [77521.283131][T147091] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7:
> 0000000000000400
> [77521.284098][T147091] Call Trace:
> [77521.284684][T147091]  <TASK>
> [77521.285238][T147091]  move_linked_works+0xa8/0x2a0
> [77521.285938][T147091]  __pwq_activate_work+0xc4/0x250
> [77521.286628][T147091]  pwq_dec_nr_in_flight+0x8c5/0xfb0
> [77521.287337][T147091]  process_one_work+0xc11/0x1460
> [77521.288036][T147091]  ? __pfx_process_one_work+0x10/0x10
> [77521.288758][T147091]  ? assign_work+0x16c/0x240
> [77521.289443][T147091]  worker_thread+0x5ef/0xfd0
> [77521.290111][T147091]  ? __pfx_worker_thread+0x10/0x10
> [77521.290817][T147091]  kthread+0x3b0/0x770
> [77521.291447][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.292119][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.292796][T147091]  ? _raw_spin_unlock_irq+0x24/0x50
> [77521.293500][T147091]  ? rcu_is_watching+0x11/0xb0
> [77521.294177][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.294836][T147091]  ret_from_fork+0x30/0x70
> [77521.295484][T147091]  ? __pfx_kthread+0x10/0x10
> [77521.296149][T147091]  ret_from_fork_asm+0x1a/0x30
> [77521.296816][T147091]  </TASK>
> [77521.297361][T147091] Modules linked in: siw ib_uverbs nvmet_rdma nvmet
> nvme_rdma nvme_fabrics ib_umad dm_service_time rtrs_core rdma_cm nbd iw_cm
> ib_cm ib_core nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
> nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
> nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set
> nf_tables qrtr sunrpc ppdev 9pnet_virtio 9pnet netfs parport_pc pcspkr
> e1000 i2c_piix4 parport i2c_smbus fuse loop dm_multipath nfnetlink
> vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport
> vsock zram vmw_vmci xfs nvme bochs drm_client_lib nvme_core
> drm_shmem_helper sym53c8xx drm_kms_helper nvme_keyring scsi_transport_spi
> drm floppy nvme_auth serio_raw ata_generic pata_acpi qemu_fw_cfg [last
> unloaded: ib_uverbs]
> [77521.304220][T147091] ---[ end trace 0000000000000000 ]---
> [77521.305044][T147091] RIP:
> 0010:__list_del_entry_valid_or_report+0xaf/0x1d0
> [77521.305969][T147091] Code: 48 c1 ea 03 80 3c 02 00 0f 85 0e 01 00 00 48
> 39 5d 00 75 71 48 b8 00 00 00 00 00 fc ff df 49 8d 6c 24 08 48 89 ea 48 c1
> ea 03 <80> 3c 02 00 0f 85 db 00 00 00 49 3b 5c 24 08 0f 85 81 00 00 00 5b
> [77521.308176][T147091] RSP: 0018:ffff88810740fb48 EFLAGS: 00010002
> [77521.309049][T147091] RAX: dffffc0000000000 RBX: ffff88811f9cf808 RCX:
> 0000000000000000
> [77521.310057][T147091] RDX: 004a004da00047d3 RSI: 0000000000000008 RDI:
> ffff88810740fb10
> [77521.311067][T147091] RBP: 0250026d00023e9b R08: 0000000000000000 R09:
> fffffbfff598bf0c
> [77521.312062][T147091] R10: ffffffffacc5f867 R11: 0000000000300e30 R12:
> 0250026d00023e93
> [77521.313038][T147091] R13: 1ffff1102002dc0d R14: ffff88811f9cf808 R15:
> 0250026d00023e8b
> [77521.314055][T147091] FS:  0000000000000000(0000)
> GS:ffff88840053f000(0000) knlGS:0000000000000000
> [77521.315149][T147091] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [77521.316112][T147091] CR2: 00007f2c865bf000 CR3: 0000000131d92000 CR4:
> 00000000000006f0
> [77521.317071][T147091] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [77521.318100][T147091] DR3: 0000000000000000 DR6: 00000000ffff07f0 DR7:
> 0000000000000400
> [77521.319112][T147091] note: kworker/u16:1[147091] exited with irqs
> disabled
> [77521.320117][T147091] note: kworker/u16:1[147091] exited with
> preempt_count 2

