blktests failures with v6.12-rc1 kernel

Shinichiro Kawasaki shinichiro.kawasaki at wdc.com
Thu Oct 3 19:40:20 PDT 2024


On Oct 03, 2024 / 13:56, Bart Van Assche wrote:
> On 10/3/24 1:02 AM, Shinichiro Kawasaki wrote:
> > #3: srp/001,002,011,012,013,014,016
> > 
> >     The seven test cases in srp test group failed due to the WARN
> >     "kmem_cache of name 'srpt-rsp-buf' already exists" [4]. The failures are
> >     recreated in a stable manner. They need further debug effort.
> 
> Does the patch below help?

Thanks Bart, but unfortunately the test cases still fail, with the message
below. I also noticed that a similar WARN is observed for 'srpt-req-buf'.
This problem probably applies to both 'srpt-rsp-buf' and 'srpt-req-buf'.
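For reference, this WARN fires whenever a second kmem_cache is created under
a name that is already registered. A minimal, hypothetical module sketch (not
the ib_srpt code) that trips the same check in mm/slab_common.c:

#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *first, *second;

static int __init dup_name_init(void)
{
	/* First creation succeeds. */
	first = kmem_cache_create("dup-name-demo", 64, 0, 0, NULL);
	if (!first)
		return -ENOMEM;
	/* Second creation with the same name trips the WARN in
	 * mm/slab_common.c, just like the 'srpt-rsp-buf-...' case. */
	second = kmem_cache_create("dup-name-demo", 64, 0, 0, NULL);
	return 0;
}

static void __exit dup_name_exit(void)
{
	/* kmem_cache_destroy() tolerates NULL. */
	kmem_cache_destroy(second);
	kmem_cache_destroy(first);
}

module_init(dup_name_init);
module_exit(dup_name_exit);
MODULE_DESCRIPTION("duplicate kmem_cache name demo");
MODULE_LICENSE("GPL");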

------------[ cut here ]------------
kmem_cache of name 'srpt-rsp-buf-fec0:0000:0000:0000:5054:00ff:fe12:3456-ens3_siw-1' already exists
WARNING: CPU: 0 PID: 47 at mm/slab_common.c:107 __kmem_cache_create_args+0xa3/0x300
Modules linked in: ib_srp scsi_transport_srp target_core_user target_core_pscsi target_core_file ib_srpt target_core_iblock target_core_mod rdma_cm iw_cm ib_cm ib_umad scsi_debug dm_service_time siw ib_uverbs null_blk ib_core nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set nf_tables qrtr sunrpc 9pnet_virtio ppdev 9pnet netfs e1000 i2c_piix4 parport_pc pcspkr parport i2c_smbus fuse loop nfnetlink zram bochs drm_vram_helper drm_ttm_helper ttm drm_kms_helper xfs nvme drm floppy nvme_core sym53c8xx scsi_transport_spi nvme_auth serio_raw ata_generic pata_acpi dm_multipath qemu_fw_cfg
CPU: 0 UID: 0 PID: 47 Comm: kworker/u16:2 Not tainted 6.12.0-rc1+ #335
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-2.fc40 04/01/2014
Workqueue: iw_cm_wq cm_work_handler [iw_cm]
RIP: 0010:__kmem_cache_create_args+0xa3/0x300
Code: 8d 58 98 48 3d d0 a7 25 b2 74 21 48 8b 7b 60 48 89 ee e8 30 cd 06 02 85 c0 75 e0 48 89 ee 48 c7 c7 d0 db b0 b1 e8 dd 92 82 ff <0f> 0b be 20 00 00 00 48 89 ef e8 8e cd 06 02 48 85 c0 0f 85 02 02
RSP: 0018:ffff88810135f508 EFLAGS: 00010292
RAX: 0000000000000000 RBX: ffff888100289400 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffffb11bea60 RDI: 0000000000000001
RBP: ffff8881144bbb00 R08: 0000000000000001 R09: ffffed102026be4b
R10: ffff88810135f25f R11: 0000000000000001 R12: 0000000000000100
R13: ffff88810135f6c8 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8883ae000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f4f8d878c58 CR3: 00000001376da000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 ? __warn.cold+0x5f/0x1f8
 ? __kmem_cache_create_args+0xa3/0x300
 ? report_bug+0x1ec/0x390
 ? handle_bug+0x58/0x90
 ? exc_invalid_op+0x13/0x40
 ? asm_exc_invalid_op+0x16/0x20
 ? __kmem_cache_create_args+0xa3/0x300
 ? __kmem_cache_create_args+0xa3/0x300
 srpt_cm_req_recv.cold+0x12e0/0x46a4 [ib_srpt]
 ? vsnprintf+0x38b/0x18f0
 ? __pfx_vsnprintf+0x10/0x10
 ? __pfx_srpt_cm_req_recv+0x10/0x10 [ib_srpt]
 ? snprintf+0xa5/0xe0
 ? __pfx_snprintf+0x10/0x10
 ? lock_release+0x460/0x7a0
 srpt_rdma_cm_req_recv+0x35d/0x460 [ib_srpt]
 ? __pfx_srpt_rdma_cm_req_recv+0x10/0x10 [ib_srpt]
 ? rcu_is_watching+0x11/0xb0
 ? trace_cm_event_handler+0xf5/0x140 [rdma_cm]
 cma_cm_event_handler+0x88/0x210 [rdma_cm]
 iw_conn_req_handler+0x7a8/0xf10 [rdma_cm]
 ? __pfx_iw_conn_req_handler+0x10/0x10 [rdma_cm]
 ? alloc_work_entries+0x12f/0x260 [iw_cm]
 cm_work_handler+0x143f/0x1ba0 [iw_cm]
 ? __pfx_cm_work_handler+0x10/0x10 [iw_cm]
 ? process_one_work+0x7de/0x1460
 ? lock_acquire+0x2d/0xc0
 ? process_one_work+0x7de/0x1460
 process_one_work+0x85a/0x1460
 ? __pfx_lock_acquire.part.0+0x10/0x10
 ? __pfx_process_one_work+0x10/0x10
 ? assign_work+0x16c/0x240
 ? lock_is_held_type+0xd5/0x130
 worker_thread+0x5e2/0xfc0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x2d1/0x3a0
 ? _raw_spin_unlock_irq+0x24/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x30/0x70
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1a/0x30
 </TASK>
irq event stamp: 53809
hardirqs last  enabled at (53823): [<ffffffffae3d59ce>] __up_console_sem+0x5e/0x70
hardirqs last disabled at (53834): [<ffffffffae3d59b3>] __up_console_sem+0x43/0x70
softirqs last  enabled at (53864): [<ffffffffae2277ab>] __irq_exit_rcu+0xbb/0x1c0
softirqs last disabled at (53843): [<ffffffffae2277ab>] __irq_exit_rcu+0xbb/0x1c0
---[ end trace 0000000000000000 ]---
ib_srpt:srpt_cm_req_recv: ib_srpt imm_data_offset = 68
------------[ cut here ]------------
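For what it's worth, one generic way to avoid this class of collision is to
make each cache name unique per creation, e.g. with a sequence number. A
hypothetical sketch, assuming only uniqueness (not a stable name) is
required; demo_create_buf_cache and demo_cache_seq are made-up names, not
the ib_srpt code:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static atomic_t demo_cache_seq = ATOMIC_INIT(0);

/* Append a monotonically increasing id so two logins from the same
 * initiator/port can never collide on the cache name. Recent kernels
 * duplicate the name string internally, so a stack buffer is fine. */
static struct kmem_cache *demo_create_buf_cache(const char *prefix,
						unsigned int size)
{
	char name[64];

	snprintf(name, sizeof(name), "%s-%d", prefix,
		 atomic_inc_return(&demo_cache_seq));
	return kmem_cache_create(name, size, 0, 0, NULL);
}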


