[bug report] WARNING: CPU: 3 PID: 522 at block/genhd.c:144 bdev_count_inflight_rw+0x26e/0x410
Breno Leitao
leitao at debian.org
Tue Jun 10 09:05:10 PDT 2025
On Tue, Jun 10, 2025 at 10:07:47AM +0800, Yu Kuai wrote:
> Hi,
>
> 在 2025/06/09 17:14, Breno Leitao 写道:
> > On Fri, Jun 06, 2025 at 11:31:06AM +0800, Yi Zhang wrote:
> > > Hello
> > >
> > > The following WARNING was triggered by blktests nvme/fc nvme/012.
> > > Please help check, and let me know if you need any info/tests. Thanks.
> > >
> > > commit: linux-block: 38f4878b9463 (HEAD, origin/for-next) Merge branch
> > > 'block-6.16' into for-next
> >
> > I am seeing a similar issue on Linus' recent tree, as of e271ed52b344
> > ("Merge tag 'pm-6.16-rc1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm").
> > CCing Jens.
> >
> > This is my stack, in case it is useful.
> >
> > WARNING: CPU: 33 PID: 1865 at block/genhd.c:146 bdev_count_inflight_rw+0x334/0x3b0
> > Modules linked in: sch_fq(E) tls(E) act_gact(E) tcp_diag(E) inet_diag(E) cls_bpf(E) intel_uncore_frequency(E) intel_uncore_frequency_common(E) skx_edac(E) skx_edac_common(E) nfit(E) libnvdimm(E) x86_pkg_temp_thermal(E) intel_powerclamp(E) coretemp(E) kvm_intel(E) kvm(E) mlx5_ib(E) iTCO_wdt(E) iTCO_vendor_support(E) xhci_pci(E) evdev(E) irqbypass(E) acpi_cpufreq(E) ib_uverbs(E) ipmi_si(E) i2c_i801(E) xhci_hcd(E) i2c_smbus(E) ipmi_devintf(E) wmi(E) ipmi_msghandler(E) button(E) sch_fq_codel(E) vhost_net(E) tun(E) vhost(E) vhost_iotlb(E) tap(E) mpls_gso(E) mpls_iptunnel(E) mpls_router(E) fou(E) loop(E) drm(E) backlight(E) drm_panel_orientation_quirks(E) autofs4(E) efivarfs(E)
> > CPU: 33 UID: 0 PID: 1865 Comm: kworker/u144:14 Kdump: loaded Tainted: G S E N 6.15.0-0_fbk701_debugnightly_rc0_upstream_12426_ge271ed52b344 #1 PREEMPT(undef)
> > Tainted: [S]=CPU_OUT_OF_SPEC, [E]=UNSIGNED_MODULE, [N]=TEST
> > Hardware name: Quanta Twin Lakes MP/Twin Lakes Passive MP, BIOS F09_3A23 12/08/2020
> > Workqueue: writeback wb_workfn (flush-btrfs-1)
> > RIP: 0010:bdev_count_inflight_rw+0x334/0x3b0
> > Code: 75 5c 41 83 3f 00 78 22 48 83 c4 40 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 0b 41 0f b6 06 84 c0 75 54 41 c7 07 00 00 00 00 eb bb <0f> 0b 48 b8 00 00 00 00 00 fc ff df 0f b6 04 03 84 c0 75 4e 41 c7
> > RSP: 0018:ffff8882ed786f20 EFLAGS: 00010286
> > RAX: 0000000000000000 RBX: 1ffff1105daf0df3 RCX: ffffffff829739f7
> > RDX: 0000000000000024 RSI: 0000000000000024 RDI: ffffffff853f79f8
> > RBP: 0000606f9ff42610 R08: ffffe8ffffd866a7 R09: 1ffffd1ffffb0cd4
> > R10: dffffc0000000000 R11: fffff91ffffb0cd5 R12: 0000000000000024
> > R13: 1ffffffff0dd0120 R14: ffffed105daf0df3 R15: ffff8882ed786f9c
> > FS: 0000000000000000(0000) GS:ffff88905fd44000(0000) knlGS:0000000000000000
> > CR2: 00007f904bc6d008 CR3: 0000001075c2b001 CR4: 00000000007726f0
> > PKRU: 55555554
> > Call Trace:
> > <TASK>
> > bdev_count_inflight+0x28/0x50
> > update_io_ticks+0x10f/0x1b0
> > blk_account_io_start+0x3a0/0x690
> > blk_mq_submit_bio+0xc7e/0x1940
>
> So, this is blk-mq IO accounting, a different problem than nvme mpath.
>
> What kind of test are you running? Can you reproduce this problem? I don't
> have a clue yet after a quick code review.
I have a number of machines running Meta production workloads, and this
one was running a webserver.
Unfortunately I don't have a reproducer.
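
For context, the warning that fires here looks like the negative
in-flight counter sanity check in bdev_count_inflight_rw(). Roughly,
paraphrasing from my reading of block/genhd.c (not a verbatim copy, so
the exact lines may differ between our trees):

	/*
	 * Paraphrased sketch: sum the per-cpu in_flight counters and warn
	 * if the signed total ever goes negative, which is what the
	 * WARNING at genhd.c:144/146 trips.
	 */
	for_each_possible_cpu(cpu) {
		inflight[READ]  += part_stat_local_read_cpu(part, in_flight[READ], cpu);
		inflight[WRITE] += part_stat_local_read_cpu(part, in_flight[WRITE], cpu);
	}

	if (WARN_ON_ONCE((int)inflight[READ] < 0))
		inflight[READ] = 0;
	if (WARN_ON_ONCE((int)inflight[WRITE] < 0))
		inflight[WRITE] = 0;

So my guess is a completion got accounted against a bdev that never saw
the matching start, but I haven't dug further yet.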