WARNING triggers at blk_mq_update_nr_hw_queues during nvme_reset_work

Gabriel Krisman Bertazi krisman at collabora.co.uk
Tue May 30 10:00:44 PDT 2017


Hi Keith,

Since the 4.12 merge window, one of the machines in Intel's CI has
started to hit the WARN_ON below at blk_mq_update_nr_hw_queues during
nvme_reset_work.  The issue persists with the latest 4.12-rc3, and a
full dmesg from boot up to the moment the WARN_ON triggers is
available at the following link:

https://intel-gfx-ci.01.org/CI/CI_DRM_2672/fi-kbl-7500u/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

Please note that the test we run in the CI involves suspending the
machine (PM), and the issue triggers when resuming.

I have not been able to get my hands on the machine yet to do an actual
bisect, but I'm wondering if you guys might have an idea of what is
wrong.

Any help is appreciated :)

[  382.419309] ------------[ cut here ]------------
[  382.419314] WARNING: CPU: 3 PID: 3098 at block/blk-mq.c:2648 blk_mq_update_nr_hw_queues+0x118/0x120
[  382.419315] Modules linked in: vgem snd_hda_codec_hdmi
snd_hda_codec_realtek snd_hda_codec_generic i915 x86_pkg_temp_thermal
intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul
ghash_clmulni_intel snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core
snd_pcm e1000e mei_me mei ptp pps_core prime_numbers
pinctrl_sunrisepoint pinctrl_intel i2c_hid
[  382.419345] CPU: 3 PID: 3098 Comm: kworker/u8:5 Tainted: G     U  W  4.12.0-rc3-CI-CI_DRM_2672+ #1
[  382.419346] Hardware name: GIGABYTE GB-BKi7(H)A-7500/MFLP7AP-00, BIOS F4 02/20/2017
[  382.419349] Workqueue: nvme nvme_reset_work
[  382.419351] task: ffff88025e2f4f40 task.stack: ffffc90000464000
[  382.419353] RIP: 0010:blk_mq_update_nr_hw_queues+0x118/0x120
[  382.419355] RSP: 0000:ffffc90000467d50 EFLAGS: 00010246
[  382.419357] RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000000001
[  382.419358] RDX: 0000000000000000 RSI: 00000000ffffffff RDI: ffff8802618d80b0
[  382.419359] RBP: ffffc90000467d70 R08: ffff88025e2f5778 R09: 0000000000000000
[  382.419361] R10: 00000000ef6f2e9b R11: 0000000000000001 R12: ffff8802618d8368
[  382.419362] R13: ffff8802618d8010 R14: ffff8802618d81f0 R15: 0000000000000000
[  382.419363] FS:  0000000000000000(0000) GS:ffff88026dd80000(0000) knlGS:0000000000000000
[  382.419364] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  382.419366] CR2: 0000000000000000 CR3: 000000025a06e000 CR4: 00000000003406e0
[  382.419367] Call Trace:
[  382.419370]  nvme_reset_work+0x948/0xff0
[  382.419374]  ? lock_acquire+0xb5/0x210
[  382.419379]  process_one_work+0x1fe/0x670
[  382.419390]  ? kthread_create_on_node+0x40/0x40
[  382.419394]  ret_from_fork+0x27/0x40
[  382.419398] Code: 48 8d 98 58 f6 ff ff 75 e5 5b 41 5c 41 5d 41 5e 5d
c3 48 8d bf a0 00 00 00 be ff ff ff ff e8 c0 48 ca ff 85 c0 0f 85 06 ff
ff ff <0f> ff e9 ff fe ff ff 90 55 31 f6 48 c7 c7 80 b2 ea 81 48 89 e5
[  382.419463] ---[ end trace 603ee21a3184ac90 ]---
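For whoever picks this up: the RIP in the trace can be mapped back to a
source line with the kernel's scripts/faddr2line helper (available since
v4.9), given a vmlinux built with debug info.  A rough sketch of pulling
the symbol+offset out of the captured warning (the vmlinux path below is
an assumption, not from this report):

```shell
# Extract the symbol+offset from the warning line above.
warn='WARNING: CPU: 3 PID: 3098 at block/blk-mq.c:2648 blk_mq_update_nr_hw_queues+0x118/0x120'
sym=$(echo "$warn" | grep -oE '[A-Za-z_][A-Za-z0-9_]*\+0x[0-9a-f]+/0x[0-9a-f]+')
echo "$sym"   # blk_mq_update_nr_hw_queues+0x118/0x120

# Then, from the kernel tree that produced this build (hypothetical path):
# ./scripts/faddr2line vmlinux "$sym"
```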

Thanks,

-- 
Gabriel Krisman Bertazi


