[PATCH] NVME: call blk_integrity_unregister after queue is cleaned up

Ming Lei ming.lei at redhat.com
Wed Dec 6 02:30:09 PST 2017


In the IO completion path, bio_integrity_advance() is often called, and
it calls blk_get_integrity() internally. But blk_integrity_unregister()
zeroes the blk_integrity buffer embedded in queue->integrity, which sets
blk_integrity->profile to NULL; blk_get_integrity() then returns NULL,
and the completion path dereferences that NULL pointer without checking
it, causing the kernel oops below [1].
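
To make the failing path concrete, here is a minimal user-space model
of the dereference (my sketch only, not kernel code; the *_model
helpers merely mirror the shape of blk_get_integrity() and
bio_integrity_bytes() from 4.14-era sources):

	#include <stdio.h>

	/* Trimmed model of struct blk_integrity; in the kernel the real
	 * struct is embedded in struct request_queue, not pointed to. */
	struct blk_integrity {
		const void *profile;		/* cleared on unregister */
		unsigned char flags;
		unsigned char tuple_size;	/* offset 9 in the code dump */
		unsigned char interval_exp;	/* offset 0xa, the faulting read */
	};

	struct request_queue {
		struct blk_integrity integrity;
	};

	/* Mirrors blk_get_integrity(): NULL once the profile is cleared. */
	static struct blk_integrity *blk_get_integrity_model(struct request_queue *q)
	{
		return q->integrity.profile ? &q->integrity : NULL;
	}

	/* Mirrors bio_integrity_bytes(); note there is no NULL check, so
	 * the loads of interval_exp and tuple_size fault. */
	static unsigned int bio_integrity_bytes_model(struct blk_integrity *bi,
						      unsigned int sectors)
	{
		return (sectors >> (bi->interval_exp - 9)) * bi->tuple_size;
	}

	int main(void)
	{
		/* State after blk_integrity_unregister() has run. */
		struct request_queue q = { .integrity = { .profile = NULL } };

		struct blk_integrity *bi = blk_get_integrity_model(&q);

		/* The completion path dereferences the NULL return; this is
		 * the bio_integrity_advance+0x3d crash in the log below. */
		printf("%u\n", bio_integrity_bytes_model(bi, 16));
		return 0;
	}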

This patch fixes the issue by calling blk_integrity_unregister() only
after blk_cleanup_queue(). blk_cleanup_queue() drains and completes all
in-flight requests, so once it returns no completion path can call
blk_get_integrity() any more.
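
With the patch applied, the teardown order becomes (mirroring the diff
below; the comments are my reading of why this order is safe):

	del_gendisk(ns->disk);
	blk_cleanup_queue(ns->queue);	/* drains and completes all
					 * in-flight requests */
	if (blk_get_integrity(ns->disk))
		blk_integrity_unregister(ns->disk);	/* safe: no completion
							 * path can run now */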

[1] kernel oops log
[  122.068007] BUG: unable to handle kernel NULL pointer dereference at 000000000000000a
[  122.076760] IP: bio_integrity_advance+0x3d/0xf0
[  122.081815] PGD 0 P4D 0
[  122.084641] Oops: 0000 [#1] SMP
[  122.088142] Modules linked in: sunrpc ipmi_ssif intel_rapl vfat fat x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass mei_me ipmi_si crct10dif_pclmul crc32_pclmul sg mei ghash_clmulni_intel mxm_wmi ipmi_devintf iTCO_wdt intel_cstate intel_uncore pcspkr intel_rapl_perf iTCO_vendor_support dcdbas ipmi_msghandler lpc_ich acpi_power_meter shpchp wmi dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel ahci nvme tg3 libahci nvme_core i2c_core libata ptp megaraid_sas pps_core dm_mirror dm_region_hash dm_log dm_mod
[  122.149577] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.14.0-11.el7a.x86_64 #1
[  122.157635] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.5.5 08/16/2017
[  122.166179] task: ffff8802ff1e8000 task.stack: ffffc90000130000
[  122.172785] RIP: 0010:bio_integrity_advance+0x3d/0xf0
[  122.178419] RSP: 0018:ffff88047fc03d70 EFLAGS: 00010006
[  122.184248] RAX: ffff880473b08000 RBX: ffff880458c71a80 RCX: ffff880473b08248
[  122.192209] RDX: 0000000000000000 RSI: 000000000000003c RDI: ffffc900038d7ba0
[  122.200171] RBP: ffff88047fc03d78 R08: 0000000000000001 R09: ffffffffa01a78b5
[  122.208132] R10: ffff88047fc1eda0 R11: ffff880458c71ad0 R12: 0000000000007800
[  122.216094] R13: 0000000000000000 R14: 0000000000007800 R15: ffff880473a39b40
[  122.224056] FS:  0000000000000000(0000) GS:ffff88047fc00000(0000) knlGS:0000000000000000
[  122.233083] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  122.239494] CR2: 000000000000000a CR3: 0000000001c09002 CR4: 00000000001606e0
[  122.247455] Call Trace:
[  122.250183]  <IRQ>
[  122.252429]  bio_advance+0x28/0xf0
[  122.256217]  blk_update_request+0xa1/0x310
[  122.260778]  blk_mq_end_request+0x1e/0x70
[  122.265256]  nvme_complete_rq+0x1c/0xd0 [nvme_core]
[  122.270699]  nvme_pci_complete_rq+0x85/0x130 [nvme]
[  122.276140]  __blk_mq_complete_request+0x8d/0x140
[  122.281387]  blk_mq_complete_request+0x16/0x20
[  122.286345]  nvme_process_cq+0xdd/0x1c0 [nvme]
[  122.291301]  nvme_irq+0x23/0x50 [nvme]
[  122.295485]  __handle_irq_event_percpu+0x3c/0x190
[  122.300725]  handle_irq_event_percpu+0x32/0x80
[  122.305683]  handle_irq_event+0x3b/0x60
[  122.309964]  handle_edge_irq+0x8f/0x190
[  122.314247]  handle_irq+0xab/0x120
[  122.318043]  do_IRQ+0x48/0xd0
[  122.321355]  common_interrupt+0x9d/0x9d
[  122.325625]  </IRQ>
[  122.327967] RIP: 0010:cpuidle_enter_state+0xe9/0x280
[  122.333504] RSP: 0018:ffffc90000133e68 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff35
[  122.341952] RAX: ffff88047fc1b900 RBX: ffff88047fc24400 RCX: 000000000000001f
[  122.349913] RDX: 0000000000000000 RSI: fffffcf2e6007295 RDI: 0000000000000000
[  122.357874] RBP: ffffc90000133ea0 R08: 000000000000062e R09: 0000000000000253
[  122.365836] R10: 0000000000000225 R11: 0000000000000018 R12: 0000000000000002
[  122.373797] R13: 0000000000000001 R14: ffff88047fc24400 R15: 0000001c6bd1d263
[  122.381762]  ? cpuidle_enter_state+0xc5/0x280
[  122.386623]  cpuidle_enter+0x17/0x20
[  122.390611]  call_cpuidle+0x23/0x40
[  122.394501]  do_idle+0x17e/0x1f0
[  122.398101]  cpu_startup_entry+0x73/0x80
[  122.402478]  start_secondary+0x178/0x1c0
[  122.406854]  secondary_startup_64+0xa5/0xa5
[  122.411520] Code: 48 8b 5f 68 48 8b 47 08 31 d2 4c 8b 5b 48 48 8b 80 d0 03 00 00 48 83 b8 48 02 00 00 00 48 8d 88 48 02 00 00 48 0f 45 d1 c1 ee 09 <0f> b6 4a 0a 0f b6 52 09 89 f0 48 01 73 08 83 e9 09 d3 e8 0f af
[  122.432604] RIP: bio_integrity_advance+0x3d/0xf0 RSP: ffff88047fc03d70
[  122.439888] CR2: 000000000000000a

Reported-by: Zhang Yi <yizhan at redhat.com>
Tested-by: Zhang Yi <yizhan at redhat.com>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Keith Busch <keith.busch at intel.com>
Cc: Johannes Thumshirn <jthumshirn at suse.de>
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
 drivers/nvme/host/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f837d666cbd4..b40780be0a49 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2965,8 +2965,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 		return;
 
 	if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
-		if (blk_get_integrity(ns->disk))
-			blk_integrity_unregister(ns->disk);
 		nvme_mpath_remove_disk_links(ns);
 		sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
 					&nvme_ns_id_attr_group);
@@ -2974,6 +2972,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 			nvme_nvm_unregister_sysfs(ns);
 		del_gendisk(ns->disk);
 		blk_cleanup_queue(ns->queue);
+		if (blk_get_integrity(ns->disk))
+			blk_integrity_unregister(ns->disk);
 	}
 
 	mutex_lock(&ns->ctrl->subsys->lock);
-- 
2.9.5