nvme: enable char device per namespace
Javier González
javier at javigon.com
Tue Jan 12 13:30:55 EST 2021
On 12.01.2021 18:22, Minwoo Im wrote:
>Hello Javier,
>
>I tested this patch based on nvme-5.11:
>
>[ 1.219747] BUG: unable to handle page fault for address: 0000000100000041
>[ 1.220518] #PF: supervisor read access in kernel mode
>[ 1.220582] #PF: error_code(0x0000) - not-present page
>[ 1.220582] PGD 0 P4D 0
>[ 1.220582] Oops: 0000 [#1] SMP PTI
>[ 1.220582] CPU: 0 PID: 7 Comm: kworker/u16:0 Not tainted 5.11.0-rc1+ #46
>[ 1.220582] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>[ 1.220582] Workqueue: nvme-wq nvme_scan_work
>[ 1.220582] RIP: 0010:nvme_ns_id_attrs_are_visible+0x10f/0x152
>[ 1.220582] Code: 81 7d d0 80 f9 a1 82 74 0a 48 81 7d d0 a0 f9 a1 82 75 50 48 8b 45 e8 48 89 45 f8 48 8b 45 f8 48 83 e8 60 48 8b 80 60 03 00 00 <48> 8b 40 40 48 3d e0 d1 4d 82 74 07 b8 00 00 00 00 eb 2e 48 8b 45
>[ 1.220582] RSP: 0000:ffffc90000047b70 EFLAGS: 00010282
>[ 1.220582] RAX: 0000000100000001 RBX: ffffffff824ddb20 RCX: 0000000000000124
>[ 1.220582] RDX: ffff8881026eac00 RSI: ffffffff82a1f980 RDI: ffff888102745058
>[ 1.220582] RBP: ffffc90000047ba8 R08: ffff888102948718 R09: 0000000000000000
>[ 1.220582] R10: 0000000000000000 R11: ffff888100465080 R12: ffff888102745058
>[ 1.220582] R13: ffff888102948600 R14: 0000000000000000 R15: ffffffff82a1f548
>[ 1.220582] FS: 0000000000000000(0000) GS:ffff88842fc00000(0000) knlGS:0000000000000000
>[ 1.220582] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>[ 1.220582] CR2: 0000000100000041 CR3: 000000000280c001 CR4: 0000000000370ef0
>[ 1.220582] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>[ 1.220582] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>[ 1.220582] Call Trace:
>[ 1.220582] internal_create_group+0xde/0x390
>[ 1.220582] internal_create_groups.part.4+0x3e/0xa0
>[ 1.220582] device_add+0x3cf/0x830
>[ 1.220582] ? cdev_get+0x20/0x20
>[ 1.220582] ? cdev_purge+0x60/0x60
>[ 1.220582] cdev_device_add+0x44/0x70
>[ 1.220582] ? cdev_init+0x50/0x60
>[ 1.220582] nvme_alloc_chardev_ns+0x187/0x1eb
>[ 1.220582] nvme_alloc_ns+0x367/0x460
>[ 1.220582] nvme_validate_or_alloc_ns+0xe2/0x139
>[ 1.220582] nvme_scan_ns_list+0x113/0x17a
>[ 1.220582] nvme_scan_work+0xa5/0x106
>[ 1.220582] process_one_work+0x1dd/0x3e0
>[ 1.220582] worker_thread+0x2d/0x3b0
>[ 1.220582] ? cancel_delayed_work+0x90/0x90
>[ 1.220582] kthread+0x117/0x130
>[ 1.220582] ? kthread_park+0x90/0x90
>[ 1.220582] ret_from_fork+0x22/0x30
>[ 1.220582] Modules linked in:
>[ 1.220582] CR2: 0000000100000041
>[ 1.220582] ---[ end trace b1f509a1bbfbc113 ]---
>[ 1.220582] RIP: 0010:nvme_ns_id_attrs_are_visible+0x10f/0x152
>[ 1.220582] Code: 81 7d d0 80 f9 a1 82 74 0a 48 81 7d d0 a0 f9 a1 82 75 50 48 8b 45 e8 48 89 45 f8 48 8b 45 f8 48 83 e8 60 48 8b 80 60 03 00 00 <48> 8b 40 40 48 3d e0 d1 4d 82 74 07 b8 00 00 00 00 eb 2e 48 8b 45
>[ 1.220582] RSP: 0000:ffffc90000047b70 EFLAGS: 00010282
>[ 1.220582] RAX: 0000000100000001 RBX: ffffffff824ddb20 RCX: 0000000000000124
>[ 1.220582] RDX: ffff8881026eac00 RSI: ffffffff82a1f980 RDI: ffff888102745058
>[ 1.220582] RBP: ffffc90000047ba8 R08: ffff888102948718 R09: 0000000000000000
>[ 1.220582] R10: 0000000000000000 R11: ffff888100465080 R12: ffff888102745058
>[ 1.220582] R13: ffff888102948600 R14: 0000000000000000 R15: ffffffff82a1f548
>[ 1.220582] FS: 0000000000000000(0000) GS:ffff88842fc00000(0000) knlGS:0000000000000000
>[ 1.220582] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>[ 1.220582] CR2: 0000000100000041 CR3: 000000000280c001 CR4: 0000000000370ef0
>[ 1.220582] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>[ 1.220582] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>
>And this happens when CONFIG_NVME_MULTIPATH=y is configured. Please refer
>to the attached log above :)
>
>Thanks!
I have not implemented multipath support, but it should definitely not
crash like this. I'll rebase to 5.11 and test.
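Looking at the trace before I do: the faulting address in CR2
(0x0000000100000041) is RAX (0x0000000100000001) plus the 0x40 offset of
the faulting mov 0x40(%rax),%rax in the Code line, so a garbage value is
being dereferenced as a pointer inside nvme_ns_id_attrs_are_visible().
That would be consistent with the sysfs visibility callback resolving the
namespace through dev_to_disk() on the new char device, whose struct
device is not embedded in a gendisk at all. A minimal sketch of the kind
of type-aware lookup I have in mind (the cdev_device field and
nvme_ns_chr_class names are placeholders, not taken from the patch):

	static struct nvme_ns *nvme_get_ns_from_dev(struct device *dev)
	{
		/* Char device: the struct device is embedded in nvme_ns. */
		if (dev->class == nvme_ns_chr_class)
			return container_of(dev, struct nvme_ns, cdev_device);

		/* Block device: the gendisk's private_data is the ns. */
		return dev_to_disk(dev)->private_data;
	}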
Christoph: Is it OK to send this without multipath support, or should I
send it all together?
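If the non-multipath version goes in first, an interim guard in the scan
path could keep the char device away from namespaces that are grouped
under a shared multipath head, so none of the unported sysfs code is ever
exercised. This is only a sketch of the idea, not code from the patch:

#ifdef CONFIG_NVME_MULTIPATH
	/*
	 * Sketch, not from the posted patch: skip the per-path char
	 * device when the namespace is grouped under a shared multipath
	 * head; a follow-up series could attach a chardev to the head
	 * itself once multipath support is actually implemented.
	 */
	if (ns->head->disk)
		return;
#endif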
Javier