[PATCH] nvme: fix admin request_queue lifetime

Casey Chen cachen at purestorage.com
Tue Nov 4 15:22:27 PST 2025


Looks good. Thanks.

On Tue, Nov 4, 2025 at 3:08 PM Chaitanya Kulkarni <chaitanyak at nvidia.com> wrote:
>
> On 11/4/25 14:59, Keith Busch wrote:
> > From: Keith Busch <kbusch at kernel.org>
> >
> > Namespaces can access the controller's admin request_queue, and stale
> > references to the namespaces may still exist. Keep the request_queue
> > alive by moving the controller's final 'put' until after all references
> > on the controller have been released, so that no one can still be
> > accessing the request_queue; a sketch of this ordering follows the
> > quoted thread below. This fixes a reported use-after-free bug:
> >
> >    BUG: KASAN: slab-use-after-free in blk_queue_enter+0x41c/0x4a0
> >    Read of size 8 at addr ffff88c0a53819f8 by task nvme/3287
> >    CPU: 67 UID: 0 PID: 3287 Comm: nvme Tainted: G            E       6.13.2-ga1582f1a031e #15
> >    Tainted: [E]=UNSIGNED_MODULE
> >    Hardware name: Jabil /EGS 2S MB1, BIOS 1.00 06/18/2025
> >    Call Trace:
> >     <TASK>
> >     dump_stack_lvl+0x4f/0x60
> >     print_report+0xc4/0x620
> >     ? _raw_spin_lock_irqsave+0x70/0xb0
> >     ? _raw_read_unlock_irqrestore+0x30/0x30
> >     ? blk_queue_enter+0x41c/0x4a0
> >     kasan_report+0xab/0xe0
> >     ? blk_queue_enter+0x41c/0x4a0
> >     blk_queue_enter+0x41c/0x4a0
> >     ? __irq_work_queue_local+0x75/0x1d0
> >     ? blk_queue_start_drain+0x70/0x70
> >     ? irq_work_queue+0x18/0x20
> >     ? vprintk_emit.part.0+0x1cc/0x350
> >     ? wake_up_klogd_work_func+0x60/0x60
> >     blk_mq_alloc_request+0x2b7/0x6b0
> >     ? __blk_mq_alloc_requests+0x1060/0x1060
> >     ? __switch_to+0x5b7/0x1060
> >     nvme_submit_user_cmd+0xa9/0x330
> >     nvme_user_cmd.isra.0+0x240/0x3f0
> >     ? force_sigsegv+0xe0/0xe0
> >     ? nvme_user_cmd64+0x400/0x400
> >     ? vfs_fileattr_set+0x9b0/0x9b0
> >     ? cgroup_update_frozen_flag+0x24/0x1c0
> >     ? cgroup_leave_frozen+0x204/0x330
> >     ? nvme_ioctl+0x7c/0x2c0
> >     blkdev_ioctl+0x1a8/0x4d0
> >     ? blkdev_common_ioctl+0x1930/0x1930
> >     ? fdget+0x54/0x380
> >     __x64_sys_ioctl+0x129/0x190
> >     do_syscall_64+0x5b/0x160
> >     entry_SYSCALL_64_after_hwframe+0x4b/0x53
> >    RIP: 0033:0x7f765f703b0b
> >    Code: ff ff ff 85 c0 79 9b 49 c7 c4 ff ff ff ff 5b 5d 4c 89 e0 41 5c c3 66 0f 1f 84 00 00 00 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d dd 52 0f 00 f7 d8 64 89 01 48
> >    RSP: 002b:00007ffe2cefe808 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
> >    RAX: ffffffffffffffda RBX: 00007ffe2cefe860 RCX: 00007f765f703b0b
> >    RDX: 00007ffe2cefe860 RSI: 00000000c0484e41 RDI: 0000000000000003
> >    RBP: 0000000000000000 R08: 0000000000000003 R09: 0000000000000000
> >    R10: 00007f765f611d50 R11: 0000000000000202 R12: 0000000000000003
> >    R13: 00000000c0484e41 R14: 0000000000000001 R15: 00007ffe2cefea60
> >     </TASK>
> >
> > Reported-by: Casey Chen <cachen at purestorage.com>
> > Signed-off-by: Keith Busch <kbusch at kernel.org>
> > ---
>
>
> Looks good.
>
> Reviewed-by: Chaitanya Kulkarni <kch at nvidia.com>
>
> -ck
>
>
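
For illustration, here is a minimal userspace sketch of the lifetime
rule the patch enforces. It is not the kernel change itself:
ctrl_get(), ctrl_put(), the 'alive' flag, and ns_admin_ioctl() are
hypothetical stand-ins for the kernel's kref/put_device machinery,
ctrl->admin_q, and the nvme_submit_user_cmd() ->
blk_mq_alloc_request() path seen in the trace above.

/*
 * Minimal sketch of the lifetime rule: the admin queue is torn down
 * only on the *final* put, after every namespace reference is gone.
 * All names here are illustrative stand-ins, not kernel APIs.
 */
#include <stdio.h>
#include <stdlib.h>

struct queue { int alive; };

struct ctrl {
	int refs;              /* models the controller's kref      */
	struct queue *admin_q; /* models ctrl->admin_q              */
};

static void ctrl_get(struct ctrl *c) { c->refs++; }

static void ctrl_put(struct ctrl *c)
{
	if (--c->refs)
		return;
	/* Final put: only now is it safe to tear down the admin
	 * queue, because no namespace reference remains. */
	c->admin_q->alive = 0;
	free(c->admin_q);
	free(c);
}

/* Stand-in for a namespace ioctl touching the admin queue, as in
 * nvme_submit_user_cmd() -> blk_mq_alloc_request() in the trace. */
static void ns_admin_ioctl(struct ctrl *c)
{
	printf(c->admin_q->alive ? "admin command submitted\n"
				 : "use-after-free!\n");
}

int main(void)
{
	struct ctrl *c = calloc(1, sizeof(*c));

	c->admin_q = calloc(1, sizeof(*c->admin_q));
	c->admin_q->alive = 1;
	c->refs = 1;       /* reference held by the teardown path   */

	ctrl_get(c);       /* a namespace still holds a reference   */
	ctrl_put(c);       /* teardown's put: queue must survive    */
	ns_admin_ioctl(c); /* safe: the queue is still alive        */
	ctrl_put(c);       /* namespace's put frees queue and ctrl  */
	return 0;
}

With the old ordering (tearing down the queue on the teardown path's
put, before the namespace's reference was dropped), ns_admin_ioctl()
would hit exactly the blk_queue_enter() use-after-free that KASAN
reports above.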


