[PATCHv3] nvme: authentication errors are always non-retryable

Hannes Reinecke hare at suse.de
Mon Feb 26 23:54:32 PST 2024


On 2/27/24 08:32, Daniel Wagner wrote:
> On Mon, Feb 26, 2024 at 03:30:13PM +0100, Hannes Reinecke wrote:
>> From: Hannes Reinecke <hare at suse.de>
>>
>> Any authentication errors which are generated internally are always
>> non-retryable, so set the DNR bit to ensure they are not retried.
>>
>> Signed-off-by: Hannes Reinecke <hare at suse.de>
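
For reference, the change boils down to ORing the DNR bit into the
internally generated authentication status, so the (re)connect logic
treats the failure as permanent instead of retrying it. A minimal
illustrative sketch (not the actual hunk; the helper name is made up,
only NVME_SC_AUTH_REQUIRED and NVME_SC_DNR come from
include/linux/nvme.h):

/* Hypothetical helper: flag an internal auth failure as non-retryable */
static inline u16 nvme_auth_error_status(void)
{
        /* DNR ("do not retry") stops the core from retrying the command */
        return NVME_SC_AUTH_REQUIRED | NVME_SC_DNR;
}
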
> 
> This replaces my hacky version 'nvme-fc: do not retry when auth fails or
> connection is refused'
> 
> Tested-by: Daniel Wagner <dwagner at suse.de>
> Reviewed-by: Daniel Wagner <dwagner at suse.de>
> 
> But with this patch at least two UAFs are uncovered. I've already
> identified the first one (see my comments on v2 of this patch). The
> second one gets triggered by the loop transport on nvme/045:
> 
> [47923.100856] [10844] nvmet: nvmet_execute_auth_send: ctrl 1 qid 0 type 0 id 0 step 4
> [47923.102798] [10844] nvmet: nvmet_execute_auth_send: ctrl 1 qid 0 reset negotiation
> [47923.104447] [10844] nvmet: check nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
> [47923.106278] [10844] nvmet: nvmet_setup_dhgroup: ctrl 1 selecting dhgroup 1
> [47923.107896] [10844] nvmet: nvmet_setup_dhgroup: ctrl 1 reuse existing DH group 1
> [47923.109624] [10844] nvmet: Re-use existing hash ID 1
> [47923.115167] ==================================================================
> [47923.117175] BUG: KASAN: slab-use-after-free in base64_decode+0x10e/0x170
> [47923.117280] Read of size 1 at addr ffff88810a1d360a by task kworker/2:14/10844
> 
> [47923.119954] CPU: 2 PID: 10844 Comm: kworker/2:14 Tainted: G        W    L     6.8.0-rc3+ #39 3d0b6128d1ea3c6026a2c1de70ba6c7dc10623c3
> [47923.119954] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022
> [47923.123853] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [47923.123853] Call Trace:
> [47923.123853]  <TASK>
> [47923.123853]  dump_stack_lvl+0x5b/0x80
> [47923.127843]  print_report+0x163/0x800
> [47923.127843]  ? __virt_addr_valid+0x2f3/0x340
> [47923.127843]  ? base64_decode+0x10e/0x170
> [47923.131370]  kasan_report+0xd0/0x110
> [47923.131370]  ? base64_decode+0x10e/0x170
> [47923.131370]  base64_decode+0x10e/0x170
> [47923.131370]  nvme_auth_extract_key+0xbd/0x290 [nvme_auth c9862b6e632ff3757fc5af136ad323c2fd7ec3cc]
> [47923.131370]  nvmet_setup_auth+0x3e6/0x980 [nvmet 5699c49016b7caa62e59f3ad4cb1d5fc35e0accf]
> [47923.135839]  nvmet_execute_auth_send+0x5f6/0x1890 [nvmet 5699c49016b7caa62e59f3ad4cb1d5fc35e0accf]
> [47923.135839]  ? process_scheduled_works+0x6d4/0xf80
> [47923.138829]  process_scheduled_works+0x774/0xf80
> [47923.140513]  worker_thread+0x8c4/0xfc0
> [47923.140513]  ? __kthread_parkme+0x84/0x120
> [47923.145904]  kthread+0x25d/0x2e0
> [47923.145904]  ? __cfi_worker_thread+0x10/0x10
> [47923.145904]  ? __cfi_kthread+0x10/0x10
> [47923.145904]  ret_from_fork+0x41/0x70
> [47923.145904]  ? __cfi_kthread+0x10/0x10
> [47923.145904]  ret_from_fork_asm+0x1b/0x30
> [47923.151823]  </TASK>
> 
> [47923.153891] Allocated by task 14645 on cpu 2 at 47922.532579s:
> [47923.153891]  kasan_save_track+0x2c/0x90
> [47923.153891]  __kasan_kmalloc+0x89/0xa0
> [47923.153891]  __kmalloc_node_track_caller+0x23d/0x4e0
> [47923.153891]  kstrdup+0x34/0x60
> [47923.153891]  nvmet_auth_set_key+0xa7/0x2a0 [nvmet]
> [47923.159060]  nvmet_host_dhchap_key_store+0x10/0x20 [nvmet]
> [47923.159848]  configfs_write_iter+0x2ea/0x3c0
> [47923.159848]  vfs_write+0x80c/0xb60
> [47923.159848]  ksys_write+0xd7/0x1a0
> [47923.162639]  do_syscall_64+0xb1/0x180
> [47923.162639]  entry_SYSCALL_64_after_hwframe+0x6e/0x76
> 
> [47923.164392] Freed by task 14645 on cpu 1 at 47923.114374s:
> [47923.164392]  kasan_save_track+0x2c/0x90
> [47923.164392]  kasan_save_free_info+0x4a/0x60
> [47923.164392]  poison_slab_object+0x108/0x180
> [47923.164392]  __kasan_slab_free+0x33/0x80
> [47923.164392]  kfree+0x119/0x310
> [47923.164392]  nvmet_auth_set_key+0x175/0x2a0 [nvmet]
> [47923.164392]  nvmet_host_dhchap_key_store+0x10/0x20 [nvmet]
> [47923.164392]  configfs_write_iter+0x2ea/0x3c0
> [47923.164392]  vfs_write+0x80c/0xb60
> [47923.164392]  ksys_write+0xd7/0x1a0
> [47923.164392]  do_syscall_64+0xb1/0x180
> [47923.164392]  entry_SYSCALL_64_after_hwframe+0x6e/0x76
> 
> [47923.176385] The buggy address belongs to the object at ffff88810a1d3600
>                  which belongs to the cache kmalloc-64 of size 64
> [47923.178122] The buggy address is located 10 bytes inside of
>                  freed 64-byte region [ffff88810a1d3600, ffff88810a1d3640)
> 
> [47923.178122] The buggy address belongs to the physical page:
> [47923.178122] page:00000000651bcfd3 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10a1d3
> [47923.178122] ksm flags: 0x17ffffc0000800(slab|node=0|zone=2|lastcpupid=0x1fffff)
> [47923.184115] page_type: 0xffffffff()
> [47923.184115] raw: 0017ffffc0000800 ffff888100042780 ffffea00044e7dc0 dead000000000003
> [47923.184115] raw: 0000000000000000 0000000000150015 00000001ffffffff 0000000000000000
> [47923.187818] page dumped because: kasan: bad access detected
> 
> [47923.187818] Memory state around the buggy address:
> [47923.187818]  ffff88810a1d3500: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
> [47923.187818]  ffff88810a1d3580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [47923.192381] >ffff88810a1d3600: fa fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
> [47923.192381]                       ^
> [47923.192381]  ffff88810a1d3680: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
> [47923.192381]  ffff88810a1d3700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [47923.192381] ==================================================================

Ouch.
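
Reading the report: the secret is kstrdup()ed and later kfree()d by
nvmet_auth_set_key() from a configfs write, while the nvmet-wq worker
is still parsing the old pointer via nvmet_setup_auth() ->
nvme_auth_extract_key() -> base64_decode(). Roughly (timeline
reconstructed from the allocation/free tracks above, not actual code):

   CPU 1 (configfs write, task 14645)    CPU 2 (nvmet-wq, kworker/2:14)
   ----------------------------------    ------------------------------
   nvmet_auth_set_key()
     kfree(host->dhchap_secret)          nvmet_setup_auth()
     host->dhchap_secret = new copy        nvme_auth_extract_key()
                                             base64_decode(stale secret)  <= UAF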

Does this help?

diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
index 3ddbc3880cac..9afc28f1ffac 100644
--- a/drivers/nvme/target/auth.c
+++ b/drivers/nvme/target/auth.c
@@ -44,6 +44,7 @@ int nvmet_auth_set_key(struct nvmet_host *host, const char *secret,
         dhchap_secret = kstrdup(secret, GFP_KERNEL);
         if (!dhchap_secret)
                 return -ENOMEM;
+       down_write(&nvmet_config_sem);
         if (set_ctrl) {
                 kfree(host->dhchap_ctrl_secret);
                 host->dhchap_ctrl_secret = strim(dhchap_secret);
@@ -53,6 +54,7 @@ int nvmet_auth_set_key(struct nvmet_host *host, const char *secret,
                 host->dhchap_secret = strim(dhchap_secret);
                 host->dhchap_key_hash = key_hash;
         }
+       up_write(&nvmet_config_sem);
         return 0;
  }
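
The idea is the usual rw-semaphore pattern around a replaceable
pointer: the configfs store path takes the semaphore for writing
around the kfree()+replace, and whoever dereferences the secret takes
it for reading (or works on a private copy). A generic, self-contained
sketch of that pattern - hypothetical names, not the nvmet code:

static DECLARE_RWSEM(cfg_sem);  /* stands in for nvmet_config_sem */
static char *secret;            /* stands in for host->dhchap_secret */

/* writer side: configfs ->store() replacing the secret */
static int set_secret(const char *buf)
{
        char *copy = kstrdup(buf, GFP_KERNEL);

        if (!copy)
                return -ENOMEM;
        down_write(&cfg_sem);
        kfree(secret);
        secret = copy;
        up_write(&cfg_sem);
        return 0;
}

/* reader side: parse the secret without racing a concurrent replace */
static int use_secret(void)
{
        int ret;

        down_read(&cfg_sem);
        /* parse_secret() is a stand-in for nvme_auth_extract_key() etc. */
        ret = secret ? parse_secret(secret) : -ENOKEY;
        up_read(&cfg_sem);
        return ret;
}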

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare at suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



