[bug report] nvme_auth: kmemleak observed with blktests nvme/tcp nvme/062
Yi Zhang
yi.zhang at redhat.com
Fri Apr 25 00:31:10 PDT 2025
On Thu, Apr 24, 2025 at 9:38 PM Hannes Reinecke <hare at suse.de> wrote:
>
> On 4/24/25 14:53, Yi Zhang wrote:
> > Hi
> > I found this kmemleak when running blktests on the latest
> > linux-block/for-next. Please help check it, and let me know if you need
> > any test/info for it. Thanks.
> >
> > # nvme_trtype=tcp ./check nvme/063
> > nvme/063 (tr=tcp) (Create authenticated TCP connections with secure
> > concatenation)
> > runtime 8.748s ...
> > WARNING: Test did not clean up tcp device: nvme6
> > WARNING: Test did not clean up port: 0
> > WARNING: Test did not clean up subsystem: blktests-subsystem-1
> > rmdir: failed to remove
> > '/sys/kernel/config/nvmet//subsystems/blktests-subsystem-1': Directory
> > not empty
> > nvme/063 (tr=tcp) (Create authenticated TCP connections with secure
> > concatenation)                               [failed]
> >     runtime  8.748s  ...  8.261s
> > rmdir: failed to remove
> > '/sys/kernel/config/nvmet//hosts/nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349':
> > Device or resource busy
> > --- tests/nvme/063.out 2025-04-20 21:27:21.084101153 -0400
> > +++ /root/blktests/results/nodev_tr_tcp/nvme/063.out.bad
> > 2025-04-24 08:47:10.951187723 -0400
> > @@ -3,5 +3,4 @@
> > Reset controller
> > disconnected 1 controller(s)
> > Test secure concatenation with SHA384
> > -disconnected 1 controller(s)
> > -Test complete
> > +WARNING: connection is not encrypted
> > WARNING: Test did not clean up subsystem: blktests-subsystem-1
> > rmdir: failed to remove
> > '/sys/kernel/config/nvmet//subsystems/blktests-subsystem-1': Directory
> > not empty
> > WARNING: Test did not clean up host:
> > nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
> > rmdir: failed to remove
> > '/sys/kernel/config/nvmet//hosts/nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349':
> > Device or resource busy
> >
> > # cat /sys/kernel/debug/kmemleak
> > unreferenced object 0xffff8964e25a4180 (size 32):
> > comm "kworker/13:1H", pid 576, jiffies 4295435801
> > hex dump (first 32 bytes):
> > f1 c3 44 62 7d b7 12 57 34 f6 0d 61 33 f6 d0 63 ..Db}..W4..a3..c
> > 5d 27 ff 34 d4 2f da 08 04 9c 32 f2 e4 fe 4f a6 ]'.4./....2...O.
> > backtrace (crc 6e2b0dcd):
> > __kmalloc_noprof+0x379/0x4a0
> > nvme_auth_derive_tls_psk+0x180/0xde0 [nvme_auth]
> > nvmet_auth_insert_psk+0xc8/0x210 [nvmet]
> > nvmet_auth_reply+0x39d/0x3b0 [nvmet]
> > nvmet_execute_auth_send+0x488/0x550 [nvmet]
> > 0xffffffffc07f04f1
> > process_one_work+0x25b/0x6b0
> > worker_thread+0x19a/0x350
> > kthread+0x11b/0x260
> > ret_from_fork+0x30/0x50
> > ret_from_fork_asm+0x1a/0x30
> > unreferenced object 0xffff896560245800 (size 32):
> > comm "kworker/0:1H", pid 560, jiffies 4295437983
> > hex dump (first 32 bytes):
> > 22 92 b5 6a 0c 4c ab 38 fa 6a c4 f7 32 91 ff 4f "..j.L.8.j..2..O
> > b2 e8 ab 92 52 c8 99 fe c8 f0 1d 53 cb b8 3d ff ....R......S..=.
> > backtrace (crc 2f0a5d3c):
> > __kmalloc_noprof+0x379/0x4a0
> > nvme_auth_derive_tls_psk+0x180/0xde0 [nvme_auth]
> > nvmet_auth_insert_psk+0xc8/0x210 [nvmet]
> > nvmet_auth_reply+0x39d/0x3b0 [nvmet]
> > nvmet_execute_auth_send+0x488/0x550 [nvmet]
> > 0xffffffffc07f04f1
> > process_one_work+0x25b/0x6b0
> > worker_thread+0x19a/0x350
> > kthread+0x11b/0x260
> > ret_from_fork+0x30/0x50
> > ret_from_fork_asm+0x1a/0x30
> > unreferenced object 0xffff8965c134e300 (size 64):
> > comm "kworker/6:2H", pid 2766, jiffies 4295441704
> > hex dump (first 32 bytes):
> > bb 40 af 2e cc 9b c9 cf b6 a9 f1 c8 63 12 be 3e .@..........c..>
> > 82 75 8f b0 c1 af 3d ef 9b 5e 88 2e c1 ac 0f 85 .u....=..^......
> > backtrace (crc ac0b7882):
> > __kmalloc_noprof+0x379/0x4a0
> > nvme_auth_derive_tls_psk+0x180/0xde0 [nvme_auth]
> > nvmet_auth_insert_psk+0xc8/0x210 [nvmet]
> > nvmet_auth_reply+0x39d/0x3b0 [nvmet]
> > nvmet_execute_auth_send+0x488/0x550 [nvmet]
> > 0xffffffffc07f04f1
> > process_one_work+0x25b/0x6b0
> > worker_thread+0x19a/0x350
> > kthread+0x11b/0x260
> > ret_from_fork+0x30/0x50
> > ret_from_fork_asm+0x1a/0x30
> >
> >
> > (gdb) l *(nvme_auth_derive_tls_psk+0x180)
> > 0x13a0 is in nvme_auth_derive_tls_psk (drivers/nvme/common/auth.c:789).
> > 784             put_unaligned_be16(psk_len, info);
> > 785             memcpy(info + 2, psk_prefix, strlen(psk_prefix));
> > 786             sprintf(info + 2 + strlen(psk_prefix), "%02d %s", hmac_id, psk_digest);
> > 787
> > 788             tls_key = kzalloc(psk_len, GFP_KERNEL);
> > 789             if (!tls_key) {
> > 790                     ret = -ENOMEM;
> > 791                     goto out_free_info;
> > 792             }
> > 793             ret = hkdf_expand(hmac_tfm, info, info_len, tls_key, psk_len);
> >
> >
> >
> Can you try this patch?
Hi Hannes
The kmemleak can still be reproduced with that change.
I also tried the change below provided by Maurizio today, and it fixed
the kmemleak (my reading of why is sketched after the diff).
diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
index cef8d77f477b..fd167fc9bad4 100644
--- a/drivers/nvme/target/auth.c
+++ b/drivers/nvme/target/auth.c
@@ -606,7 +606,7 @@ void nvmet_auth_insert_psk(struct nvmet_sq *sq)
                 key_put(sq->ctrl->tls_key);
         sq->ctrl->tls_key = tls_key;
 #endif
-
+        kfree_sensitive(tls_psk);
 out_free_digest:
         kfree_sensitive(digest);
 out_free_psk:
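
If I read the code correctly (just my understanding; the sketch below is
illustrative only, with made-up names like derive_psk()/insert_psk(), and
is not the real drivers/nvme/target/auth.c), nvme_auth_derive_tls_psk()
kzalloc()s the derived PSK and hands the buffer back to its caller, so
nvmet_auth_insert_psk() owns it and has to free it after the keyring key
has been set up. Freeing it only in an #else branch, as in the patch
quoted below, still leaks the buffer whenever the TLS config option
(CONFIG_NVME_TARGET_TCP_TLS, I assume) is enabled, which is exactly what
the nvme/063 secure-concatenation run exercises.

/* Illustrative sketch of the ownership pattern, not the actual code. */
static int derive_psk(u8 **ret_psk, size_t psk_len)
{
        u8 *psk = kzalloc(psk_len, GFP_KERNEL);  /* caller takes ownership */

        if (!psk)
                return -ENOMEM;
        /* ... hkdf_expand() fills the buffer ... */
        *ret_psk = psk;
        return 0;
}

static void insert_psk(struct nvmet_sq *sq)
{
        u8 *tls_psk;

        if (derive_psk(&tls_psk, 32))
                return;
        /* The PSK bytes get copied into a keyring key here, so the raw
         * buffer is no longer needed in either configuration. */
#ifdef CONFIG_NVME_TARGET_TCP_TLS
        /* ... install the key into sq->ctrl->tls_key ... */
#endif
        kfree_sensitive(tls_psk);  /* unconditional free, as in the diff above */
}

That matches what I see in testing: only the unconditional free makes the
kmemleak go away.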
>
> diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c
> index e7d82bc32f41..1ed606892a8a 100644
> --- a/drivers/nvme/target/auth.c
> +++ b/drivers/nvme/target/auth.c
> @@ -669,6 +669,8 @@ void nvmet_auth_insert_psk(struct nvmet_sq *sq)
>          if (sq->ctrl->tls_key)
>                  key_put(sq->ctrl->tls_key);
>          sq->ctrl->tls_key = tls_key;
> +#else
> +        kfree_sensitive(tls_psk);
>  #endif
> 
>  out_free_digest:
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                  Kernel Storage Architect
> hare at suse.de                        +49 911 74053 688
> SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
> HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
>
--
Best Regards,
Yi Zhang