[bug report] kmemleak observed with blktests nvme/tcp

Sagi Grimberg sagi at grimberg.me
Mon Apr 22 03:46:02 PDT 2024



On 22/04/2024 7:59, Yi Zhang wrote:
> On Sun, Apr 21, 2024 at 6:31 PM Sagi Grimberg <sagi at grimberg.me> wrote:
>>
>>
>> On 16/04/2024 6:19, Chaitanya Kulkarni wrote:
>>> +linux-nvme list for awareness ...
>>>
>>> -ck
>>>
>>>
>>> On 4/6/24 17:38, Yi Zhang wrote:
>>>> Hello
>>>>
>>>> I found a kmemleak issue after running the blktests nvme/tcp tests on
>>>> the latest linux-block/for-next. Please help check it, and let me know
>>>> if you need any info/testing for it. Thanks.
>>> it will help others if you specify which testcase you are using ...
>>>
>>>> # dmesg | grep kmemleak
>>>> [ 2580.572467] kmemleak: 92 new suspected memory leaks (see
>>>> /sys/kernel/debug/kmemleak)
>>>>
>>>> # cat kmemleak.log
>>>> unreferenced object 0xffff8885a1abe740 (size 32):
>>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
>>>>      hex dump (first 32 bytes):
>>>>        c2 4a 4a 04 00 ea ff ff 00 00 00 00 00 10 00 00  .JJ.............
>>>>        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>>>>      backtrace (crc 6328eade):
>>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
>>>>        [<ffffffffa86a9b1f>] sgl_alloc_order+0x7f/0x360
>>>>        [<ffffffffc261f6c5>] lo_read_simple+0x1d5/0x5b0 [loop]
>>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
>>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
>>>>        [<ffffffffc262a881>] 0xffffffffc262a881
>>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
>>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
>>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
>>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
>>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
>>>> unreferenced object 0xffff88a8b03647c0 (size 16):
>>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
>>>>      hex dump (first 16 bytes):
>>>>        c0 4a 4a 04 00 ea ff ff 00 10 00 00 00 00 00 00  .JJ.............
>>>>      backtrace (crc 860ce62b):
>>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
>>>>        [<ffffffffc261f805>] lo_read_simple+0x315/0x5b0 [loop]
>>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
>>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
>>>>        [<ffffffffc262a881>] 0xffffffffc262a881
>>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
>>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
>>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
>>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
>>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
>> kmemleak suggests that the leak is coming from lo_read_simple(). Is
>> this a regression that can be bisected?
>>
> It's not a regression; I tried 6.7 and it can also be reproduced there.

It's strange that the backtrace makes it look like lo_read_simple() is
allocating the sgl; it is probably nvmet-tcp though. The unresolved
0xffffffffc... frames are module addresses kmemleak could no longer
resolve, so the [loop] attribution may well be stale symbolization from
modules unloaded and reloaded between the allocation and the scan.
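
For context, a simplified sketch of the allocation side, paraphrasing
nvmet_tcp_map_data() from memory (so not necessarily the exact code in
the tree under test), lines up with the two leaked objects above: the
sgl comes from sgl_alloc() (which goes through sgl_alloc_order() and
__kmalloc(); the size-32 object), and the iov array is the size-16
object:
--
/*
 * Sketch only: each command maps its data by allocating an sgl and,
 * for data-in commands, an iov array. Both are supposed to be released
 * by nvmet_tcp_free_cmd_buffers(); if the queue is torn down while a
 * command still holds them and the teardown state-check skips it, they
 * leak.
 */
static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd)
{
	u32 len = le32_to_cpu(cmd->req.cmd->common.dptr.sgl.length);

	cmd->req.sg = sgl_alloc(len, GFP_KERNEL, &cmd->req.sg_cnt);
	if (!cmd->req.sg)
		return NVME_SC_INTERNAL;
	cmd->cur_sg = cmd->req.sg;

	if (nvmet_tcp_has_data_in(cmd)) {
		cmd->iov = kmalloc_array(cmd->req.sg_cnt,
				sizeof(*cmd->iov), GFP_KERNEL);
		if (!cmd->iov)
			goto err;
	}
	return 0;
err:
	nvmet_tcp_free_cmd_buffers(cmd);
	return NVME_SC_INTERNAL;
}
--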

Can you try with the patch below:
--
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index a5422e2c979a..bfd1cf7cc1c2 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -348,6 +348,7 @@ static int nvmet_tcp_check_ddgst(struct nvmet_tcp_queue *queue, void *pdu)
         return 0;
  }

+/* safe to call multiple times */
  static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
  {
         kfree(cmd->iov);
@@ -1581,13 +1582,9 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
         struct nvmet_tcp_cmd *cmd = queue->cmds;
         int i;

-       for (i = 0; i < queue->nr_cmds; i++, cmd++) {
-               if (nvmet_tcp_need_data_in(cmd))
-                       nvmet_tcp_free_cmd_buffers(cmd);
-       }
-
-       if (!queue->nr_cmds && nvmet_tcp_need_data_in(&queue->connect))
-               nvmet_tcp_free_cmd_buffers(&queue->connect);
+       for (i = 0; i < queue->nr_cmds; i++, cmd++)
+               nvmet_tcp_free_cmd_buffers(cmd);
+       nvmet_tcp_free_cmd_buffers(&queue->connect);
  }

  static void nvmet_tcp_release_queue_work(struct work_struct *w)
--
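
For the record, this relies on nvmet_tcp_free_cmd_buffers() clearing
its pointers after freeing (which is what the added "safe to call
multiple times" comment captures), plus kfree(NULL) and sgl_free(NULL)
being no-ops. If I remember the function body correctly, it is roughly:
--
static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
{
	kfree(cmd->iov);        /* kfree(NULL) is a no-op */
	sgl_free(cmd->req.sg);  /* as is sgl_free(NULL) */
	cmd->iov = NULL;        /* reset so a repeated call does nothing */
	cmd->req.sg = NULL;
}
--
With that, teardown can free every command's buffers unconditionally
instead of only the ones that still pass nvmet_tcp_need_data_in(),
which is the state check the leaked commands were presumably slipping
through.

To verify, re-run the reproducer and force a kmemleak scan (standard
kmemleak debugfs interface; assuming the tcp transport is selected via
the blktests nvme_trtype variable):
--
# nvme_trtype=tcp ./check nvme
# echo scan > /sys/kernel/debug/kmemleak
# cat /sys/kernel/debug/kmemleak
--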


