WARNING: CPU: 3 PID: 261 at kernel/bpf/memalloc.c:342

Hou Tao houtao at huaweicloud.com
Fri Aug 25 20:48:13 PDT 2023


Hi,

On 8/25/2023 11:28 PM, Yonghong Song wrote:
>
>
> On 8/25/23 3:32 AM, Björn Töpel wrote:
>> I'm chasing a workqueue hang on RISC-V/qemu (TCG), using the bpf
>> selftests on bpf-next 9e3b47abeb8f.
>>
>> I'm able to reproduce the hang by multiple runs of:
>>   | ./test_progs -a link_api -a linked_list
>> I'm currently investigating that.
>>
>> But! Sometimes (every blue moon) I get a warn_on_once hit:
>>   | ------------[ cut here ]------------
>>   | WARNING: CPU: 3 PID: 261 at kernel/bpf/memalloc.c:342
>> bpf_mem_refill+0x1fc/0x206
>>   | Modules linked in: bpf_testmod(OE)
>>   | CPU: 3 PID: 261 Comm: test_progs-cpuv Tainted: G           OE   
>> N 6.5.0-rc5-01743-gdcb152bb8328 #2
>>   | Hardware name: riscv-virtio,qemu (DT)
>>   | epc : bpf_mem_refill+0x1fc/0x206
>>   |  ra : irq_work_single+0x68/0x70
>>   | epc : ffffffff801b1bc4 ra : ffffffff8015fe84 sp : ff2000000001be20
>>   |  gp : ffffffff82d26138 tp : ff6000008477a800 t0 : 0000000000046600
>>   |  t1 : ffffffff812b6ddc t2 : 0000000000000000 s0 : ff2000000001be70
>>   |  s1 : ff5ffffffffe8998 a0 : ff5ffffffffe8998 a1 : ff600003fef4b000
>>   |  a2 : 000000000000003f a3 : ffffffff80008250 a4 : 0000000000000060
>>   |  a5 : 0000000000000080 a6 : 0000000000000000 a7 : 0000000000735049
>>   |  s2 : ff5ffffffffe8998 s3 : 0000000000000022 s4 : 0000000000001000
>>   |  s5 : 0000000000000007 s6 : ff5ffffffffe8570 s7 : ffffffff82d6bd30
>>   |  s8 : 000000000000003f s9 : ffffffff82d2c5e8 s10: 000000000000ffff
>>   |  s11: ffffffff82d2c5d8 t3 : ffffffff81ea8f28 t4 : 0000000000000000
>>   |  t5 : ff6000008fd28278 t6 : 0000000000040000
>>   | status: 0000000200000100 badaddr: 0000000000000000 cause:
>> 0000000000000003
>>   | [<ffffffff801b1bc4>] bpf_mem_refill+0x1fc/0x206
>>   | [<ffffffff8015fe84>] irq_work_single+0x68/0x70
>>   | [<ffffffff8015feb4>] irq_work_run_list+0x28/0x36
>>   | [<ffffffff8015fefa>] irq_work_run+0x38/0x66
>>   | [<ffffffff8000828a>] handle_IPI+0x3a/0xb4
>>   | [<ffffffff800a5c3a>] handle_percpu_devid_irq+0xa4/0x1f8
>>   | [<ffffffff8009fafa>] generic_handle_domain_irq+0x28/0x36
>>   | [<ffffffff800ae570>] ipi_mux_process+0xac/0xfa
>>   | [<ffffffff8000a8ea>] sbi_ipi_handle+0x2e/0x88
>>   | [<ffffffff8009fafa>] generic_handle_domain_irq+0x28/0x36
>>   | [<ffffffff807ee70e>] riscv_intc_irq+0x36/0x4e
>>   | [<ffffffff812b5d3a>] handle_riscv_irq+0x54/0x86
>>   | [<ffffffff812b6904>] do_irq+0x66/0x98
>>   | ---[ end trace 0000000000000000 ]---
>>
>> Code:
>>   | static void free_bulk(struct bpf_mem_cache *c)
>>   | {
>>   |     struct bpf_mem_cache *tgt = c->tgt;
>>   |     struct llist_node *llnode, *t;
>>   |     unsigned long flags;
>>   |     int cnt;
>>   |
>>   |     WARN_ON_ONCE(tgt->unit_size != c->unit_size);
>>   | ...
>>
>> I'm not well versed in the memory allocator; Before I dive into it --
>> has anyone else hit it? Ideas on why the warn_on_once is hit?
>
> Maybe take a look at the patch
>   822fb26bdb55  bpf: Add a hint to allocated objects.
>
> In the above patch, we have
>
> +       /*
> +        * Remember bpf_mem_cache that allocated this object.
> +        * The hint is not accurate.
> +        */
> +       c->tgt = *(struct bpf_mem_cache **)llnode;
>
> I suspect that the warning may be related to the above.
> I tried the above ./test_progs command line (running multiple
> at the same time) and didn't trigger the issue.

The extra 8 bytes before the freed pointer are used to save the pointer of
the original bpf_mem_cache from which the pointer was allocated, so
unit_free() can free the pointer back to that original cache and prevent
an alloc-and-free imbalance between caches.
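
For reference, the layout looks roughly like this (an illustrative sketch,
not verbatim kernel code; hint_of() is a made-up helper that simply mirrors
the read done by the debug patch below):

  /* Objects handed out by the BPF memory allocator carry an extra
   * LLIST_NODE_SZ header (sizeof(struct llist_node), i.e. 8 bytes on
   * 64-bit) in front of the pointer returned to the caller.  Those
   * 8 bytes store the bpf_mem_cache the object came from, and
   * unit_free() reads them back (into c->tgt) so that the object can
   * be returned to its original cache.
   */
  static struct bpf_mem_cache *hint_of(void *ptr)
  {
          /* ptr is the pointer handed to bpf_mem_free()/bpf_obj_drop() */
          return *(struct bpf_mem_cache **)(ptr - LLIST_NODE_SZ);
  }

If something scribbles over those 8 bytes while the object is live, the
hint becomes garbage and free_bulk() later trips the
WARN_ON_ONCE(tgt->unit_size != c->unit_size) shown above.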

I suspect that a wrong pointer was passed to bpf_obj_drop(), but I did not
find anything suspicious after checking linked_list. Another possibility
is a write-after-free problem that corrupts the extra 8 bytes before the
freed pointer. Could you please apply the following debug patch to check
whether the extra 8 bytes are corrupted?




>>
>> Björn

-------------- next part --------------
From 69e9a281077eadcc73a49876ee6c4103ea94b257 Mon Sep 17 00:00:00 2001
From: Hou Tao <houtao1 at huawei.com>
Date: Sat, 26 Aug 2023 11:30:45 +0800
Subject: [PATCH] bpf: Debug for bpf_mem_free()

Signed-off-by: Hou Tao <houtao1 at huawei.com>
---
 kernel/bpf/memalloc.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index 662838a34629..fb4fa0605a60 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -830,6 +830,9 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
 
 void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
 {
+	struct bpf_mem_cache *from, *to;
+	struct bpf_mem_caches *cc;
+	static int once;
 	int idx;
 
 	if (!ptr)
@@ -839,7 +842,20 @@ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
 	if (idx < 0)
 		return;
 
-	unit_free(this_cpu_ptr(ma->caches)->cache + idx, ptr);
+	cc = this_cpu_ptr(ma->caches);
+	to = cc->cache + idx;
+	from = *(struct bpf_mem_cache **)(ptr - LLIST_NODE_SZ);
+	if (!once && to->unit_size != from->unit_size) {
+		once = true;
+		pr_err("bad cache %px: got size %u work %px, cache %px exp size %u work %px\n",
+		       from, from->unit_size, from->refill_work.func,
+		       to, to->unit_size, to->refill_work.func);
+		WARN_ON(1);
+		print_hex_dump(KERN_ERR, "", DUMP_PREFIX_OFFSET, 16, 1,
+			       from, sizeof(*from), false);
+	}
+
+	unit_free(to, ptr);
 }
 
 void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
-- 
2.29.2


