[PATCH bpf 3/3] selftests/bpf: Adjust wasted entries threshold for ARM64 BRBE

Puranjay Mohan puranjay at kernel.org
Fri Mar 13 11:03:34 PDT 2026


The get_branch_snapshot test checks that bpf_get_branch_snapshot()
doesn't waste too many branch entries on infrastructure overhead. The
threshold of < 10 was calibrated for x86 where about 7 entries are
wasted.

On ARM64, the BPF trampoline generates more branches than on x86,
resulting in about 13 wasted entries. The extra overhead comes from
__bpf_prog_exit_recur, which on ARM64 makes an out-of-line call to
__rcu_read_unlock and generates more conditional branches than the
x86 version does:

  [#24] dump_bpf_prog+0x118d0       ->  __bpf_prog_exit_recur+0x0
  [#23] __bpf_prog_exit_recur+0x78  ->  __bpf_prog_exit_recur+0xf4
  [#22] __bpf_prog_exit_recur+0xf8  ->  __bpf_prog_exit_recur+0x80
  [#21] __bpf_prog_exit_recur+0x80  ->  __rcu_read_unlock+0x0
  [#20] __rcu_read_unlock+0x24      ->  __bpf_prog_exit_recur+0x84
  [#19] __bpf_prog_exit_recur+0xe0  ->  __bpf_prog_exit_recur+0x11c
  [#18] __bpf_prog_exit_recur+0x120 ->  __bpf_prog_exit_recur+0xe8
  [#17] __bpf_prog_exit_recur+0xf0  ->  dump_bpf_prog+0x118d4

Increase the threshold to < 16 to accommodate ARM64.

The test passes after the change:

 [root@(none) bpf]# ./test_progs -t get_branch_snapshot
 #136     get_branch_snapshot:OK
 Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Puranjay Mohan <puranjay at kernel.org>
---
 .../selftests/bpf/prog_tests/get_branch_snapshot.c       | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
index 0394a1156d99..dcb0ba3d6285 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
@@ -116,13 +116,14 @@ void serial_test_get_branch_snapshot(void)
 
 	ASSERT_GT(skel->bss->test1_hits, 6, "find_looptest_in_lbr");
 
-	/* Given we stop LBR in software, we will waste a few entries.
+	/* Given we stop LBR/BRBE in software, we will waste a few entries.
 	 * But we should try to waste as few as possible entries. We are at
-	 * about 7 on x86_64 systems.
-	 * Add a check for < 10 so that we get heads-up when something
+	 * about 7 on x86_64 and about 13 on arm64 systems (the arm64 BPF
+	 * trampoline generates more branches than x86_64).
+	 * Add a check for < 16 so that we get heads-up when something
 	 * changes and wastes too many entries.
 	 */
-	ASSERT_LT(skel->bss->wasted_entries, 10, "check_wasted_entries");
+	ASSERT_LT(skel->bss->wasted_entries, 16, "check_wasted_entries");
 
 cleanup:
 	get_branch_snapshot__destroy(skel);
-- 
2.52.0



