[PATCH bpf-next 0/2] bpf, riscv: use BPF prog pack allocator in BPF JIT
Björn Töpel
bjorn at kernel.org
Sun Aug 13 13:27:54 PDT 2023
Puranjay Mohan <puranjay12 at gmail.com> writes:
> BPF programs currently consume one page each on RISC-V. For systems with many
> BPF programs, this adds significant pressure to the instruction TLB. High iTLB
> pressure usually causes a slowdown for the whole system.
>
> Song Liu introduced the BPF prog pack allocator[1] to mitigate the above issue.
> It packs multiple BPF programs into a single huge page. It is currently only
> enabled for the x86_64 BPF JIT.
>
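(Aside, for readers new to the pack allocator: a JIT that uses it emits into a
writable scratch buffer, and the core code then copies the finished image into
the shared read-only huge-page region. Below is a minimal sketch of that flow,
modelled on how the x86_64 JIT uses the helpers in kernel/bpf/core.c -- the
wrapper name jit_with_prog_pack() is made up, error handling is trimmed, and
the exact signatures should be checked against the current tree:

  /* Sketch only, not taken from this series. */
  #include <linux/bpf.h>
  #include <linux/filter.h>
  #include <linux/string.h>

  /* Arch fill-hole callback: all-zero is an illegal encoding on RISC-V. */
  static void bpf_fill_ill_insns(void *area, unsigned int size)
  {
          memset(area, 0, size);
  }

  static struct bpf_prog *jit_with_prog_pack(struct bpf_prog *prog,
                                             unsigned int prog_size)
  {
          struct bpf_binary_header *ro_header, *rw_header;
          u8 *ro_image, *rw_image;

          /* Reserve a slot in the read-only/executable huge-page pack plus
           * a temporary writable buffer to JIT into.
           */
          ro_header = bpf_jit_binary_pack_alloc(prog_size, &ro_image, 4,
                                                &rw_header, &rw_image,
                                                bpf_fill_ill_insns);
          if (!ro_header)
                  return prog;

          /* ... emit instructions into rw_image, with addresses computed
           * against ro_image, the final executable location ...
           */

          /* Copy the image into the pack (via bpf_arch_text_copy()) and
           * release the writable scratch buffer.
           */
          if (bpf_jit_binary_pack_finalize(prog, ro_header, rw_header))
                  return prog;

          prog->bpf_func = (void *)ro_image;
          prog->jited = 1;
          prog->jited_len = prog_size;
          return prog;
  }
)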
> I enabled this allocator on the ARM64 BPF JIT[2]. It is being reviewed now.
>
> This patch series enables the BPF prog pack allocator for the RISCV BPF JIT.
> This series needs a patch[3] from the ARM64 series to work.
>
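(On top of the core allocator, the arch has to provide text-poking hooks so the
image can be written into, and later wiped from, the read-only pack. A rough
sketch of what those hooks could look like on RISC-V, assuming they sit on top
of the existing patch_text_nosync() helper -- I have not checked the patches
here for the exact implementation, so locking and error handling are guesses:

  #include <linux/bpf.h>
  #include <linux/err.h>
  #include <linux/memory.h>
  #include <asm/patch.h>

  void *bpf_arch_text_copy(void *dst, void *src, size_t len)
  {
          int ret;

          /* Write the JITed image into the read-only huge-page region. */
          mutex_lock(&text_mutex);
          ret = patch_text_nosync(dst, src, len);
          mutex_unlock(&text_mutex);

          return ret ? ERR_PTR(-EINVAL) : dst;
  }

  int bpf_arch_text_invalidate(void *dst, size_t len)
  {
          /* Overwrite a freed slot with an illegal pattern (all-zero is an
           * illegal encoding on RISC-V) so stale code cannot be run.
           */
          u8 zeroes[64] = {};
          size_t n;
          int ret = 0;

          mutex_lock(&text_mutex);
          while (len && !ret) {
                  n = min_t(size_t, len, sizeof(zeroes));
                  ret = patch_text_nosync(dst, zeroes, n);
                  dst += n;
                  len -= n;
          }
          mutex_unlock(&text_mutex);

          return ret;
  }

Once the image lives in the pack, the JIT only ever writes through these
hooks.)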
> ======================================================
> Performance Analysis of prog pack allocator on RISCV64
> ======================================================
>
> Test setup:
> ===========
>
> Host machine: Debian GNU/Linux 11 (bullseye)
> Qemu Version: QEMU emulator version 8.0.3 (Debian 1:8.0.3+dfsg-1)
> u-boot-qemu Version: 2023.07+dfsg-1
> opensbi Version: 1.3-1
>
> To test the performance of the BPF prog pack allocator on RISC-V, the stresser
> tool[4] linked below was built. This tool loads 8 BPF programs on the system
> and triggers 5 of them in an infinite loop by making system calls.
>
> The runner script starts 20 instances of the above, which loads 8*20=160 BPF
> programs on the system, 5*20=100 of which are constantly triggered.
> The script is passed a command that is then run in this environment.
>
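(The triggering side of such a stresser is essentially just libbpf loading an
object and spinning on a cheap syscall. A minimal userspace sketch of the idea
-- the object file name and attach points are made up here, not taken from the
actual tool[4]:

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <bpf/libbpf.h>

  int main(void)
  {
          struct bpf_object *obj;
          struct bpf_program *prog;

          /* "stress.bpf.o" is a placeholder object containing e.g. syscall
           * tracepoint programs; the real tool[4] differs.
           */
          obj = bpf_object__open_file("stress.bpf.o", NULL);
          if (!obj || bpf_object__load(obj))
                  return 1;

          /* Attach every program found in the object. */
          bpf_object__for_each_program(prog, obj) {
                  if (!bpf_program__attach(prog))
                          return 1;
          }

          /* Trigger the attached programs in a tight loop. */
          for (;;)
                  syscall(SYS_getpid);
  }
)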
> The script was run with the following perf command:
> ./run.sh "perf stat -a \
> -e iTLB-load-misses \
> -e dTLB-load-misses \
> -e dTLB-store-misses \
> -e instructions \
> --timeout 60000"
>
> The output of the above command, before and after enabling the BPF prog pack
> allocator, is discussed below.
>
> The tests were run on qemu-system-riscv64 with 8 CPUs and 16G of memory. The
> rootfs was created using Björn's riscv-cross-builder[5] Docker container
> linked below.
Back in the saddle! Sorry for the horribly late reply...
Did you run the test_progs kselftests, and did they pass without regressions?
I ran a test without/with your series (plus the patch from the arm64 series
that you pointed out), and I'm seeing regressions with this series:
w/o Summary: 318/3114 PASSED, 27 SKIPPED, 60 FAILED
w/ Summary: 299/3026 PASSED, 33 SKIPPED, 79 FAILED
I did the test on commit 4c75bf7e4a0e ("Merge tag 'kbuild-fixes-v6.5-2' of
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild").
I'm re-running and investigating now.
Björn