[PATCH bpf-next v4 2/3] arm64/cfi,bpf: Support kCFI + BPF on arm64
Puranjay Mohan
puranjay12 at gmail.com
Mon May 13 09:39:28 PDT 2024
Maxwell Bland <mbland at motorola.com> writes:
This patch has a subtle difference from the patch that I sent in v2 [1].
Unfortunately, you didn't test this. :(
It will break BPF on an ARM64 kernel compiled with CONFIG_CFI_CLANG=y.
See below:
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 76b91f36c729..703247457409 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -17,6 +17,7 @@
>  #include <asm/asm-extable.h>
>  #include <asm/byteorder.h>
>  #include <asm/cacheflush.h>
> +#include <asm/cfi.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/insn.h>
>  #include <asm/patching.h>
> @@ -162,6 +163,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
>  	emit(insn, ctx);
>  }
>
> +static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
> +{
> +	if (IS_ENABLED(CONFIG_CFI_CLANG))
> +		emit(hash, ctx);
> +}
> +
>  /*
>   * Kernel addresses in the vmalloc space use at most 48 bits, and the
>   * remaining bits are guaranteed to be 0x1. So we can compose the address
> @@ -337,6 +344,7 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
>   *
>   */
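For context: the 32-bit word that emit_kcfi() places in front of the JITed
image is the kCFI type hash, which callers built with CONFIG_CFI_CLANG
verify before making an indirect call. Roughly, in C (this is only a
sketch; the real check is an inline sequence generated by clang that ends
in a BRK on mismatch, and 'cfi_checked_call' / 'expected_type_hash' are
made-up names for illustration):

	/* Rough model of a kCFI-instrumented indirect call site. */
	static int cfi_checked_call(int (*fn)(void *), void *arg,
				    u32 expected_type_hash)
	{
		/* the word written by emit_kcfi() sits at fn - 4 */
		u32 hash = *(u32 *)((unsigned long)fn - 4);

		if (hash != expected_type_hash)
			__builtin_trap();	/* real code: BRK, then the CFI handler */

		return fn(arg);
	}

So the hash must sit immediately in front of the entry point that callers
actually use, and it is pure data that is never executed.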
In my original patch the hunk here looked something like:
--- >8 ---
-	const int idx0 = ctx->idx;
 	int cur_offset;
 	/*
@@ -332,6 +338,8 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
 	 *
 	 */
+	emit_kcfi(is_subprog ? cfi_bpf_subprog_hash : cfi_bpf_hash, ctx);
+	const int idx0 = ctx->idx;
--- 8< ---
Moving 'const int idx0 = ctx->idx;' after emit_kcfi() is important because
this 'idx0' is later used like:

	cur_offset = ctx->idx - idx0;
	if (cur_offset != PROLOGUE_OFFSET) {
		pr_err_once("PROLOGUE_OFFSET = %d, expected %d!\n",
			    cur_offset, PROLOGUE_OFFSET);
		return -1;
	}
With the current version, when I boot the kernel I get:
[ 0.499207] bpf_jit: PROLOGUE_OFFSET = 13, expected 12!
and now no BPF program can be JITed!
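To spell out where the extra instruction comes from (a sketch only, with
emit_kcfi() shown without the is_subprog selection and "..." standing for
the normal prologue instructions):

	/* Ordering in this patch: the hash word is counted. */
	const int idx0 = ctx->idx;	/* idx0 == 0 */
	emit_kcfi(cfi_bpf_hash, ctx);	/* ctx->idx becomes 1 */
	/* ... the 12 real prologue instructions ... */
	/* cur_offset = ctx->idx - idx0 == 13 != PROLOGUE_OFFSET (12) */

	/* Ordering in my v2: the hash is emitted before idx0 is captured. */
	emit_kcfi(cfi_bpf_hash, ctx);	/* not counted */
	const int idx0 = ctx->idx;	/* idx0 == 1 */
	/* ... the 12 real prologue instructions ... */
	/* cur_offset = ctx->idx - idx0 == 12 == PROLOGUE_OFFSET */

With the first ordering, build_prologue() returns -1 for every program, so
the JIT always bails out.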
Please fix this in the next version and test it by running:
./tools/testing/selftests/bpf/test_progs
Pay attention to the `rbtree_success` and the `dummy_st_ops` tests; they
are the important ones for this change.
[1] https://lore.kernel.org/all/20240324211518.93892-2-puranjay12@gmail.com/
Thanks,
Puranjay