[PATCH bpf-next v3 2/2] bpf, arm64: calculate offset as byte-offset for bpf line info
Daniel Borkmann
daniel at iogearbox.net
Fri Feb 18 15:20:58 PST 2022
On 2/8/22 2:25 AM, Hou Tao wrote:
> The insn_to_jit_off array passed to bpf_prog_fill_jited_linfo() is
> calculated at instruction granularity instead of byte granularity, but
> bpf line info requires byte offsets, so fix it by calculating
> ctx->offset as a byte offset. bpf2a64_offset() needs to return a
> relative instruction offset based on ctx->offset, so update it
> accordingly.
>
> Fixes: 37ab566c178d ("bpf: arm64: Enable arm64 jit to provide bpf_line_info")
> Signed-off-by: Hou Tao <houtao1 at huawei.com>
> ---
> arch/arm64/net/bpf_jit_comp.c | 16 +++++++++++-----
> 1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 68b35c83e637..aed07cba78ec 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -164,9 +164,14 @@ static inline int bpf2a64_offset(int bpf_insn, int off,
> /*
> * Whereas arm64 branch instructions encode the offset
> * from the branch itself, so we must subtract 1 from the
> - * instruction offset.
> + * instruction offset. The unit of ctx->offset is byte, so
> + * subtract AARCH64_INSN_SIZE from it. bpf2a64_offset()
> + * returns instruction offset, so divide by AARCH64_INSN_SIZE
> + * at the end.
> */
> - return ctx->offset[bpf_insn + off] - (ctx->offset[bpf_insn] - 1);
> + return (ctx->offset[bpf_insn + off] -
> + (ctx->offset[bpf_insn] - AARCH64_INSN_SIZE)) /
> + AARCH64_INSN_SIZE;
> }
>
> static void jit_fill_hole(void *area, unsigned int size)
> @@ -1087,13 +1092,14 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
> const struct bpf_insn *insn = &prog->insnsi[i];
> int ret;
>
> + /* BPF line info needs byte-offset instead of insn-offset */
> if (ctx->image == NULL)
> - ctx->offset[i] = ctx->idx;
> + ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
> ret = build_insn(insn, ctx, extra_pass);
> if (ret > 0) {
> i++;
> if (ctx->image == NULL)
> - ctx->offset[i] = ctx->idx;
> + ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
> continue;
> }
> if (ret)
> @@ -1105,7 +1111,7 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
> * instruction (end of program)
> */
> if (ctx->image == NULL)
> - ctx->offset[i] = ctx->idx;
> + ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
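To see the new arithmetic in isolation, here is a minimal user-space sketch
(illustrative only: the offset[] contents below are invented, and
AARCH64_INSN_SIZE is assumed to be 4 bytes as on arm64 -- this is not the
kernel code itself):

/*
 * Stand-alone sketch of bpf2a64_offset() after the patch: offset[]
 * holds the (invented) JIT byte offset at which each BPF instruction
 * starts, and the result is a distance in arm64 instructions.
 */
#include <stdio.h>

#define AARCH64_INSN_SIZE	4

/* Byte offset of the first arm64 insn emitted for each BPF insn. */
static const int offset[] = { 0, 8, 12, 24, 28 };

static int bpf2a64_offset(int bpf_insn, int off)
{
	/*
	 * arm64 branches encode the offset relative to the branch
	 * instruction itself, hence one instruction worth of bytes is
	 * subtracted before converting back to an instruction count.
	 */
	return (offset[bpf_insn + off] -
		(offset[bpf_insn] - AARCH64_INSN_SIZE)) / AARCH64_INSN_SIZE;
}

int main(void)
{
	/* Distance in arm64 insns between entries 1 and 3 of offset[]. */
	printf("branch offset: %d arm64 insns\n", bpf2a64_offset(1, 2));
	return 0;
}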
Both patches look good to me. For this one specifically, given bpf2a64_offset()
needs to return the relative instruction offset via ctx->offset, can't we just
simplify it like this w/o the AARCH64_INSN_SIZE back/forth dance?
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 74f9a9b6a053..72f4702a9d01 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -999,7 +999,7 @@ struct arm64_jit_data {
struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
- int image_size, prog_size, extable_size;
+ int image_size, prog_size, extable_size, i;
struct bpf_prog *tmp, *orig_prog = prog;
struct bpf_binary_header *header;
struct arm64_jit_data *jit_data;
@@ -1130,6 +1130,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
prog->jited_len = prog_size;
if (!prog->is_func || extra_pass) {
+ /* BPF line info needs byte-offset instead of insn-offset. */
+ for (i = 0; i < prog->len + 1; i++)
+ ctx.offset[i] *= AARCH64_INSN_SIZE;
bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
out_off:
kfree(ctx.offset);
--
2.21.0
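Either variant hands the same byte offsets to bpf_prog_fill_jited_linfo();
a minimal user-space sketch of that equivalence (illustrative only: the
insn_off[] contents are invented and AARCH64_INSN_SIZE is assumed to be
4 bytes -- not the kernel code):

#include <assert.h>
#include <stdio.h>

#define AARCH64_INSN_SIZE	4
#define PROG_LEN		4

int main(void)
{
	/* Invented ctx->offset[] in instruction units, prog->len + 1 entries. */
	int insn_off[PROG_LEN + 1] = { 0, 2, 3, 6, 7 };
	int byte_off[PROG_LEN + 1];
	int i;

	/* v3: ctx->offset[] carries byte offsets from the start. */
	for (i = 0; i < PROG_LEN + 1; i++)
		byte_off[i] = insn_off[i] * AARCH64_INSN_SIZE;

	/*
	 * Suggested variant: keep instruction offsets during the build
	 * and scale once, just before bpf_prog_fill_jited_linfo().
	 */
	for (i = 0; i < PROG_LEN + 1; i++)
		insn_off[i] *= AARCH64_INSN_SIZE;

	/* Either way, the line info sees the same byte offsets. */
	for (i = 0; i < PROG_LEN + 1; i++)
		assert(byte_off[i] == insn_off[i]);

	printf("line info offsets identical for both variants\n");
	return 0;
}

The only difference is where the scaling happens: per entry up front in v3,
or once at the boundary where the byte offsets are actually consumed.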