[PATCH bpf-next v3 2/2] bpf, arm64: calculate offset as byte-offset for bpf line info
Hou Tao
houtao1 at huawei.com
Mon Feb 21 19:58:37 PST 2022
Hi,
On 2/19/2022 7:20 AM, Daniel Borkmann wrote:
> On 2/8/22 2:25 AM, Hou Tao wrote:
>> The insn_to_jit_off array passed to bpf_prog_fill_jited_linfo() is
>> calculated in instruction granularity instead of byte granularity,
>> but bpf line info requires byte offsets, so fix it by calculating
>> ctx->offset as a byte offset. bpf2a64_offset() needs to return a
>> relative instruction offset based on ctx->offset, so update it
>> accordingly.
>>
>> Fixes: 37ab566c178d ("bpf: arm64: Enable arm64 jit to provide bpf_line_info")
>> Signed-off-by: Hou Tao <houtao1 at huawei.com>
>> ---
>> arch/arm64/net/bpf_jit_comp.c | 16 +++++++++++-----
>> 1 file changed, 11 insertions(+), 5 deletions(-)
>>
[snip]
>> if (ret)
>> @@ -1105,7 +1111,7 @@ static int build_body(struct jit_ctx *ctx, bool extra_pass)
>> * instruction (end of program)
>> */
>> if (ctx->image == NULL)
>> - ctx->offset[i] = ctx->idx;
>> + ctx->offset[i] = ctx->idx * AARCH64_INSN_SIZE;
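
The hunk updating bpf2a64_offset() is snipped above. With ctx->offset[] kept
in bytes, that helper has to scale the byte distance between two entries back
to instruction units before it is encoded into an arm64 branch. A minimal
sketch of such a helper, under that assumption and not the actual snipped
hunk, could look like:

	/* Sketch only: assumes ctx->offset[i] now holds the byte offset of
	 * the first arm64 instruction emitted for BPF insn i.
	 */
	static inline int bpf2a64_offset(int bpf_insn, int off,
					 const struct jit_ctx *ctx)
	{
		/* BPF JMP offset is relative to the next BPF instruction */
		bpf_insn++;
		/* arm64 branches encode the offset from the branch itself,
		 * which sits one arm64 instruction before the start of the
		 * next BPF insn's code, hence the +1 after converting the
		 * byte distance back to instructions.
		 */
		return (ctx->offset[bpf_insn + off] - ctx->offset[bpf_insn]) /
		       AARCH64_INSN_SIZE + 1;
	}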
>
> Both patches look good to me. For this one specifically, given bpf2a64_offset()
> needs to return a relative instruction offset via ctx->offset, can't we just
> simplify it like this w/o the AARCH64_INSN_SIZE back/forth dance?
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 74f9a9b6a053..72f4702a9d01 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -999,7 +999,7 @@ struct arm64_jit_data {
>
> struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> {
> - int image_size, prog_size, extable_size;
> + int image_size, prog_size, extable_size, i;
> struct bpf_prog *tmp, *orig_prog = prog;
> struct bpf_binary_header *header;
> struct arm64_jit_data *jit_data;
> @@ -1130,6 +1130,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> prog->jited_len = prog_size;
>
> if (!prog->is_func || extra_pass) {
> + /* BPF line info needs byte-offset instead of insn-offset. */
> + for (i = 0; i < prog->len + 1; i++)
> + ctx.offset[i] *= AARCH64_INSN_SIZE;
> bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
> out_off:
> kfree(ctx.offset);
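
For context, and not part of either diff: the generic helper in
kernel/bpf/core.c consumes each insn_to_jit_off[] entry as a byte offset from
the start of the jited image, roughly along these lines, which is why the
conversion only has to happen once, right before bpf_prog_fill_jited_linfo()
is called:

	/* Simplified view of bpf_prog_fill_jited_linfo(); the real function
	 * also handles per-subprog line-info ranges.
	 */
	jited_linfo[i] = prog->bpf_func +
			 insn_to_jit_off[linfo[i].insn_off - insn_start - 1];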
Your fix is much simpler. I will check whether it works.
Thanks,
Tao