[PATCH RESEND bpf-next v8 3/3] selftests/bpf: add testcase for TRACING with 6+ arguments

Menglong Dong menglong8.dong at gmail.com
Tue Jul 11 20:59:49 PDT 2023


On Wed, Jul 12, 2023 at 7:25 AM Alexei Starovoitov
<alexei.starovoitov at gmail.com> wrote:
>
> On Mon, Jul 10, 2023 at 06:48:34PM +0800, menglong8.dong at gmail.com wrote:
> > From: Menglong Dong <imagedong at tencent.com>
> >
> > Add fentry_many_args.c and fexit_many_args.c to test fentry/fexit
> > with 7 and 11 arguments. As this feature is not yet supported on
> > arm64, disable these testcases for arm64 in DENYLIST.aarch64; they
> > can be combined with fentry_test.c/fexit_test.c once arm64 support
> > is added.
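
For context, an fentry program on a function with more than six
arguments uses the same BPF_PROG() pattern as the existing fentry
tests, just with more parameters. A minimal sketch follows; the program
name, argument types and expected values are illustrative, assuming the
module-side test function takes u64, void *, short, int, void *, char,
int:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

__u64 test7_result = 0;

SEC("fentry/bpf_testmod_fentry_test7")
int BPF_PROG(test_seven_args, __u64 a, void *b, short c, int d,
	     void *e, char f, int g)
{
	/* record whether all seven arguments showed up in the ctx with
	 * the values the caller passed in (values illustrative)
	 */
	test7_result = a == 16 && b == (void *)17 && c == 18 && d == 19 &&
		       e == (void *)20 && f == 21 && g == 22;
	return 0;
}

The userspace side then only needs to read test7_result from the
skeleton's global data to check the outcome.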
> >
> > Correspondingly, add bpf_testmod_fentry_test7() and
> > bpf_testmod_fentry_test11() to bpf_testmod.c
> >
> > Meanwhile, add bpf_modify_return_test2() to test_run.c to test
> > MODIFY_RETURN with 7 arguments.
> >
> > Add bpf_testmod_test_struct_arg_7/bpf_testmod_test_struct_arg_8 in
> > bpf_testmod.c to test struct arguments.
> >
> > And the testcases passed on x86_64:
> >
> > ./test_progs -t fexit
> > Summary: 5/14 PASSED, 0 SKIPPED, 0 FAILED
> >
> > ./test_progs -t fentry
> > Summary: 3/2 PASSED, 0 SKIPPED, 0 FAILED
> >
> > ./test_progs -t modify_return
> > Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
> >
> > ./test_progs -t tracing_struct
> > Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
> >
> > Signed-off-by: Menglong Dong <imagedong at tencent.com>
> > Acked-by: Yonghong Song <yhs at fb.com>
> > ---
> > v8:
> > - split the testcases, and add fentry_many_args/fexit_many_args to
> >   DENYLIST.aarch64
> > v6:
> > - add testcases to tracing_struct.c instead of fentry_test.c and
> >   fexit_test.c
> > v5:
> > - add testcases for MODIFY_RETURN
> > v4:
> > - use different type for args in bpf_testmod_fentry_test{7,12}
> > - add testcase for garbage values in ctx
> > v3:
> > - move bpf_fentry_test{7,12} to bpf_testmod.c and rename them to
> >   bpf_testmod_fentry_test{7,12} meanwhile
> > - get return value by bpf_get_func_ret() in
> >   "fexit/bpf_testmod_fentry_test12", as we don't change ___bpf_ctx_cast()
> >   in this version
> > ---
> >  net/bpf/test_run.c                            | 23 ++++++--
> >  tools/testing/selftests/bpf/DENYLIST.aarch64  |  2 +
> >  .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 49 ++++++++++++++++-
> >  .../selftests/bpf/prog_tests/fentry_test.c    | 43 +++++++++++++--
> >  .../selftests/bpf/prog_tests/fexit_test.c     | 43 +++++++++++++--
> >  .../selftests/bpf/prog_tests/modify_return.c  | 20 ++++++-
> >  .../selftests/bpf/prog_tests/tracing_struct.c | 19 +++++++
> >  .../selftests/bpf/progs/fentry_many_args.c    | 39 ++++++++++++++
> >  .../selftests/bpf/progs/fexit_many_args.c     | 40 ++++++++++++++
> >  .../selftests/bpf/progs/modify_return.c       | 40 ++++++++++++++
> >  .../selftests/bpf/progs/tracing_struct.c      | 54 +++++++++++++++++++
> >  11 files changed, 358 insertions(+), 14 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/progs/fentry_many_args.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/fexit_many_args.c
> >
> > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> > index 63b11f7a5392..1c59fa60077b 100644
> > --- a/net/bpf/test_run.c
> > +++ b/net/bpf/test_run.c
> > @@ -565,6 +565,13 @@ __bpf_kfunc int bpf_modify_return_test(int a, int *b)
> >       return a + *b;
> >  }
> >
> > +__bpf_kfunc int bpf_modify_return_test2(int a, int *b, short c, int d,
> > +                                     void *e, char f, int g)
> > +{
> > +     *b += 1;
> > +     return a + *b + c + d + (long)e + f + g;
> > +}
> > +
> >  int noinline bpf_fentry_shadow_test(int a)
> >  {
> >       return a + 1;
> > @@ -600,9 +607,13 @@ __diag_pop();
> >
> >  BTF_SET8_START(bpf_test_modify_return_ids)
> >  BTF_ID_FLAGS(func, bpf_modify_return_test)
> > +BTF_ID_FLAGS(func, bpf_modify_return_test2)
> >  BTF_ID_FLAGS(func, bpf_fentry_test1, KF_SLEEPABLE)
> >  BTF_SET8_END(bpf_test_modify_return_ids)
> >
> > +BTF_ID_LIST(bpf_modify_return_test_id)
> > +BTF_ID(func, bpf_modify_return_test)
> > +
> >  static const struct btf_kfunc_id_set bpf_test_modify_return_set = {
> >       .owner = THIS_MODULE,
> >       .set   = &bpf_test_modify_return_ids,
> > @@ -665,9 +676,15 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
> >                       goto out;
> >               break;
> >       case BPF_MODIFY_RETURN:
> > -             ret = bpf_modify_return_test(1, &b);
> > -             if (b != 2)
> > -                     side_effect = 1;
> > +             if (prog->aux->attach_btf_id == *bpf_modify_return_test_id) {
> > +                     ret = bpf_modify_return_test(1, &b);
> > +                     if (b != 2)
> > +                             side_effect = 1;
> > +             } else {
> > +                     ret = bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7);
> > +                     if (b != 2)
> > +                             side_effect = 1;
>
> Patches 1 and 2 look good, but I don't like where this check will lead us:
> attach_btf_id == *bpf_modify_return_test_id...
>

Yeah, I don't like it either; the extra attach_btf_id check makes the
code fragile.

> When Jiri did a conversion of all test func into bpf_testmod.ko I forgot
> why we couldn't move fmod_ret tests as well.
> Whatever it was the extra attach_btf_id check will make it worse.
>

I think it's because the side effect can't be verified by the BPF
program itself, so the fmod_ret tests still have to be driven through
bpf_prog_test_run_opts().
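
Roughly, the userspace side then checks both the return value and the
side effect by decoding the retval that bpf_prog_test_run_tracing()
reports: the side effect lands in the upper 16 bits and the return
value in the lower 16 bits. A sketch of that check (prog_fd and the
want_* parameters are illustrative):

#include <test_progs.h>

static void check_fmod_ret(int prog_fd, __u16 want_side_effect,
			   __s16 want_ret)
{
	LIBBPF_OPTS(bpf_test_run_opts, topts);
	int err;

	err = bpf_prog_test_run_opts(prog_fd, &topts);
	if (!ASSERT_OK(err, "test_run"))
		return;

	/* bpf_prog_test_run_tracing() packs side_effect into the upper
	 * 16 bits of retval and the return value into the lower 16 bits
	 */
	ASSERT_EQ(topts.retval >> 16, want_side_effect, "side_effect");
	ASSERT_EQ((__s16)(topts.retval & 0xffff), want_ret, "ret");
}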

> For now please think of a way to test fmod_ret when bpf_prog_test_run_tracing()
> does something unconditional like:
>         ret = bpf_modify_return_test(1, &b);
>         if (b != 2)
>                 side_effect++;
>         ret = bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7);

Should it be like this instead?

ret += bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7);

Otherwise the return value of bpf_modify_return_test() can't be
verified (see the sketch after the quoted code).

>         if (b != 2)
>                 side_effect++;
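
In other words, the whole BPF_MODIFY_RETURN case could become something
like this (just a sketch; b is reset to 1 before the second call so the
two side-effect checks stay independent):

        ret = bpf_modify_return_test(1, &b);
        if (b != 2)
                side_effect++;

        b = 1;
        ret += bpf_modify_return_test2(1, &b, 3, 4, (void *)5, 6, 7);
        if (b != 2)
                side_effect++;

The userspace test would then expect the sum of both return values in
the lower 16 bits of retval.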

Thanks!
Menglong Dong


