[PATCH v2] arm64/module: Optimize module load time by optimizing PLT counting

Ard Biesheuvel ardb at kernel.org
Sat Jul 4 09:47:52 EDT 2020


On Sat, 4 Jul 2020 at 14:09, Will Deacon <will at kernel.org> wrote:
>
> On Fri, Jul 03, 2020 at 05:47:24PM -0700, Saravana Kannan wrote:
> > On Thu, Jul 2, 2020 at 8:30 AM Ard Biesheuvel <ardb at kernel.org> wrote:
> > > On Tue, 23 Jun 2020 at 03:27, Saravana Kannan <saravanak at google.com> wrote:
> > > > diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
> > > > index 65b08a74aec6..0ce3a28e3347 100644
> > > > --- a/arch/arm64/kernel/module-plts.c
> > > > +++ b/arch/arm64/kernel/module-plts.c
> > > > @@ -253,6 +253,40 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
> > > >         return ret;
> > > >  }
> > > >
> > > > +static bool branch_rela_needs_plt(Elf64_Sym *syms, Elf64_Rela *rela,
> > > > +                                 Elf64_Word dstidx)
> > > > +{
> > > > +
> > > > +       Elf64_Sym *s = syms + ELF64_R_SYM(rela->r_info);
> > > > +
> > > > +       if (s->st_shndx == dstidx)
> > > > +               return false;
> > > > +
> > > > +       return ELF64_R_TYPE(rela->r_info) == R_AARCH64_JUMP26 ||
> > > > +              ELF64_R_TYPE(rela->r_info) == R_AARCH64_CALL26;
> > > > +}
> > > > +
> > > > +/* Group branch PLT relas at the front end of the array. */
> > > > +static int partition_branch_plt_relas(Elf64_Sym *syms, Elf64_Rela *rela,
> > > > +                                     int numrels, Elf64_Word dstidx)
> > > > +{
> > > > +       int i = 0, j = numrels - 1;
> > > > +
> > > > +       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> > > > +               return 0;
> > > > +
> > > > +       while (i < j) {
> > > > +               if (branch_rela_needs_plt(syms, &rela[i], dstidx))
> > > > +                       i++;
> > > > +               else if (branch_rela_needs_plt(syms, &rela[j], dstidx))
> > > > +                       swap(rela[i], rela[j]);
> > >
> > > Nit: would be slightly better to put
> > >
> > >   swap(rela[i++], rela[j]);
> > >
> > > here so the next iteration of the loop will not call
> > > branch_rela_needs_plt() on rela[i] redundantly. But the current code
> > > is also correct.
> >
> > Oh yeah, I noticed that unnecessary repeat of branch_rela_needs_plt()
> > on rela[i] when j had to be decremented, but forgot to handle it after
> > I was done with all the testing.
>
> Yeah, I guess you can decrement j as well, but I just think it makes the
> logic harder to read and more error-prone if we change it later.
>

Indeed,

  swap(rela[i++], rela[j--]);

looks even better!

But you're right, it's not a big deal.
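
For the record, the whole loop with both adjustments folded in would
read something like this (untested sketch, reusing the
branch_rela_needs_plt() helper quoted above):

  while (i < j) {
          if (branch_rela_needs_plt(syms, &rela[i], dstidx))
                  i++;
          else if (branch_rela_needs_plt(syms, &rela[j], dstidx))
                  swap(rela[i++], rela[j--]); /* both ends classified */
          else
                  j--;
  }

Same invariant as before: everything below i needs a PLT and
everything above j doesn't; we just never re-test an entry we have
already classified.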


> > But I did compare it to the code I had written in v1 that didn't have
> > this extra check for rela[i]. I couldn't find any measurable
> > difference in the module load time. Maybe 1ms for the worst-case
> > module, but that could have been just run-to-run variation.
> >
> > Anyway, maybe send this as another patch since Catalin has already
> > picked up mine?
>
> I think the queued code is fine, so we don't need to micro-optimise it.
>
> Will


