[PATCH v4 5/5] arm64: mte: Inline mte_assign_mem_tag_range()
Andrey Konovalov
andreyknvl at google.com
Tue Jan 19 13:12:40 EST 2021
On Tue, Jan 19, 2021 at 4:45 PM Vincenzo Frascino
<vincenzo.frascino at arm.com> wrote:
>
> Hi Catalin,
>
> On 1/19/21 2:45 PM, Catalin Marinas wrote:
> > On Mon, Jan 18, 2021 at 06:30:33PM +0000, Vincenzo Frascino wrote:
> >> mte_assign_mem_tag_range() is called on the hot paths of the
> >> hardware tag-based KASAN mode in production. It makes sense to
> >> inline it in an attempt to reduce the overhead.
> >>
> >> Inline mte_assign_mem_tag_range() based on the indications provided at
> >> [1].
> >>
> >> [1] https://lore.kernel.org/r/CAAeHK+wCO+J7D1_T89DG+jJrPLk3X9RsGFKxJGd0ZcUFjQT-9Q@mail.gmail.com/
> >>
> >> Cc: Catalin Marinas <catalin.marinas at arm.com>
> >> Cc: Will Deacon <will at kernel.org>
> >> Signed-off-by: Vincenzo Frascino <vincenzo.frascino at arm.com>
> >> ---
> >> arch/arm64/include/asm/mte.h | 26 +++++++++++++++++++++++++-
> >> arch/arm64/lib/mte.S | 15 ---------------
> >> 2 files changed, 25 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
> >> index 237bb2f7309d..1a6fd53f82c3 100644
> >> --- a/arch/arm64/include/asm/mte.h
> >> +++ b/arch/arm64/include/asm/mte.h
> >> @@ -49,7 +49,31 @@ long get_mte_ctrl(struct task_struct *task);
> >> int mte_ptrace_copy_tags(struct task_struct *child, long request,
> >> unsigned long addr, unsigned long data);
> >>
> >> -void mte_assign_mem_tag_range(void *addr, size_t size);
> >> +static inline void mte_assign_mem_tag_range(void *addr, size_t size)
> >> +{
> >> + u64 _addr = (u64)addr;
> >> + u64 _end = _addr + size;
> >> +
> >> + /*
> >> + * This function must be invoked from an MTE-enabled context.
> >> + *
> >> + * Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
> >> + * size must be non-zero and MTE_GRANULE_SIZE aligned.
> >> + */
> >> + do {
> >> + /*
> >> + * 'asm volatile' is required to prevent the compiler from moving
> >> + * the statement outside of the loop.
> >> + */
> >> + asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
> >> + :
> >> + : "r" (_addr)
> >> + : "memory");
> >> +
> >> + _addr += MTE_GRANULE_SIZE;
> >> + } while (_addr != _end);
> >> +}
> >
> > While I'm ok with moving this function to C, I don't think it solves the
> > inlining in the kasan code. The only interface we have to kasan is via
> > mte_{set,get}_mem_tag_range(), so the above function doesn't need to
> > live in a header.
> >
> > If you do want inlining all the way to the kasan code, we should
> > probably move the mte_{set,get}_mem_tag_range() functions to the header
> > as well (and ideally backed by some numbers to show that it matters).
> >
> > Moving it to mte.c also gives us more control on how it's called (we
> > have the WARN_ONs in place in the callers).
> >
>
> Based on the thread at [1], this patch is only an intermediate step to allow
> KASAN to call mte_assign_mem_tag_range() directly in the future. At that
> point I think mte_set_mem_tag_range() can be removed.
>
> If you agree, I would leave things as they are to give Andrey a chance to
> execute on the original plan in a separate series.
I think we should drop this patch from this series, as it's unrelated.
I will pick it up into my future optimization series; then it will be
easier to discuss it in context. The important part I needed is an
inlinable C implementation of mte_assign_mem_tag_range(), which I now
have with this patch.
Thanks, Vincenzo!