[PATCH v8] mm,kfence: decouple kfence from page granularity mapping judgement
Pavan Kondeti
quic_pkondeti@quicinc.com
Tue Mar 14 04:14:22 PDT 2023
On Tue, Mar 14, 2023 at 06:08:07PM +0800, Zhenhua Huang wrote:
>
>
> On 2023/3/14 16:36, Pavan Kondeti wrote:
> > On Tue, Mar 14, 2023 at 03:05:02PM +0800, Zhenhua Huang wrote:
> > > KFENCE only needs its pool to be mapped at page granularity if it is
> > > initialized early. The previous judgement was overly protective. In [1],
> > > Mark suggested to "just map the KFENCE region a page granularity". So
> > > decouple it from that judgement and use page granularity mapping for the
> > > KFENCE pool only. Note that a late-initialized KFENCE pool still requires
> > > page granularity mapping.
> > >
> > > Page granularity mapping in theory costs more memory on the arm64
> > > platform: about 2 MB of last-level page tables per 1 GB of linear map
> > > (1 GB / 4 KB = 256K PTEs x 8 bytes). For example, tested on QEMU (1 GB of
> > > emulated RAM) with gki_defconfig and rodata protection turned off:
> > > Before:
> > > [root@liebao ]# cat /proc/meminfo
> > > MemTotal: 999484 kB
> > > After:
> > > [root@liebao ]# cat /proc/meminfo
> > > MemTotal: 1001480 kB
> > >
> > > To implement this, the KFENCE pool allocation is also moved to before the
> > > linear mapping is set up: arm64_kfence_alloc_pool() allocates the physical
> > > address, and __kfence_pool is set once the linear mapping has been set up.
> > >
> > > LINK: [1] https://lore.kernel.org/linux-arm-kernel/Y+IsdrvDNILA59UN@FVFF77S0Q05N/
> > > Suggested-by: Mark Rutland <mark.rutland@arm.com>
> > > Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
> > > ---
> > > arch/arm64/include/asm/kfence.h | 2 ++
> > > arch/arm64/mm/mmu.c | 44 +++++++++++++++++++++++++++++++++++++++++
> > > arch/arm64/mm/pageattr.c | 9 +++++++--
> > > include/linux/kfence.h | 8 ++++++++
> > > mm/kfence/core.c | 9 +++++++++
> > > 5 files changed, 70 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
> > > index aa855c6..f1f9ca2d 100644
> > > --- a/arch/arm64/include/asm/kfence.h
> > > +++ b/arch/arm64/include/asm/kfence.h
> > > @@ -10,6 +10,8 @@
> > > #include <asm/set_memory.h>
> > > +extern phys_addr_t early_kfence_pool;
> > > +
> > > static inline bool arch_kfence_init_pool(void) { return true; }
> > > static inline bool kfence_protect_page(unsigned long addr, bool protect)
> > > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > > index 6f9d889..7fbf2ed 100644
> > > --- a/arch/arm64/mm/mmu.c
> > > +++ b/arch/arm64/mm/mmu.c
> > > @@ -24,6 +24,7 @@
> > > #include <linux/mm.h>
> > > #include <linux/vmalloc.h>
> > > #include <linux/set_memory.h>
> > > +#include <linux/kfence.h>
> > > #include <asm/barrier.h>
> > > #include <asm/cputype.h>
> > > @@ -38,6 +39,7 @@
> > > #include <asm/ptdump.h>
> > > #include <asm/tlbflush.h>
> > > #include <asm/pgalloc.h>
> > > +#include <asm/kfence.h>
> > > #define NO_BLOCK_MAPPINGS BIT(0)
> > > #define NO_CONT_MAPPINGS BIT(1)
> > > @@ -525,6 +527,33 @@ static int __init enable_crash_mem_map(char *arg)
> > > }
> > > early_param("crashkernel", enable_crash_mem_map);
> > > +#ifdef CONFIG_KFENCE
> > > +
> > > +static phys_addr_t arm64_kfence_alloc_pool(void)
> > > +{
> > > + phys_addr_t kfence_pool;
> > > +
> > > + if (!kfence_sample_interval)
> > > + return 0;
> > > +
> >
> > Are you sure that kernel command line params are processed this early?
> > AFAICS, start_kernel()->parse_args() processes the kernel arguments, and
> > we are here before that. Without your patch, mm_init(), which takes care
> > of allocating the kfence memory, is called after parse_args().
> >
> > Can you check your patch with kfence.sample_interval=0 appended to the
> > kernel command line?
> >
>
> Thanks Pavan. I have tried it and you're correct. Previously I thought it
> was parsed via the earlier path
> setup_arch()->parse_early_param()->parse_early_options()->do_early_param(),
> but unfortunately that does not take effect, since kfence.sample_interval
> is a module param rather than an early param.
>
> Then is the only way left to always allocate the kfence pool early, as we
> can't get sample_interval at this early stage?
>
That would mean we would allocate the kfence pool memory even when KFENCE
is disabled from the command line. That does not sound good to me.
Is it possible to free this early-allocated memory later, in
mm_init()->kfence_alloc_pool()? If that is not possible, can we think of
adding an early param for kfence? A rough sketch of that idea is below.
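
Something along these lines might work (just a sketch, not tested; the
handler name and where it lives are my assumptions, and the final
interface is of course up for discussion):

/*
 * Illustrative sketch only: register an early_param() handler so that
 * kfence.sample_interval is already parsed in parse_early_param(),
 * before the linear map (and the early pool reservation) is set up.
 * The handler name below is hypothetical.
 */
static int __init parse_kfence_sample_interval_early(char *arg)
{
	int interval;

	if (get_option(&arg, &interval))
		kfence_sample_interval = interval;	/* 0 disables KFENCE */

	return 0;
}
early_param("kfence.sample_interval", parse_kfence_sample_interval_early);

With that, the early allocation in setup_arch() could simply be skipped
when the interval is 0.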
> > > + kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
> > > + if (!kfence_pool)
> > > + pr_err("failed to allocate kfence pool\n");
> > > +
> > If this allocation fails for whatever reason, what should be done? We
> > end up not calling kfence_set_pool(). kfence_alloc_pool() is going to
> > attempt the allocation again, but we did not set up page granularity
> > mapping. That means we would be enabling KFENCE without meeting its
> > pre-conditions. Can you check this?
>
> In this scenario, early_kfence_pool will be false (0), so we will end up
> using page granularity mapping anyway. That should be fine IMO.
>
Right, I missed that hunk in can_set_direct_map().
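
To spell out my understanding of that hunk (a sketch only; the helper
name is mine, not necessarily what the patch ends up using):

/* arch/arm64/include/asm/kfence.h (sketch) */
#ifdef CONFIG_KFENCE
extern phys_addr_t early_kfence_pool;

static inline bool arm64_kfence_can_set_direct_map(void)
{
	/* Only a late-initialised (or failed) pool forces page granularity. */
	return !early_kfence_pool;
}
#else
static inline bool arm64_kfence_can_set_direct_map(void) { return false; }
#endif

/* arch/arm64/mm/pageattr.c (sketch) */
bool can_set_direct_map(void)
{
	return rodata_full || debug_pagealloc_enabled() ||
	       arm64_kfence_can_set_direct_map();
}

So if the memblock reservation fails, the whole linear map falls back to
page granularity and KFENCE still meets its pre-conditions.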
Thanks,
Pavan