[PATCH v3] arm64/mm: avoid fixmap race condition when create pud mapping

Justin He Justin.He at arm.com
Wed Jan 26 17:31:34 PST 2022


Hi Ard

> -----Original Message-----
> From: Ard Biesheuvel <ardb at kernel.org>
> Sent: Wednesday, January 26, 2022 4:37 PM
> To: Justin He <Justin.He at arm.com>
> Cc: Catalin Marinas <Catalin.Marinas at arm.com>; Jianyong Wu
> <Jianyong.Wu at arm.com>; will at kernel.org; Anshuman Khandual
> <Anshuman.Khandual at arm.com>; akpm at linux-foundation.org; david at redhat.com;
> quic_qiancai at quicinc.com; linux-kernel at vger.kernel.org; linux-arm-
> kernel at lists.infradead.org; gshan at redhat.com; nd <nd at arm.com>
> Subject: Re: [PATCH v3] arm64/mm: avoid fixmap race condition when create
> pud mapping
> 
> On Wed, 26 Jan 2022 at 05:21, Justin He <Justin.He at arm.com> wrote:
> >
> > Hi Catalin
> >
> > > -----Original Message-----
> > > From: Catalin Marinas <catalin.marinas at arm.com>
> > > Sent: Friday, January 7, 2022 6:43 PM
> > > To: Jianyong Wu <Jianyong.Wu at arm.com>
> > > Cc: will at kernel.org; Anshuman Khandual <Anshuman.Khandual at arm.com>;
> > > akpm at linux-foundation.org; david at redhat.com; quic_qiancai at quicinc.com;
> > > ardb at kernel.org; linux-kernel at vger.kernel.org; linux-arm-
> > > kernel at lists.infradead.org; gshan at redhat.com; Justin He
> > > <Justin.He at arm.com>; nd <nd at arm.com>
> > > Subject: Re: [PATCH v3] arm64/mm: avoid fixmap race condition when
> > > create pud mapping
> > >
> > > On Fri, Jan 07, 2022 at 09:10:57AM +0000, Jianyong Wu wrote:
> > > > Hi Catalin,
> > > >
> > > > I roughly find the root cause.
> > > > alloc_init_pud() will be called at the very beginning of kernel boot
> > > > in create_mapping_noalloc(), where no memory allocator is initialized
> > > > yet. But the lockdep checks may need to allocate memory, so the kernel
> > > > takes an exception when acquiring the lock. (I have not found the
> > > > exact code that causes this issue.) That is to say, we may not be
> > > > able to use a lock this early.
> > > >
> > > > I came up with two methods to address it:
> > > > 1) Skip the deadlock check at the very beginning of kernel boot in
> > > > the lockdep code.
> > > > 2) Provide two versions of __create_pgd_mapping, one that takes the
> > > > lock and one that does not. There may be no possibility of a race on
> > > > the memory mapping at the very beginning of kernel boot, so we can
> > > > use the no-lock version of __create_pgd_mapping safely.
> > > > In my test, this issue is gone if no lock is held in
> > > > create_mapping_noalloc. I think create_mapping_noalloc is called
> > > > early enough to avoid races on the memory mapping; however, I have
> > > > not proved it.
> > >
> > > I think method 2 would work better, but rather than implementing new
> > > nolock functions I'd add a NO_LOCK flag and check it in
> > > alloc_init_pud() before mutex_lock/unlock. Also add a comment when
> > > passing the NO_LOCK flag explaining why it's needed and why there
> > > wouldn't be any races at that stage (early boot etc.).
> > >
> > The problematic code path is:
> > __primary_switched
> >         early_fdt_map->fixmap_remap_fdt
> >                 create_mapping_noalloc->alloc_init_pud
> >                         mutex_lock (with Jianyong's patch)
> >
> > The problem seems to be that we will clear the BSS segment twice if
> > KASLR is enabled. Hence, some of the static variables used in the
> > lockdep init process were messed up. That is to say, with KASLR enabled
> > we might initialize lockdep twice if we add mutex_lock/unlock in
> > alloc_init_pud().
> >
> 
> Thanks for tracking that down.
> 
> Note that clearing the BSS twice is not the root problem here. The
> root problem is that we set global state while the kernel runs at the
> default link time address, and then refer to it again after the entire
> kernel has been shifted in the kernel VA space. Such global state
> could consist of mutable pointers to statically allocated data (which
> would be reset to their default values after the relocation code runs
> again), or global pointer variables in BSS. In either case, relying on
> such a global variable after the second relocation performed by KASLR
> would be risky, and so we should avoid manipulating global state at
> all if it might involve pointer to statically allocated data
> structures.
> 
Thanks for the explanation, which makes the root cause clearer.
I have a side question related to this thread:
Should we avoid invoking early_fdt_map and init_feature_override twice
when KASLR is enabled?

Commit f6f0c4362f07 ("arm64: Extract early FDT mapping from kaslr
early_init()") implicitly invokes early_fdt_map a first time before KASLR
is applied and a second time after it.

What do you think of the changes below (tested in both guest and host boot)?

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6a98f1a38c29..3758ac057a6a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -450,12 +450,12 @@ SYM_FUNC_START_LOCAL(__primary_switched)
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
        bl      kasan_early_init
 #endif
-       mov     x0, x21                         // pass FDT address in x0
-       bl      early_fdt_map                   // Try mapping the FDT early
-       bl      init_feature_override           // Parse cpu feature overrides
 #ifdef CONFIG_RANDOMIZE_BASE
        tst     x23, ~(MIN_KIMG_ALIGN - 1)      // already running randomized?
        b.ne    0f
+       mov     x0, x21                         // pass FDT address in x0
+       bl      early_fdt_map                   // Try mapping the FDT early
+       bl      init_feature_override           // Parse cpu feature overrides
        bl      kaslr_early_init                // parse FDT for KASLR options
        cbz     x0, 0f                          // KASLR disabled? just proceed
        orr     x23, x23, x0                    // record KASLR offset


--
Cheers,
Justin (Jia He)
