[PATCH] arm64/kernel: Always use level 2 or higher for early mappings

Ard Biesheuvel ardb at kernel.org
Tue Mar 11 00:01:37 PDT 2025


On Tue, 11 Mar 2025 at 05:12, Anshuman Khandual
<anshuman.khandual at arm.com> wrote:
>
>
>
> On 3/11/25 00:00, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb at kernel.org>
> >
> > The page table population code in map_range() uses a recursive algorithm
> > to create the early mappings of the kernel, the DTB, and the ID mapped
> > text and data pages. It fails to take into account that these page
> > tables cannot be constructed in precisely the same way at each level:
> > block mappings are not permitted at every level, and the code as it
> > exists today might inadvertently create such a forbidden block mapping
> > if it were used to map a region of suitable size and alignment.
> >
> > This never happens in practice, given the limited size of the assets
> > being mapped by the early boot code. Nonetheless, it would be better if
> > this code would behave correctly in all circumstances.
> >
> > So, for any page size, permit block mappings only at level 2 and page
> > mappings only at level 3, and use table mappings exclusively at all
> > other levels. This change should have no impact in practice, but it
> > makes the code more robust.
> >
> > Cc: Anshuman Khandual <anshuman.khandual at arm.com>
> > Reported-by: Ryan Roberts <ryan.roberts at arm.com>
> > Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> > ---
> >  arch/arm64/kernel/pi/map_range.c | 17 +++++++++--------
> >  1 file changed, 9 insertions(+), 8 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/pi/map_range.c b/arch/arm64/kernel/pi/map_range.c
> > index 2b69e3beeef8..025bb9f0aa0b 100644
> > --- a/arch/arm64/kernel/pi/map_range.c
> > +++ b/arch/arm64/kernel/pi/map_range.c
> > @@ -40,17 +40,10 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
> >       /* Advance tbl to the entry that covers start */
> >       tbl += (start >> (lshift + PAGE_SHIFT)) % PTRS_PER_PTE;
> >
> > -     /*
> > -      * Set the right block/page bits for this level unless we are
> > -      * clearing the mapping
> > -      */
> > -     if (protval)
> > -             protval |= (level < 3) ? PMD_TYPE_SECT : PTE_TYPE_PAGE;
> > -
> >       while (start < end) {
> >               u64 next = min((start | lmask) + 1, PAGE_ALIGN(end));
> >
> > -             if (level < 3 && (start | next | pa) & lmask) {
> > +             if (level < 2 || (level == 2 && (start | next | pa) & lmask)) {
> >                       /*
> >                        * This chunk needs a finer grained mapping. Create a
> >                        * table mapping if necessary and recurse.
> > @@ -64,6 +57,14 @@ void __init map_range(u64 *pte, u64 start, u64 end, u64 pa, pgprot_t prot,
> >                                 (pte_t *)(__pte_to_phys(*tbl) + va_offset),
> >                                 may_use_cont, va_offset);
> >               } else {
> > +                     /*
> > +                      * Set the right block/page bits for this level unless
> > +                      * we are clearing the mapping
> > +                      */
> > +                     if (protval)
> > +                             protval |= (level == 2) ? PMD_TYPE_SECT
> > +                                                     : PTE_TYPE_PAGE;
> > +
> >                       /*
> >                        * Start a contiguous range if start and pa are
> >                        * suitably aligned
>
> LGTM, and as Ryan mentioned, this is D128 ready as well!
>
> Reviewed-by: Anshuman Khandual <anshuman.khandual at arm.com>

Thanks, both, but the patch is actually buggy, so I'll need to respin this.
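
For reference, the level policy the patch is aiming for can be sketched in
isolation. The following is a minimal userspace sketch, not the actual
map_range() code; needs_table(), leaf_protval() and the SECT_BIT/PAGE_BIT
values are illustrative stand-ins for the real arm64 descriptor bits:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative constants standing in for PMD_TYPE_SECT/PTE_TYPE_PAGE;
 * the real values live in the arm64 pgtable headers.
 */
#define SECT_BIT	0x1ULL
#define PAGE_BIT	0x3ULL

/*
 * Decide whether the chunk [start, next) mapping physical address pa
 * needs a finer grained (table) mapping at this level. lmask is the
 * block size mask at this level.
 */
static bool needs_table(int level, uint64_t start, uint64_t next,
			uint64_t pa, uint64_t lmask)
{
	if (level < 2)		/* levels 0 and 1: table mappings only */
		return true;
	if (level == 2)		/* level 2: block only if suitably aligned */
		return (start | next | pa) & lmask;
	return false;		/* level 3: always a leaf (page) entry */
}

/* Pick the leaf descriptor type, unless we are clearing the mapping */
static uint64_t leaf_protval(int level, uint64_t protval)
{
	if (protval)
		protval |= (level == 2) ? SECT_BIT : PAGE_BIT;
	return protval;
}

int main(void)
{
	/* 2 MiB block mask, as for 4k pages at level 2 */
	uint64_t lmask = (1ULL << 21) - 1;

	/* Aligned 2 MiB chunk at level 2: block mapping allowed (0) */
	printf("%d\n", needs_table(2, 0x40200000, 0x40400000, 0x40200000, lmask));
	/* Same chunk at level 1: must recurse into a table (1) */
	printf("%d\n", needs_table(1, 0x40200000, 0x40400000, 0x40200000, lmask));
	(void)leaf_protval(2, 0);
	return 0;
}

The key point is that the alignment check only ever matters at level 2:
levels 0 and 1 recurse unconditionally, and level 3 always installs page
descriptors.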


