[PATCH] arm64: reserve [_text, _stext) virtual address range

Will Deacon will at kernel.org
Tue Mar 11 07:17:18 PDT 2025


On Tue, Mar 11, 2025 at 02:32:47PM +0100, Ard Biesheuvel wrote:
> On Tue, 11 Mar 2025 at 13:54, Will Deacon <will at kernel.org> wrote:
> >
> > [+Ard]
> >
> > On Mon, Mar 10, 2025 at 01:05:04PM -0700, Omar Sandoval wrote:
> > > From: Omar Sandoval <osandov at fb.com>
> > >
> > > Since the referenced fixes commit, the kernel's .text section is only
> > > mapped starting from _stext; the region [_text, _stext) is omitted. As a
> > > result, other vmalloc/vmap allocations may use the virtual addresses
> > > nominally in the range [_text, _stext). This address reuse confuses
> > > multiple things:
> > >
> > > 1. crash_prepare_elf64_headers() sets up a segment in /proc/vmcore
> > >    mapping the entire range [_text, _end) to
> > >    [__pa_symbol(_text), __pa_symbol(_end)). Reading an address in
> > >    [_text, _stext) from /proc/vmcore therefore gives the incorrect
> > >    result.

[...]

> > > @@ -765,13 +769,17 @@ core_initcall(map_entry_trampoline);
> > >   */
> > >  static void __init declare_kernel_vmas(void)
> > >  {
> > > -     static struct vm_struct vmlinux_seg[KERNEL_SEGMENT_COUNT];
> > > +     static struct vm_struct vmlinux_seg[KERNEL_SEGMENT_COUNT + 1];
> > >
> > > -     declare_vma(&vmlinux_seg[0], _stext, _etext, VM_NO_GUARD);
> > > -     declare_vma(&vmlinux_seg[1], __start_rodata, __inittext_begin, VM_NO_GUARD);
> > > -     declare_vma(&vmlinux_seg[2], __inittext_begin, __inittext_end, VM_NO_GUARD);
> > > -     declare_vma(&vmlinux_seg[3], __initdata_begin, __initdata_end, VM_NO_GUARD);
> > > -     declare_vma(&vmlinux_seg[4], _data, _end, 0);
> > > +     declare_vma(&vmlinux_seg[0], _text, _stext, VM_NO_GUARD);
> >
> > Should we also put the memblock reservation back as it was, so that this
> > region can't be allocated there?
> >
> 
> The issue is about the virtual address space, not the physical memory
> behind it, right? So the VA range should be protected from reuse, but
> nothing needs to be mapped there.

You're absolutely right, but now I'm more confused about the reference
to crash_prepare_elf64_headers() in the commit message. That sets both
the virtual (_text) and the physical (__pa_symbol(_text)) addresses in
the header, so it feels like we really need to keep that memory around
because it's accessible via /proc/vmcore.

> 
> > In fact, if we're not allocating from here, why don't we just map it
> > anyway but without execute permissions?
> >
> 
> It's just 64k so if this is the simplest approach, I won't object.
> 
> I wonder if this needs to be so intrusive, though - there is already a
> precedent of VMAs not actually mapping the entire region they describe
> (with guard pages), and so we might just declare the first VMA as
> [_text, _etext), even though the first 64k of that region is not
> actually mapped.
> 
> However, if that confuses the bookkeeping or creates other problems,
> declaring a separate VMA to reserve the VA range seems fine, although
> the patch seems a bit intrusive (and I don't even see the whole
> thing).

As above, I think we'll have to give /proc/vmcore the physical address
of _something_.

Will
