[PATCH v4 13/15] arm64: mm: Unmap kernel data/bss entirely from the linear map

Kevin Brodsky kevin.brodsky at arm.com
Mon May 4 01:52:03 PDT 2026


On 29/04/2026 19:37, Ard Biesheuvel wrote:
> On Wed, 29 Apr 2026, at 15:55, Kevin Brodsky wrote:
>> On 27/04/2026 17:34, Ard Biesheuvel wrote:
>>> From: Ard Biesheuvel <ardb at kernel.org>
>>>
>>> The linear aliases of the kernel text and rodata are mapped read-only in
>>> the linear map as well. Given that the contents of these regions are
>>> mostly identical to the version in the loadable image, mapping them
>>> read-only and leaving their contents visible is a reasonable hardening
>>> measure.
>>>
>>> Data and bss, however, are now also mapped read-only but the contents of
>>> these regions are more likely to contain data that we'd rather not leak.
>> That sounds like a good rationale, but I wonder: is there anything
>> stopping us from unmapping text/rodata as well?
>>
> There is the zero page now, which may be accessed via
> 'page_address(ZERO_PAGE(0))'. Also, anything that dereferences page tables
> (like /sys/kernel/debug/kernel_page_tables) will expect to have read-only
> access to swapper_pg_dir.

Isn't swapper_pg_dir always accessed via the kernel mapping? If the zero
page is the only data that actually needs to be accessed via the linear
map, maybe we could move it alongside fixmap_pgdir so that we can unmap
everything else from the linear map?

>>> So let's unmap these entirely in the linear map when the kernel is
>>> running normally.
>>>
>>> When going into hibernation or waking up from it, these regions need to
>>> be mapped, so map the region initially, and toggle the valid bit to
>>> map/unmap the region as needed.
>> Doesn't safe_copy_page() already handle that? I suppose this is an
>> optimisation to avoid modifying the linear map for every page, but if so
>> it would be good to spell it out.
>>
> Uhm, good question.
>
> When hibernate was first implemented for arm64, we had to bring back the
> linear alias of the kernel image, and when I started working on this, I
> hadn't realised that we now have safe_copy_page(), which should take care
> of this even if the linear alias is invalid.
>
> However, if I remove this handling, things break mysteriously, and it
> is a bit tricky to debug, so it may take me some time to answer this
> question. In any case, I will address this in the next revision, and
> put you on cc.

Sounds good, thanks!

>>> [...]
>>>  
>>>  #ifdef CONFIG_KFENCE
>>> @@ -1162,7 +1198,7 @@ static void __init map_mem(void)
>>>  
>>>  	/* Map the kernel data/bss so it can be remapped later */
>>>  	__map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL),
>>> -		       flags);
>>> +		       flags | NO_BLOCK_MAPPINGS);
>> Might be an obvious question but why do we need this?
>>
> set_memory_valid() only works on regions that are mapped down to pages.

AFAIU since [1] this is no longer the case. Even if we don't have
BBML2-noabort, we should be able to modify a block-mapped region, as
long as we're not splitting any block (which should not happen here
since we're always changing permissions on the same range).

- Kevin

[1]
https://lore.kernel.org/all/20250917190323.3828347-1-yang@os.amperecomputing.com/




More information about the linux-arm-kernel mailing list