[PATCH v2 01/10] arm64: Move the zero page to rodata

Ryan Roberts ryan.roberts at arm.com
Tue Jan 27 01:34:13 PST 2026


On 26/01/2026 09:26, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb at kernel.org>
> 
> The zero page should contain only zero bytes, and so mapping it
> read-write is unnecessary. Combine it with reserved_pg_dir, which lives
> in the read-only region of the kernel, and already serves a similar
> purpose.
> 
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
>  arch/arm64/kernel/vmlinux.lds.S | 1 +
>  arch/arm64/mm/mmu.c             | 3 +--
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index ad6133b89e7a..b2a093f5b3fc 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -229,6 +229,7 @@ SECTIONS
>  #endif
>  
>  	reserved_pg_dir = .;
> +	empty_zero_page = .;
>  	. += PAGE_SIZE;
>  
>  	swapper_pg_dir = .;

Isn't there a magic macro for getting from swapper_pg_dir to reserved_pg_dir?
Won't that need updating?

/*
 *  Open-coded (swapper_pg_dir - reserved_pg_dir) as this cannot be calculated
 *  until link time.
 */
#define RESERVED_SWAPPER_OFFSET	(PAGE_SIZE)
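
For what it's worth, a hedged sketch of a link-time guard that would catch the
offset going stale (only if vmlinux.lds.S doesn't already carry something
equivalent; the wording below is mine, not from the patch, and assumes the
header defining RESERVED_SWAPPER_OFFSET is visible to the linker script):

/* Fail the link if the open-coded offset no longer matches the layout. */
ASSERT(swapper_pg_dir - reserved_pg_dir == RESERVED_SWAPPER_OFFSET,
       "RESERVED_SWAPPER_OFFSET does not match the linker script layout")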


> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9ae7ce00a7ef..c36422a3fae2 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -66,9 +66,8 @@ long __section(".mmuoff.data.write") __early_cpu_boot_status;
>  
>  /*
>   * Empty_zero_page is a special page that is used for zero-initialized data
> - * and COW.
> + * and COW. Defined in the linker script.
>   */
> -unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
>  EXPORT_SYMBOL(empty_zero_page);

What's the benefit of giving it its own place in the linker script vs just
declaring it const and letting it be placed in .rodata?
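
For concreteness, a minimal sketch of the alternative I mean (untested; whether
an all-zero const array reliably lands in .rodata without explicit section
placement is exactly the part I'm unsure about):

/*
 * Hypothetical alternative: let the toolchain place a const, page-aligned,
 * zero-filled array in .rodata instead of aliasing reserved_pg_dir in
 * vmlinux.lds.S. The extern declaration in asm/pgtable.h would presumably
 * need a matching const qualifier.
 */
const unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
	__aligned(PAGE_SIZE);
EXPORT_SYMBOL(empty_zero_page);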

Thanks,
Ryan

>  
>  static DEFINE_SPINLOCK(swapper_pgdir_lock);
