[PATCH] arm64: fix rodata=full again
Will Deacon
will at kernel.org
Mon Nov 7 07:42:37 PST 2022
On Thu, Nov 03, 2022 at 06:00:15PM +0100, Ard Biesheuvel wrote:
> Commit 2e8cff0a0eee87b2 ("arm64: fix rodata=full") addressed a couple of
> issues with the rodata= kernel command line option, which is not a
> simple boolean on arm64 and was inadvertently broken by changes in the
> generic bool handling.
>
> Unfortunately, the resulting code never clears the rodata_full boolean
> variable if it defaults to true and rodata=on or rodata=off is passed,
> as the generic code is not aware of the existence of this variable.
>
> Given the way this code is plumbed together, clearing rodata_full when
> returning false from arch_parse_debug_rodata() may result in
> inconsistencies if the generic code decides that it cannot parse the
> right hand side, so the best way to deal with this is to only take
> rodata_full into account if rodata_enabled is also true.
>
> Fixes: 2e8cff0a0eee87b2 ("arm64: fix rodata=full")
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
> arch/arm64/mm/pageattr.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index d107c3d434e22455..5922178d7a064c1c 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -26,7 +26,7 @@ bool can_set_direct_map(void)
> * mapped at page granularity, so that it is possible to
> * protect/unprotect single pages.
> */
> - return rodata_full || debug_pagealloc_enabled() ||
> + return (rodata_enabled && rodata_full) || debug_pagealloc_enabled() ||
> IS_ENABLED(CONFIG_KFENCE);
> }
>
> @@ -102,7 +102,8 @@ static int change_memory_common(unsigned long addr, int numpages,
> * If we are manipulating read-only permissions, apply the same
> * change to the linear mapping of the pages that back this VM area.
> */
> - if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> + if (rodata_enabled &&
> + rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
> pgprot_val(clear_mask) == PTE_RDONLY)) {
> for (i = 0; i < area->nr_pages; i++) {
> __change_memory_common((u64)page_address(area->pages[i]),
> --
> 2.35.1
Hmm, I dislike how error-prone this is, but thanks for the fix:
Acked-by: Will Deacon <will at kernel.org>
Will