[PATCH] riscv: Align on L1_CACHE_BYTES when STRICT_KERNEL_RWX
Atish Patra
atishp at atishpatra.org
Tue Nov 17 01:23:02 EST 2020
On Mon, Nov 16, 2020 at 4:58 AM Sebastien Van Cauwenberghe
<svancau at gmail.com> wrote:
>
> From 5690c2f91d87a007babb13e2d2c9c45d1ff68b7a Mon Sep 17 00:00:00 2001
> From: Sebastien Van Cauwenberghe <svancau at gmail.com>
> Date: Mon, 16 Nov 2020 13:37:32 +0100
> Subject: [PATCH] riscv: Align on L1_CACHE_BYTES when STRICT_KERNEL_RWX is
> disabled
>
> Allows sections to be aligned on smaller boundaries when strict kernel
> RWX is disabled, resulting in a smaller kernel image.
>
> Signed-off-by: Sebastien Van Cauwenberghe <svancau at gmail.com>
> ---
> arch/riscv/include/asm/set_memory.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4c5bae7ca01c..172e63d942b0 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -27,14 +27,14 @@ int set_direct_map_default_noflush(struct page *page);
>
> #endif /* __ASSEMBLY__ */
>
> -#ifdef CONFIG_ARCH_HAS_STRICT_KERNEL_RWX
> +#ifdef CONFIG_STRICT_KERNEL_RWX
> #ifdef CONFIG_64BIT
> #define SECTION_ALIGN (1 << 21)
> #else
> #define SECTION_ALIGN (1 << 22)
> #endif
> -#else /* !CONFIG_ARCH_HAS_STRICT_KERNEL_RWX */
> +#else /* !CONFIG_STRICT_KERNEL_RWX */
> #define SECTION_ALIGN L1_CACHE_BYTES
> -#endif /* CONFIG_ARCH_HAS_STRICT_KERNEL_RWX */
> +#endif /* CONFIG_STRICT_KERNEL_RWX */
>
> #endif /* _ASM_RISCV_SET_MEMORY_H */
> --
> 2.28.0
>
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv
Thanks for the fix.
Reviewed-by: Atish Patra <atish.patra at wdc.com>
--
Regards,
Atish