[PATCH 3/8] arm64: Remove CONFIG_VMAP_STACK conditionals from THREAD_SHIFT and THREAD_ALIGN

Breno Leitao leitao at debian.org
Mon Jul 7 09:01:03 PDT 2025


Now that VMAP_STACK is always enabled on arm64, remove the
CONFIG_VMAP_STACK conditional logic from the definitions of THREAD_SHIFT
and THREAD_ALIGN in arch/arm64/include/asm/memory.h. This simplifies the
code by unconditionally setting THREAD_ALIGN to (2 * THREAD_SIZE) and
adjusting the THREAD_SHIFT definition so that it depends only on
MIN_THREAD_SHIFT and PAGE_SHIFT.

This change reflects the updated arm64 stack model, in which all kernel
threads use virtually mapped stacks with guard pages, and ensures that
stack alignment and stack sizing are handled consistently.

Signed-off-by: Breno Leitao <leitao at debian.org>
---
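Note (not part of the commit message): below is a minimal userspace sketch
of the overflow check that the THREAD_ALIGN comment in memory.h refers to.
The THREAD_SHIFT value of 14 (16 KiB stacks) and the helper names are
assumptions chosen for illustration, not the kernel's configuration.

    #include <stdint.h>
    #include <stdio.h>

    #define THREAD_SHIFT 14UL                  /* assumed: 16 KiB stacks */
    #define THREAD_SIZE  (1UL << THREAD_SHIFT)
    #define THREAD_ALIGN (2 * THREAD_SIZE)

    /*
     * A stack spans [base, base + THREAD_SIZE) with base aligned to
     * THREAD_ALIGN, so bit THREAD_SHIFT is clear for every in-bounds
     * stack pointer and becomes set once sp falls into the guard
     * region below the stack. The arm64 entry assembly performs the
     * same test with a single tbnz instruction.
     */
    static int sp_overflowed(uintptr_t sp)
    {
            return (sp & (1UL << THREAD_SHIFT)) != 0;
    }

    int main(void)
    {
            uintptr_t base = 4 * THREAD_ALIGN; /* THREAD_ALIGN-aligned */

            printf("in bounds:  %d\n", sp_overflowed(base + THREAD_SIZE - 64));
            printf("overflowed: %d\n", sp_overflowed(base - 64));
            return 0;
    }

Doubling the alignment does not grow the stack itself; it only constrains
where the stack is placed so that the single-bit test above stays valid.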
 arch/arm64/include/asm/memory.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 717829df294e..5213248e081b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -118,7 +118,7 @@
  * VMAP'd stacks are allocated at page granularity, so we must ensure that such
  * stacks are a multiple of page size.
  */
-#if defined(CONFIG_VMAP_STACK) && (MIN_THREAD_SHIFT < PAGE_SHIFT)
+#if (MIN_THREAD_SHIFT < PAGE_SHIFT)
 #define THREAD_SHIFT		PAGE_SHIFT
 #else
 #define THREAD_SHIFT		MIN_THREAD_SHIFT
@@ -135,11 +135,7 @@
  * checking sp & (1 << THREAD_SHIFT), which we can do cheaply in the entry
  * assembly.
  */
-#ifdef CONFIG_VMAP_STACK
 #define THREAD_ALIGN		(2 * THREAD_SIZE)
-#else
-#define THREAD_ALIGN		THREAD_SIZE
-#endif
 
 #define IRQ_STACK_SIZE		THREAD_SIZE
 

-- 
2.47.1