[PATCH v2] riscv: add irq stack support

Arnd Bergmann arnd at arndb.de
Mon Mar 7 11:19:35 PST 2022


On Mon, Mar 7, 2022 at 3:08 PM Jisheng Zhang <jszhang at kernel.org> wrote:
>
> Currently, IRQs are still handled on the kernel stack of the current
> task on riscv platforms. If the task has a deep call stack at the time
> an interrupt arrives, and handling the interrupt also requires a deep
> stack, a stack overflow is possible.
>
> Before this patch, stack_max_size on a v5.17-rc1 kernel running on
> a Lichee RV board reported:
> ~ # cat /sys/kernel/debug/tracing/stack_max_size
> 3736
>
> After this patch,
> ~ # cat /sys/kernel/debug/tracing/stack_max_size
> 3176
>
> We reduce the max kernel stack usage by 560 bytes!
>
> Viewed from another angle, this patch also makes it possible to reduce
> THREAD_SIZE to 8KB on RV64 platforms. This is especially useful for
> systems with little memory, e.g. the Allwinner D1S, which is RV64 but
> has only 64MB of DDR.
>
> Signed-off-by: Jisheng Zhang <jszhang at kernel.org>

Very nice!

> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index ed29e9c8f660..57c9b64e16a5 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -126,12 +126,39 @@ skip_context_tracking:
>          */
>         bge s4, zero, 1f
>
> -       la ra, ret_from_exception
> +       /* preserve the sp */
> +       move s0, sp
>
> -       /* Handle interrupts */
>         move a0, sp /* pt_regs */
> +
> +       /*
> +        * Compare sp with the base of the task stack.
> +        * If the top ~(THREAD_SIZE - 1) bits match, we are on a task stack,
> +        * and should switch to the irq stack.
> +        */
> +       REG_L t0, TASK_STACK(tp)
> +       xor t0, t0, s0
> +       li t1, ~(THREAD_SIZE - 1)
> +       and t0, t0, t1
> +       bnez t0, 2f
> +
> +       la t1, irq_stack
> +       REG_L t2, TASK_TI_CPU(tp)
> +       slli t2, t2, RISCV_LGPTR
> +       add t1, t1, t2
> +       REG_L t2, 0(t1)
> +       li t1, IRQ_STACK_SIZE
> +       /* switch to the irq stack */
> +       add sp, t2, t1
> +
> +2:

What is the benefit of doing this in assembler? Is it measurably faster?

I see that arm64 does the same thing in C code; for maintainability it
would be best to share a common implementation.
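
For reference, a rough sketch of how the C side might look, loosely
modelled on arm64's do_interrupt_handler(); the riscv-side helpers
(on_thread_stack(), a call_on_irq_stack() asm trampoline) do not exist
today and are assumptions here, not code from this patch:

	/*
	 * Sketch only, loosely modelled on arm64's do_interrupt_handler().
	 * on_thread_stack() and the small asm trampoline
	 * call_on_irq_stack() would have to be provided for riscv.
	 */
	static void do_interrupt_handler(struct pt_regs *regs,
					 void (*handler)(struct pt_regs *))
	{
		if (on_thread_stack())
			/* switch sp to this CPU's irq stack, run handler, switch back */
			call_on_irq_stack(regs, handler);
		else
			/* already on the irq stack (nested case), stay put */
			handler(regs);
	}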

> +
> +       for_each_possible_cpu(cpu) {
> +#ifdef CONFIG_VMAP_STACK
> +               void *s = __vmalloc_node(IRQ_STACK_SIZE, THREAD_ALIGN,
> +                                        THREADINFO_GFP, cpu_to_node(cpu),
> +                                        __builtin_return_address(0));
> +#else
> +               void *s = (void *)__get_free_pages(GFP_KERNEL, get_order(IRQ_STACK_SIZE));
> +#endif

On a related topic: is there a reason to still keep the non-VMAP_STACK
code path around? I see that it is currently optional for 64-bit with MMU,
but not available otherwise. The benefits should still outweigh the downside
(mainly virtual address space usage) on 32-bit, especially when this allows
a common implementation. Not sure about NOMMU, but I would guess it is
not a big issue to use the same code there either, since vmalloc on nommu
just turns into kmalloc anyway.
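
If the non-VMAP_STACK path were indeed dropped, the per-cpu allocation
quoted above could collapse to a single branch. A sketch only; the
irq_stack[] pointer array name is assumed from the asm hunk, the exact
variable in the patch is not shown here:

	/* sketch: single allocation path if VMAP_STACK became mandatory */
	for_each_possible_cpu(cpu) {
		void *s = __vmalloc_node(IRQ_STACK_SIZE, THREAD_ALIGN,
					 THREADINFO_GFP, cpu_to_node(cpu),
					 __builtin_return_address(0));

		irq_stack[cpu] = s;	/* per-cpu pointer array, name assumed */
	}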

         Arnd
