[PATCH] arm64: make irq_stack_ptr more robust

Shi, Yang yang.shi at linaro.org
Fri Feb 12 09:54:27 PST 2016


On 2/12/2016 9:38 AM, Shi, Yang wrote:
> On 2/12/2016 5:47 AM, James Morse wrote:
>> Hi!
>>
>> On 11/02/16 21:53, Yang Shi wrote:
>>> Switching between stacks is only valid if we are tracing ourselves
>>> while on the irq_stack, so it is only valid when tracing current in a
>>> non-preemptible context; otherwise it is just zeroed out.
>>
>> Given it was picked up with CONFIG_DEBUG_PREEMPT:
>>
>> Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with
>> irq_stack")
>
> Will add in v2.
>
>>
>>
>>> Signed-off-by: Yang Shi <yang.shi at linaro.org>
>>> ---
>>>   arch/arm64/kernel/stacktrace.c | 13 ++++++-------
>>>   arch/arm64/kernel/traps.c      | 11 ++++++++++-
>>>   2 files changed, 16 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/arm64/kernel/stacktrace.c
>>> b/arch/arm64/kernel/stacktrace.c
>>> index 12a18cb..d9751a4 100644
>>> --- a/arch/arm64/kernel/stacktrace.c
>>> +++ b/arch/arm64/kernel/stacktrace.c
>>> @@ -44,14 +44,13 @@ int notrace unwind_frame(struct task_struct *tsk,
>>> struct stackframe *frame)
>>>       unsigned long irq_stack_ptr;
>>>
>>>       /*
>>> -     * Use raw_smp_processor_id() to avoid false-positives from
>>> -     * CONFIG_DEBUG_PREEMPT. get_wchan() calls unwind_frame() on
>>> sleeping
>>> -     * task stacks, we can be pre-empted in this case, so
>>> -     * {raw_,}smp_processor_id() may give us the wrong value. Sleeping
>>> -     * tasks can't ever be on an interrupt stack, so regardless of cpu,
>>> -     * the checks will always fail.
>>> +     * Switching between stacks is valid when tracing current and in
>>> +     * non-preemptible context.
>>>        */
>>> -    irq_stack_ptr = IRQ_STACK_PTR(raw_smp_processor_id());
>>> +    if (tsk == current && !preemptible())
>>> +        irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>>> +    else
>>> +        irq_stack_ptr = 0;
>>>
>>>       low  = frame->sp;
>>>       /* irq stacks are not THREAD_SIZE aligned */
>>> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
>>> index cbedd72..7d8db3a 100644
>>> --- a/arch/arm64/kernel/traps.c
>>> +++ b/arch/arm64/kernel/traps.c
>>> @@ -146,9 +146,18 @@ static void dump_instr(const char *lvl, struct
>>> pt_regs *regs)
>>>   static void dump_backtrace(struct pt_regs *regs, struct task_struct
>>> *tsk)
>>>   {
>>>       struct stackframe frame;
>>> -    unsigned long irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>>> +    unsigned long irq_stack_ptr;
>>>       int skip;
>>>
>>> +    /*
>>> +     * Switching between  stacks is valid when tracing current and in
>>
>> Nit: Two spaces: "between[ ][ ]stacks"
>
> Will fix in v2.
>
>>
>>
>>> +     * non-preemptible context.
>>> +     */
>>> +    if (tsk == current && !preemptible())
>>> +        irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
>>> +    else
>>> +        irq_stack_ptr = 0;
>>> +
>>>       pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
>>>
>>>       if (!tsk)
>>>
>>
>> Neither file includes 'linux/preempt.h' for the definition of
>> preemptible().
>> (I can't talk: I should have included smp.h for smp_processor_id())
>
> I tried building the kernel both with and without preempt; both
> work. Also, I saw that arch/arm64/include/asm/Kbuild has:
>
> generic-y += preempt.h
>
> So it sounds like preempt.h is included by default.

In addition, linux/sched.h, which is included by both traps.c and 
stacktrace.c, includes preempt.h already.
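For reference, the guard the patch adds in both unwind_frame() and
dump_backtrace() boils down to the check below. This is a minimal
user-space sketch of that logic only; choose_irq_stack_ptr() and
FAKE_IRQ_STACK_PTR are illustrative stand-ins, not the kernel's
IRQ_STACK_PTR() macro or preemptible() helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for IRQ_STACK_PTR(cpu); not the kernel macro. */
#define FAKE_IRQ_STACK_PTR 0xffff000008150000UL

/*
 * Mirror of the patch's logic: the unwinder may only switch onto the
 * irq_stack when it is tracing the current task and cannot be migrated
 * to another CPU, i.e. in a non-preemptible context. Otherwise the
 * pointer is zeroed so the irq_stack bounds checks always fail safely.
 */
static uint64_t choose_irq_stack_ptr(bool tracing_current, bool is_preemptible)
{
	if (tracing_current && !is_preemptible)
		return FAKE_IRQ_STACK_PTR;
	return 0;
}
```

A sleeping task traced via get_wchan() hits the (tracing_current ==
false) case and gets 0, which is why the earlier raw_smp_processor_id()
workaround is no longer needed.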

Yang

>
> Thanks,
> Yang
>
>>
>>
>> Acked-by: James Morse <james.morse at arm.com>
>> Tested-by: James Morse <james.morse at arm.com>
>>
>>
>> Thanks!
>>
>> James
>>
>>
>
