am335x: 5.18.x: system stalling

Arnd Bergmann arnd at arndb.de
Fri May 27 05:53:50 PDT 2022


On Fri, May 27, 2022 at 11:50 AM Yegor Yefremov
<yegorslists at googlemail.com> wrote:
>
> # zcat /proc/config.gz | grep 'CONFIG_ARCH_MULTI_V6\|CONFIG_SMP'
> # CONFIG_ARCH_MULTI_V6 is not set
> CONFIG_ARCH_MULTI_V6_V7=y
> CONFIG_SMP=y
> CONFIG_SMP_ON_UP=y
>
> No stalls.
>
> # zcat /proc/config.gz | grep 'CONFIG_ARCH_MULTI_V6\|CONFIG_SMP\|ARCH_OMAP2'
> CONFIG_ARCH_MULTI_V6=y
> CONFIG_ARCH_MULTI_V6_V7=y
> CONFIG_ARCH_OMAP2=y
> CONFIG_ARCH_OMAP2PLUS=y
> CONFIG_ARCH_OMAP2PLUS_TYPICAL=y
>
> No stalls.
>
> As soon as I enable both SMP and OMAP2 options the system stalls.

Ok, that points to the SMP patching for percpu data, which does not
happen before the patch in question, and which is handled differently
for loadable modules than for built-in kernel code.
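
For reference, here is a minimal sketch (not the exact kernel code, and
with made-up function names) of the two ways the per-cpu offset can be
obtained: the normal SMP path reads it back from TPIDRPRW, where
set_my_cpu_offset() stored it, while the debug patch below instead
always uses CPU 0's entry in the __per_cpu_offset[] table maintained by
the generic per-cpu code.

/* simplified sketch only; the real code is arch/arm/include/asm/percpu.h */
extern unsigned long __per_cpu_offset[];

static inline unsigned long offset_from_tpidrprw(void)
{
        unsigned long off;

        /* normal SMP path: read back the offset that
         * set_my_cpu_offset() stored in TPIDRPRW */
        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off));
        return off;
}

static inline unsigned long offset_cpu0_only(void)
{
        /* what the debug patch below does: always use CPU 0's entry
         * from the generic per-cpu offset table */
        return __per_cpu_offset[0];
}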

Can you try out this patch to short-circuit the logic and always return
the offset for CPU 0? This is obviously broken on SMP machines, but it
would get around the bit of code that is specific to V6+SMP.

        Arnd

diff --git a/arch/arm/include/asm/percpu.h b/arch/arm/include/asm/percpu.h
index 7545c87c251f..3057c5fef970 100644
--- a/arch/arm/include/asm/percpu.h
+++ b/arch/arm/include/asm/percpu.h
@@ -25,10 +25,13 @@ static inline void set_my_cpu_offset(unsigned long off)
        asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off) : "memory");
 }

+extern unsigned long __per_cpu_offset[];
 static __always_inline unsigned long __my_cpu_offset(void)
 {
        unsigned long off;

+       return __per_cpu_offset[0];
+
        /*
         * Read TPIDRPRW.
         * We want to allow caching the value, so avoid using volatile and


