Out-of-bounds access when hartid >= NR_CPUS

Anup Patel anup at brainfault.org
Thu Oct 28 09:21:31 PDT 2021


On Thu, Oct 28, 2021 at 8:39 PM Palmer Dabbelt <palmer at dabbelt.com> wrote:
>
> On Wed, 27 Oct 2021 16:34:15 PDT (-0700), atishp at atishpatra.org wrote:
> > On Tue, Oct 26, 2021 at 2:34 AM Heiko Stübner <heiko at sntech.de> wrote:
> >>
> >> Am Dienstag, 26. Oktober 2021, 10:57:26 CEST schrieb Geert Uytterhoeven:
> >> > Hi Heiko,
> >> >
> >> > On Tue, Oct 26, 2021 at 10:53 AM Heiko Stübner <heiko at sntech.de> wrote:
> >> > > Am Dienstag, 26. Oktober 2021, 08:44:31 CEST schrieb Geert Uytterhoeven:
> >> > > > On Tue, Oct 26, 2021 at 2:37 AM Ron Economos <re at w6rz.net> wrote:
> >> > > > > On 10/25/21 8:54 AM, Geert Uytterhoeven wrote:
> >> > > > > > When booting a kernel with CONFIG_NR_CPUS=4 on Microchip PolarFire,
> >> > > > > > the 4th CPU either fails to come online, or the system crashes.
> >> > > > > >
> >> > > > > > This happens because PolarFire has 5 CPU cores: hart 0 is an e51,
> >> > > > > > and harts 1-4 are u54s, with the latter becoming CPUs 0-3 in Linux:
> >> > > > > >    - unused core has hartid 0 (sifive,e51),
> >> > > > > >    - processor 0 has hartid 1 (sifive,u74-mc),
> >> > > > > >    - processor 1 has hartid 2 (sifive,u74-mc),
> >> > > > > >    - processor 2 has hartid 3 (sifive,u74-mc),
> >> > > > > >    - processor 3 has hartid 4 (sifive,u74-mc).
> >> > > > > >
> >> > > > > > I assume the same issue is present on the SiFive fu540 and fu740
> >> > > > > > SoCs, but I don't have access to these.  The issue is not present
> >> > > > > > on StarFive JH7100, as processor 0 has hartid 1, and processor 1 has
> >> > > > > > hartid 0.
> >> > > > > >
> >> > > > > > arch/riscv/kernel/cpu_ops.c has:
> >> > > > > >
> >> > > > > >      void *__cpu_up_stack_pointer[NR_CPUS] __section(".data");
> >> > > > > >      void *__cpu_up_task_pointer[NR_CPUS] __section(".data");
> >> > > > > >
> >> > > > > >      void cpu_update_secondary_bootdata(unsigned int cpuid,
> >> > > > > >                                         struct task_struct *tidle)
> >> > > > > >      {
> >> > > > > >              int hartid = cpuid_to_hartid_map(cpuid);
> >> > > > > >
> >> > > > > >              /* Make sure tidle is updated */
> >> > > > > >              smp_mb();
> >> > > > > >              WRITE_ONCE(__cpu_up_stack_pointer[hartid],
> >> > > > > >                         task_stack_page(tidle) + THREAD_SIZE);
> >> > > > > >              WRITE_ONCE(__cpu_up_task_pointer[hartid], tidle);
> >> > > > > >      }
> >> > > > > >
> >> > > > > > The above two writes cause out-of-bounds accesses beyond
> >> > > > > > __cpu_up_{stack,task}_pointer[] if hartid >= CONFIG_NR_CPUS.
> >> >
> >> > > > https://riscv.org/wp-content/uploads/2017/05/riscv-privileged-v1.10.pdf
> >> > > > says:
> >> > > >
> >> > > >     Hart IDs might not necessarily be numbered contiguously in a
> >> > > >     multiprocessor system, but at least one hart must have a hart
> >> > > >     ID of zero.
> >> > > >
> >> > > > Which means indexing arrays by hart ID is a no-go?
> >> > >
> >> > > Isn't that also similar on aarch64?
> >> > >
> >> > > On a rk3399 you get 0-3 and 100-101 and with the paragraph above
> >> > > something like this could very well exist on some riscv cpu too I guess.
> >> >
> >> > Yes, it looks like hart IDs are similar to MPIDRs on ARM.
> >>
> >> and they have the set_cpu_logical_map construct to map hwids
> >> to a contiguous list of cpu-ids.
> >>
> >> So with hartids not necessarily being contiguous, it looks like
> >> riscv would need a similar mechanism.
> >>
> >
> > RISC-V already has a similar mechanism, cpuid_to_hartid_map. Logical
> > cpu ids are contiguous while hartids can be sparse.
> >
> > The issue here is that __cpu_up_stack/task_pointer are per-hart, but
> > the array size depends on NR_CPUS, which counts logical CPUs.
> >
> > That's why, having a maximum number of hartids defined in config will
> > be helpful.
>
> I don't understand why we'd have both: if we can't find a CPU number for
> a hart, then all we can do is just leave it offline.  Wouldn't it be
> simpler to just rely on NR_CPUS?  We'll need to fix the crashes on
> overflows either way.

For HSM ops, we can easily fix this limitation because the HART
start call has an opaque parameter which can be used to specify TP
and SP for the HART being brought up.

For spinwait ops, I don't see much value in fixing the sparse hartid
problem, so let's document it and add appropriate out-of-bounds
checks on the array accesses in spinwait ops.

Regards,
Anup



More information about the linux-riscv mailing list