[PATCH 1/1] lib: sbi: relax scoldboot_lock spinlock

Anup Patel Anup.Patel at wdc.com
Fri Aug 14 05:20:28 EDT 2020



> -----Original Message-----
> From: Heinrich Schuchardt <xypron.glpk at gmx.de>
> Sent: 14 August 2020 14:46
> To: Anup Patel <Anup.Patel at wdc.com>; Damien Le Moal
> <Damien.LeMoal at wdc.com>; atishp at atishpatra.org; seanga2 at gmail.com
> Cc: Atish Patra <Atish.Patra at wdc.com>; bmeng.cn at gmail.com;
> jrtc27 at jrtc27.com; opensbi at lists.infradead.org; merle at hardenedlinux.org
> Subject: Re: [PATCH 1/1] lib: sbi: relax scoldboot_lock spinlock
> 
> On 14.08.20 09:03, Anup Patel wrote:
> >
> >
> >> -----Original Message-----
> >> From: Damien Le Moal <Damien.LeMoal at wdc.com>
> >> Sent: 14 August 2020 11:49
> >> To: Anup Patel <Anup.Patel at wdc.com>; atishp at atishpatra.org;
> >> xypron.glpk at gmx.de; seanga2 at gmail.com
> >> Cc: jrtc27 at jrtc27.com; Atish Patra <Atish.Patra at wdc.com>;
> >> bmeng.cn at gmail.com; opensbi at lists.infradead.org;
> >> merle at hardenedlinux.org
> >> Subject: Re: [PATCH 1/1] lib: sbi: relax scoldboot_lock spinlock
> >>
> >> On 2020/08/14 15:08, Anup Patel wrote:
> >>>> The following works for me. However, the sbi_ecall_console_puts()
> >>>> call in test_main() for the test FW_PAYLOAD does not work.
> >>>> Individual calls to sbi_ecall_console_putc() do work, but not the
> >>>> full string. It looks like the string is broken, so the data
> >>>> section may be broken... Digging into that.
> >>>>
> >>>> diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
> >>>> index a7fb848..9692cd9 100644
> >>>> --- a/lib/sbi/sbi_init.c
> >>>> +++ b/lib/sbi/sbi_init.c
> >>>> @@ -84,69 +84,18 @@ static void sbi_boot_prints(struct sbi_scratch *scratch, u32 hartid)
> >>>>         sbi_hart_pmp_dump(scratch);
> >>>>  }
> >>>>
> >>>> -static spinlock_t coldboot_lock = SPIN_LOCK_INITIALIZER;
> >>>> -static unsigned long coldboot_done = 0;
> >>>> -static struct sbi_hartmask coldboot_wait_hmask = { 0 };
> >>>> +static atomic_t coldboot_done = ATOMIC_INITIALIZER(0);
> >>>>
> >>>> -static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
> >>>> +static inline void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
> >>>>  {
> >>>> -       unsigned long saved_mie, cmip;
> >>>> -       const struct sbi_platform *plat = sbi_platform_ptr(scratch);
> >>>> -
> >>>> -       /* Save MIE CSR */
> >>>> -       saved_mie = csr_read(CSR_MIE);
> >>>> -
> >>>> -       /* Set MSIE bit to receive IPI */
> >>>> -       csr_set(CSR_MIE, MIP_MSIP);
> >>>> -
> >>>> -       /* Acquire coldboot lock */
> >>>> -       spin_lock(&coldboot_lock);
> >>>> -
> >>>> -       /* Mark current HART as waiting */
> >>>> -       sbi_hartmask_set_hart(hartid, &coldboot_wait_hmask);
> >>>> -
> >>>> -       /* Wait for coldboot to finish using WFI */
> >>>> -       while (!coldboot_done) {
> >>>> -               spin_unlock(&coldboot_lock);
> >>>> -               do {
> >>>> -                       wfi();
> >>>> -                       cmip = csr_read(CSR_MIP);
> >>>> -                } while (!(cmip & MIP_MSIP));
> >>>> -               spin_lock(&coldboot_lock);
> >>>> -       };
> >>>> -
> >>>> -       /* Unmark current HART as waiting */
> >>>> -       sbi_hartmask_clear_hart(hartid, &coldboot_wait_hmask);
> >>>> -
> >>>> -       /* Release coldboot lock */
> >>>> -       spin_unlock(&coldboot_lock);
> >>>> -
> >>>> -       /* Restore MIE CSR */
> >>>> -       csr_write(CSR_MIE, saved_mie);
> >>>> -
> >>>> -       /* Clear current HART IPI */
> >>>> -       sbi_platform_ipi_clear(plat, hartid);
> >>>> +       /* Wait for coldboot to finish */
> >>>> +       while (!atomic_read(&coldboot_done)) {};
> 
> 
> Atomicity of accesses to coldboot_done is not needed for the cold hart to
> signal that it has proceeded far enough to let the warm harts continue.
> 
> As Jessica pointed out, for inter-thread communication we have to make
> sure that compiler optimizations do not reorder the accesses
> incorrectly. A good description can be found in
> 
>     https://www.kernel.org/doc/Documentation/memory-barriers.txt
> 
> The Linux kernel defines
> 
>     #define barrier() __asm__ __volatile__("": : :"memory")
> 
> as a compiler barrier.
> 
> I think we can completely remove the 'coldboot_lock' spin-lock and use
> barrier() instead.

coldboot_lock protects the coldboot_wait_hmask. We should keep the
coldboot_lock for this bitmask and make coldboot_done atomic, roughly
as sketched below.
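
A minimal sketch of what that combination might look like, based on the
existing wait_for_coldboot() quoted above (illustrative only, not the
final patch):

static spinlock_t coldboot_lock = SPIN_LOCK_INITIALIZER;
static struct sbi_hartmask coldboot_wait_hmask = { 0 };
static atomic_t coldboot_done = ATOMIC_INITIALIZER(0);

static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
{
        unsigned long saved_mie, cmip;
        const struct sbi_platform *plat = sbi_platform_ptr(scratch);

        /* Save MIE CSR and set MSIE bit to receive IPI */
        saved_mie = csr_read(CSR_MIE);
        csr_set(CSR_MIE, MIP_MSIP);

        /* Mark current HART as waiting; the lock only guards the bitmask */
        spin_lock(&coldboot_lock);
        sbi_hartmask_set_hart(hartid, &coldboot_wait_hmask);
        spin_unlock(&coldboot_lock);

        /*
         * Sleep in WFI and re-check the atomic flag only after an IPI,
         * so waiting HARTs do not hammer the interconnect.
         */
        while (!atomic_read(&coldboot_done)) {
                do {
                        wfi();
                        cmip = csr_read(CSR_MIP);
                } while (!(cmip & MIP_MSIP));
        }

        /* Unmark current HART as waiting */
        spin_lock(&coldboot_lock);
        sbi_hartmask_clear_hart(hartid, &coldboot_wait_hmask);
        spin_unlock(&coldboot_lock);

        /* Restore MIE CSR and clear current HART IPI */
        csr_write(CSR_MIE, saved_mie);
        sbi_platform_ipi_clear(plat, hartid);
}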

The atomic_read() and atomic_write() are regular reads/writes paired
with RISC-V barriers.
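
For reference, a sketch of those accessors as plain loads/stores paired
with fences (modeled on lib/sbi/riscv_atomic.c; the exact fence
operands may differ):

long atomic_read(atomic_t *atom)
{
        long ret = atom->counter;  /* regular load */
        rmb();                     /* read memory barrier (fence) */
        return ret;
}

void atomic_write(atomic_t *atom, long value)
{
        atom->counter = value;     /* regular store */
        wmb();                     /* write memory barrier (fence) */
}

So a waiting HART pays only for a fence per poll, not for an AMO
transaction on the interconnect.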

Regards,
Anup

> 
> Best regards
> 
> Heinrich
> 
> >>>
> >>> This is over-simplified and will slow down booting in the cold boot path.
> >>>
> >>> Imagine the boot CPU in the cold boot path while all secondary
> >>> CPUs bombard the interconnect with atomic operations.
> >>
> >> That was just a quick test. With this, the kendryte goes to S mode
> >> and starts running the FW payload, which still does not execute
> >> correctly. So there is another problem somewhere with the kendryte
> >> platform. Not sure what.
> >>
> >>>
> >>> It has to be a combination of atomic coldboot_done and WFI to avoid
> >>> unnecessary traffic in the interconnect.
> >>>
> >>> We just need to make coldboot_done atomic and stop using
> >>> coldboot_lock for coldboot_done. The rest should stay as-is.
> >>
> >> Without the spinlock, there will be problems with coldboot_wait_hmask
> >> if it is used as is. But I do not think that one is necessary. The
> >> boot hart can just send an IPI to all waiting harts. The waiting
> >> harts do not need to clear that mask.
> >
> > There is a contradiction in your statement here. How does the boot hart
> > know which HARTs are waiting for an IPI? In real-world systems, we
> > can have faulty parts where certain CPUs never come up, so we can't
> > assume that all CPUs are waiting in WFI.
> >
> > The coldboot_wait_hmask is certainly required, and if we use the
> > spinlock just for coldboot_wait_hmask then it is fine, because it
> > won't be taken in a loop.
> >
> > Regards,
> > Anup
> >
> >>
> >>
> >> --
> >> Damien Le Moal
> >> Western Digital Research
> >


