[PATCH 1/1] lib: sbi: relax coldboot_lock spinlock

Heinrich Schuchardt xypron.glpk at gmx.de
Thu Aug 13 14:03:32 EDT 2020


On 13.08.20 19:03, Jessica Clarke wrote:
> On 13 Aug 2020, at 17:51, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>> On 13.08.20 18:36, Jessica Clarke wrote:
>>> On 13 Aug 2020, at 17:24, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>>>>
>>>> OpenSBI hangs in wake_coldboot_harts() indefinitely on the MaixDuino board
>>>> when trying to boot U-Boot as FW_PAYLOAD.
>>>
>>> This smells of there being a more fundamental issue that's not being
>>> addressed which this manages to work around, either reliably or by
>>> chance. What is the actual deadlock condition you observe?
>>
>> There are only two functions using the coldboot_lock spinlock:
>>
>> * wake_coldboot_harts()
>> * wait_for_coldboot()
>>
>> Without patching one of these functions, wake_coldboot_harts() was
>> entered but never left.
>>
>> The Kendryte K210 has two harts. One takes the cold boot path while
>> the other is sent down the warm boot path.
>
> The K210's platform definition uses clint_ipi_send, which as far as I
> can tell is entirely non-blocking. The only thing wake_coldboot_harts
> does with the lock held is set coldboot_done to 1 and send an IPI to
> all the other harts. Thus wake_coldboot_harts should never be able to
> block holding the spin lock, meaning wait_for_coldboot should
> eventually wake up and be able to acquire the lock (on platforms that
> implement WFI fully it shouldn't have to wait long for the lock, though
> on platforms that make it a NOP there might be a bit of busy-waiting).
>
> Please further investigate (or explain if you already know) exactly how
> you end up in a deadlock situation between wake_coldboot_harts and
> wait_for_coldboot, as I cannot currently see how it could ever be
> possible.
>
>>>> Even if reading and writing coldboot_done is not atomic it is safe to check
>>>> it without using a spinlock.
>>>
>>> Maybe microarchitecturally on the specific hardware in question (though
>>> I wouldn't even be so certain about that), but definitely not in C. You
>>> now have a data race in reading coldboot_done, which isn't even
>>> volatile^, so a sufficiently smart compiler would be completely within
>>> its right to do whatever it wants to the code, including things like
>>> delete the loop entirely (or, slightly less dramatic, turn it into an
>>> if). This patch introduces undefined behaviour.
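
To illustrate the data-race point with a sketch (not actual compiler
output, and assuming nothing in the loop body acts as a compiler
barrier for coldboot_done): once the flag is read without any
synchronization, the compiler may assume nothing else modifies it, so

    while (!coldboot_done)
            /* wfi/MIP_MSIP poll */;

may legally be compiled as if it were

    if (!coldboot_done)
            for (;;)
                    /* wfi/MIP_MSIP poll */;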

The cold boot path prints to the serial console, so it is the slow
path. By the time it arrives at wake_coldboot_harts(), the spinlock is
already held by wait_for_coldboot().

Two loops are running at the same time: the loop in
wait_for_coldboot(), which releases and re-acquires the lock
repeatedly, and the loop inside spin_lock() called from
wake_coldboot_harts().

The warm hart releases the spinlock. The cold hart detects this in
spin_lock_check(), but in the meantime the warm hart has taken the
spinlock again, so the cold hart's amoswap.w fails. This can go on
forever.

It is only due to the different relative speeds of the two loops that
other processors get out of this situation.

A loop like the one in wait_for_coldboot(), which re-acquires a lock
immediately without giving other harts sufficient time to acquire it,
is simply incorrect coding.

The following change also resolves the problem, because coldboot_done
is already set before the cold hart starts contending for the lock, so
the loop in wait_for_coldboot() can terminate and stop re-acquiring it:

diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index a7fb848..68be58f 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -132,12 +132,12 @@ static void wake_coldboot_harts(struct sbi_scratch *scratch, u32 hartid)
 {
        const struct sbi_platform *plat = sbi_platform_ptr(scratch);

-       /* Acquire coldboot lock */
-       spin_lock(&coldboot_lock);
-
        /* Mark coldboot done */
        coldboot_done = 1;

+       /* Acquire coldboot lock */
+       spin_lock(&coldboot_lock);
+
        /* Send an IPI to all HARTs waiting for coldboot */
        for (int i = 0; i <= sbi_scratch_last_hartid(); i++) {
                if ((i != hartid) &&

Best regards

Heinrich

>>>
>>> Jess
>>>
>>> ^ Please do not make it volatile as a workaround, that is not the right
>>>  way to "fix" such issues.
>>
>> What do you suggest Jessica?
>
> My primary suggestion is above, since this does not look like a
> necessary fix.
>
> There is the separate question of whether this code should be using
> spin locks anyway. One could optimise it with suitable use of proper
> atomics to access coldboot_done, which would mask whatever issue you
> are seeing, but that should not be necessary, and in early boot code
> like this that's not on a hot code path it's better to keep things
> simple like it is currently rather than risk writing unsafe code
> through incorrect use of atomics. I would feel much more comfortable
> leaving the code using spin locks.
>
> Jess
>
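
For completeness, the atomics variant mentioned above could look
roughly like this (a sketch using C11 stdatomic rather than OpenSBI's
own atomic helpers; the coldboot_wait_hmask updates would still need
the lock):

    #include <stdatomic.h>

    static atomic_int coldboot_done;

    /* wait_for_coldboot(), sketch: the flag itself needs no lock */
    while (!atomic_load_explicit(&coldboot_done, memory_order_acquire)) {
            do {
                    wfi();
                    cmip = csr_read(CSR_MIP);
            } while (!(cmip & MIP_MSIP));
    }

    /* wake_coldboot_harts(), sketch */
    atomic_store_explicit(&coldboot_done, 1, memory_order_release);
    /* then take coldboot_lock only to walk coldboot_wait_hmask and
       send the IPIs */
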
>>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
>>>> ---
>>>> lib/sbi/sbi_init.c | 4 ++--
>>>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
>>>> index a7fb848..a0e4f11 100644
>>>> --- a/lib/sbi/sbi_init.c
>>>> +++ b/lib/sbi/sbi_init.c
>>>> @@ -106,14 +106,14 @@ static void wait_for_coldboot(struct sbi_scratch *scratch, u32 hartid)
>>>> 	sbi_hartmask_set_hart(hartid, &coldboot_wait_hmask);
>>>>
>>>> 	/* Wait for coldboot to finish using WFI */
>>>> +	spin_unlock(&coldboot_lock);
>>>> 	while (!coldboot_done) {
>>>> -		spin_unlock(&coldboot_lock);
>>>> 		do {
>>>> 			wfi();
>>>> 			cmip = csr_read(CSR_MIP);
>>>> 		 } while (!(cmip & MIP_MSIP));
>>>> -		spin_lock(&coldboot_lock);
>>>> 	};
>>>> +	spin_lock(&coldboot_lock);
>>>>
>>>> 	/* Unmark current HART as waiting */
>>>> 	sbi_hartmask_clear_hart(hartid, &coldboot_wait_hmask);
>>>> --
>>>> 2.28.0
>>>>
>>>>
>



