[RFC] Improving udelay/ndelay on platforms where that is possible
Marc Gonzalez
marc_gonzalez at sigmadesigns.com
Thu Nov 16 08:26:32 PST 2017
On 16/11/2017 17:08, Nicolas Pitre wrote:
> On Thu, 16 Nov 2017, Marc Gonzalez wrote:
>
>> On 16/11/2017 16:36, Russell King - ARM Linux wrote:
>>> On Thu, Nov 16, 2017 at 04:26:51PM +0100, Marc Gonzalez wrote:
>>>> On 15/11/2017 14:13, Russell King - ARM Linux wrote:
>>>>
>>>>> udelay() needs to offer a consistent interface so that drivers know
>>>>> what to expect no matter what the implementation is. Making one
>>>>> implementation conform to your ideas while leaving the other
>>>>> implementations with other expectations is a recipe for bugs.
>>>>>
>>>>> If you really want to do this, fix the loops_per_jiffy implementation
>>>>> as well so that the consistency is maintained.
>>>>
>>>> Hello Russell,
>>>>
>>>> It seems to me that, when using DFS, there's a serious issue with loop-based
>>>> delays. (IIRC, it was you who pointed this out a few years ago.)
>>>>
>>>> If I'm reading arch/arm/kernel/smp.c correctly, loops_per_jiffy is scaled
>>>> when the frequency changes.
>>>>
>>>> But arch/arm/lib/delay-loop.S starts by loading the current value of
>>>> loops_per_jiffy, computes the number of times to loop, and then loops.
>>>> If the frequency increases when the core is in __loop_delay, the
>>>> delay will be much shorter than requested.
>>>>
>>>> Is this a correct assessment of the situation?
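To make the race concrete, what the assembly in delay-loop.S does boils
down to roughly the following C sketch. Names and the scaling arithmetic
are simplified (the real code uses fixed-point multipliers to avoid
overflow), but the key point is that loops_per_jiffy is sampled exactly
once, before the spin starts:

/* Rough C rendering of the loop-based delay; loops_per_jiffy, HZ and
 * barrier() come from the usual kernel headers. */
static void loop_udelay(unsigned long usecs)
{
	/* Snapshot taken here; cpufreq may rescale the global value
	 * while we are already spinning, but we never see the update. */
	unsigned long lpj = loops_per_jiffy;
	unsigned long loops = usecs * lpj * HZ / 1000000;

	while (loops--)
		barrier();	/* each iteration gets faster after a DVFS ramp-up */
}

Nothing re-reads loops_per_jiffy once the loop has started, so any
rescaling done by the cpufreq notifier comes too late.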
>>>
>>> Absolutely correct, and it's something that people are aware of, and
>>> have already catered for while writing their drivers.
>>
>> In their cpufreq driver?
>> In "real" device drivers that happen to use delays?
>>
>> On my system, the CPU frequency may ramp up from 120 MHz to 1.2 GHz.
>> If the frequency increases tenfold right at the start of __loop_delay,
>> udelay(100) would spin for only 10 microseconds. This is likely to cause
>> issues in any driver using udelay.
>>
>> How does one cater for that?
>
> You make sure your delays are based on a stable hardware timer.
> Most platforms nowadays should have a suitable timer source.
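That is, the delay is measured against a counter whose rate does not
follow the CPU clock. Schematically (the names below are placeholders,
not actual kernel API):

/* Placeholders: a free-running counter at a fixed, known rate. */
static unsigned long counter_freq_hz;
static unsigned long read_fixed_counter(void);

static void timer_udelay(unsigned long usecs)
{
	unsigned long cycles = usecs * (counter_freq_hz / 1000000);
	unsigned long start = read_fixed_counter();

	/* The counter rate never changes, so DVFS cannot shorten the delay. */
	while (read_fixed_counter() - start < cycles)
		cpu_relax();
}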
So you propose fixing loop-based delays by switching to timer-based
delays, is that correct? (That is indeed what I did on my platform.)
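For reference, a minimal sketch of how a platform wires that up on ARM.
register_current_timer_delay() and struct delay_timer are the real hooks
(arch/arm/include/asm/delay.h); the MMIO counter and the 27 MHz rate are
made up for the example:

#include <linux/io.h>
#include <asm/delay.h>

static void __iomem *counter_base;	/* mapped elsewhere (hypothetical) */

static unsigned long read_fixed_counter(void)
{
	return readl_relaxed(counter_base);	/* free-running, fixed-rate counter */
}

static struct delay_timer fixed_delay_timer = {
	.read_current_timer	= read_fixed_counter,
	.freq			= 27000000,	/* assumed fixed 27 MHz, never rescaled */
};

static int __init my_timer_init(void)
{
	/* From here on, udelay() is timer-based instead of loop-based. */
	register_current_timer_delay(&fixed_delay_timer);
	return 0;
}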
Russell stated that there are platforms using loop-based delays with
cpufreq enabled. I'm asking how they manage the brokenness.
Regards.