Guarantee udelay(N) spins at least N microseconds

Russell King - ARM Linux linux at arm.linux.org.uk
Sat Apr 11 00:30:22 PDT 2015


On Fri, Apr 10, 2015 at 11:22:56PM +0200, Mason wrote:
> On 10/04/2015 22:42, Russell King - ARM Linux wrote:
> > On Fri, Apr 10, 2015 at 10:01:35PM +0200, Mason wrote:
> >> There is, however, an important difference between loop-based
> >> delays and timer-based delays; CPU frequencies typically fall
> >> in the 50-5000 MHz range, while timer frequencies typically
> >> span tens of kHz up to hundreds of MHz. For example, 90 kHz
> >> is sometimes provided in multimedia systems (MPEG TS).
> > 
> > Why would you want to use such a slowly clocked counter for something
> > which is supposed to be able to produce delays in the micro-second and
> > potentially the nanosecond range?
> > 
> > get_cycles(), which is what the timer based delay is based upon, is
> > supposed to be a _high resolution counter_, preferably running at
> > the same kind of speeds as the CPU, though with a fixed clock rate.
> > It most definitely is not supposed to be in the kHz range.
> 
> If there's only a single fixed clock in the system, I'd
> use it for sched_clock, clocksource, and timer delay.
> Are there other options?
> 
> It was you who wrote some time ago: "Timers are preferred
> because of the problems with the software delay loop."
> (My system implements DVFS.)
> 
> It seems to me that a 90 kHz timer is still better than
> the jiffy counter, or am I mistaken again?

Given the choice of a 90kHz timer vs using a calibrated software delay
loop, the software delay loop wins.  I never envisioned that someone
would be silly enough to think that a 90kHz timer would somehow be
suitable to replace a software delay loop calibrated against a timer.

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.
