How to get better precision out of getrusage on the ARM?
Patrick Doyle
wpdster at gmail.com
Wed Dec 30 07:00:46 PST 2015
Continuing on...
I now have a CLOCKSOURCE_OF_DECLARE()'d 10 MHz clock source running
on my ARM processor (Atmel SAMA5D2 Xplained board). It registers
itself through sched_clock_register() to provide a high-resolution
sched clock.
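For context, the registration boils down to something like this (a
simplified sketch -- the names, register layout, and DT compatible
string here are illustrative, not the actual driver):

#include <linux/clocksource.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/sched_clock.h>

static void __iomem *timer_base;

/* read callback handed to sched_clock_register() */
static u64 notrace my_timer_read(void)
{
	return readl_relaxed(timer_base);	/* free-running up-counter */
}

static void __init my_timer_init(struct device_node *np)
{
	timer_base = of_iomap(np, 0);

	/* 32-bit counter ticking at 10 MHz => 100 ns resolution */
	sched_clock_register(my_timer_read, 32, 10000000);

	clocksource_mmio_init(timer_base, np->name, 10000000, 300, 32,
			      clocksource_mmio_readl_up);
}
CLOCKSOURCE_OF_DECLARE(my_timer, "vendor,my-10mhz-timer", my_timer_init);
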
Once I turned on "Full dynticks CPU time accounting"
(CONFIG_VIRT_CPU_ACCOUNTING_GEN), I was able to get better-than-jiffy
resolution from my calls to getrusage(RUSAGE_THREAD, ...). But things
still aren't quite right. I am using getrusage() to provide some
runtime profile information to an existing application (ported to
Linux from a custom RTOS). I have code that
looks like:
tick()
// commented out code that used to do something
tock()
where tick() & tock() are my profile "start" and "stop" points that
call getrusage() to record and accumulate the time spent between calls
to tick() & tock(). Most of the time, I get a delta of 0 between the
two calls, which I expect. But occasionally, I get a delta ranging
between 800us and 1000us, which I don't understand at all. It seems
like my thread is being "charged" for time spent doing something else.
Perhaps an interrupt occurred and its time got charged to my thread;
perhaps a higher-priority thread ran for 1 ms; I don't know (yet).
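For reference, my tick()/tock() pair boils down to something like this
(simplified; the accumulator name is illustrative):

#define _GNU_SOURCE	/* for RUSAGE_THREAD */
#include <sys/resource.h>
#include <sys/time.h>

static struct timeval start;
static long long total_us;	/* accumulated thread CPU time */

static void tick(void)
{
	struct rusage ru;

	getrusage(RUSAGE_THREAD, &ru);
	/* user + system CPU time consumed by this thread so far */
	timeradd(&ru.ru_utime, &ru.ru_stime, &start);
}

static void tock(void)
{
	struct rusage ru;
	struct timeval now, delta;

	getrusage(RUSAGE_THREAD, &ru);
	timeradd(&ru.ru_utime, &ru.ru_stime, &now);
	timersub(&now, &start, &delta);
	total_us += delta.tv_sec * 1000000LL + delta.tv_usec;
}
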
Does anybody have any suggestions as to where I might look, or as to
what kernel CONFIG options might make the most sense for an
application such as this?
--wpd