broken devfreq simple_ondemand for Odroid XU3/4?
Krzysztof Kozlowski
krzk at kernel.org
Wed Jun 24 08:06:51 EDT 2020
On Wed, Jun 24, 2020 at 01:18:42PM +0200, Kamil Konieczny wrote:
> Hi,
>
> On 24.06.2020 12:32, Lukasz Luba wrote:
> > Hi Krzysztof and Willy
> >
> > On 6/23/20 8:11 PM, Krzysztof Kozlowski wrote:
> >> On Tue, Jun 23, 2020 at 09:02:38PM +0200, Krzysztof Kozlowski wrote:
> >>> On Tue, 23 Jun 2020 at 18:47, Willy Wolff <willy.mh.wolff.ml at gmail.com> wrote:
> >>>>
> >>>> Hi everybody,
> >>>>
> >>>> Is DVFS for the memory bus really working on the Odroid XU3/4 board?
> >>>> Using a simple microbenchmark that does only memory accesses, memory DVFS
> >>>> does not seem to be working properly:
> >>>>
> >>>> The microbenchmark does pointer chasing by following indices in an array.
> >>>> The indices are set to follow a random pattern (defeating the prefetcher)
> >>>> and forcing RAM accesses.
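> >>>>
> >>>> The core of such a chaser is roughly the following (a minimal sketch, not
> >>>> the actual benchmark code; the array size and iteration count are only
> >>>> illustrative):
> >>>>
> >>>> #include <stdio.h>
> >>>> #include <stdlib.h>
> >>>>
> >>>> #define N (64u * 1024 * 1024 / sizeof(size_t))   /* well beyond the LLC */
> >>>>
> >>>> /* Sattolo's algorithm: one single random cycle, so the chase visits
> >>>>  * every element and each load depends on the previous one. */
> >>>> static void build_cycle(size_t *idx, size_t n)
> >>>> {
> >>>>         for (size_t i = 0; i < n; i++)
> >>>>                 idx[i] = i;
> >>>>         for (size_t i = n - 1; i > 0; i--) {
> >>>>                 size_t j = rand() % i;
> >>>>                 size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
> >>>>         }
> >>>> }
> >>>>
> >>>> int main(void)
> >>>> {
> >>>>         size_t *idx = malloc(N * sizeof(*idx));
> >>>>         size_t i = 0;
> >>>>
> >>>>         if (!idx)
> >>>>                 return 1;
> >>>>         build_cycle(idx, N);
> >>>>         for (size_t k = 0; k < 8 * N; k++)
> >>>>                 i = idx[i];             /* serialized, cache-missing loads */
> >>>>         printf("%zu\n", i);             /* keep the chase live */
> >>>>         free(idx);
> >>>>         return 0;
> >>>> }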
> >>>>
> >>>> git clone https://github.com/wwilly/benchmark.git \
> >>>> && cd benchmark \
> >>>> && source env.sh \
> >>>> && ./bench_build.sh \
> >>>> && bash source/scripts/test_dvfs_mem.sh
> >>>>
> >>>> Python 3, cmake and sudo rights are required.
> >>>>
> >>>> Results:
> >>>> CPU DVFS with the performance governor,
> >>>> mem_gov = simple_ondemand at 165000000 Hz when idle; it should be bumped up
> >>>> when the benchmark is running:
> >>>> - on the LITTLE cluster it takes 4.74308 s to run (683.004 c per memory access),
> >>>> - on the big cluster it takes 4.76556 s to run (980.343 c per memory access).
> >>>>
> >>>> While forcing the memory bus DVFS to use the performance governor,
> >>>> mem_gov = performance at 825000000 Hz when idle:
> >>>> - on the LITTLE cluster it takes 1.1451 s to run (164.894 c per memory access),
> >>>> - on the big cluster it takes 1.18448 s to run (243.664 c per memory access).
> >>>>
> >>>> The kernel used is the latest stable, 5.7.5, with the default exynos_defconfig.
> >>>
> >>> Thanks for the report. A few thoughts:
> >>> 1. What is trans_stat saying? Besides the DMC driver you can also check
> >>> all the other devfreq devices (e.g. wcore) - maybe the devfreq events
> >>> (nocp) are not properly assigned?
> >>> 2. Try running the measurement for ~1 minute or longer. The counters
> >>> might have some delay (which would probably require fixing, but the
> >>> point is to narrow down the problem).
> >>> 3. What do you understand by "mem_gov"? Which device is it?
> >>
> >> +Cc Lukasz who was working on this.
> >
> > Thanks Krzysztof for adding me here.
> >
> >>
> >> I just ran memtester and ondemand more or less works (at least it ramps
> >> up):
> >>
> >> Before:
> >> /sys/class/devfreq/10c20000.memory-controller$ cat trans_stat
> >>      From  :   To
> >>            : 165000000 206000000 275000000 413000000 543000000 633000000 728000000 825000000  time(ms)
> >> * 165000000:         0         0         0         0         0         0         0         0   1795950
> >>   206000000:         1         0         0         0         0         0         0         0      4770
> >>   275000000:         0         1         0         0         0         0         0         0     15540
> >>   413000000:         0         0         1         0         0         0         0         0     20780
> >>   543000000:         0         0         0         1         0         0         0         1     10760
> >>   633000000:         0         0         0         0         2         0         0         0     10310
> >>   728000000:         0         0         0         0         0         0         0         0         0
> >>   825000000:         0         0         0         0         0         2         0         0     25920
> >> Total transition : 9
> >>
> >>
> >> $ sudo memtester 1G
> >>
> >> During memtester:
> >> /sys/class/devfreq/10c20000.memory-controller$ cat trans_stat
> >>      From  :   To
> >>            : 165000000 206000000 275000000 413000000 543000000 633000000 728000000 825000000  time(ms)
> >>   165000000:         0         0         0         0         0         0         0         1   1801490
> >>   206000000:         1         0         0         0         0         0         0         0      4770
> >>   275000000:         0         1         0         0         0         0         0         0     15540
> >>   413000000:         0         0         1         0         0         0         0         0     20780
> >>   543000000:         0         0         0         1         0         0         0         2     11090
> >>   633000000:         0         0         0         0         3         0         0         0     17210
> >>   728000000:         0         0         0         0         0         0         0         0         0
> >> * 825000000:         0         0         0         0         0         3         0         0    169020
> >> Total transition : 13
> >>
> >> However, after killing memtester it stays at 633 MHz for a very long time
> >> and does not slow down. This is indeed weird...
> >
> > I had issues with the devfreq governor not being called by the devfreq
> > workqueue - the old DELAYED vs DEFERRED work discussions and my patches
> > for it [1]. If the CPU which scheduled the next work goes idle, the
> > devfreq workqueue will not be kicked, so the devfreq governor won't
> > check the DMC status and won't decide to decrease the frequency based
> > on a low busy_time.
> > The same applies to raising the frequency. Both are done by the
> > governor, but the workqueue must be scheduled periodically.
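> >
> > For reference, the difference looks roughly like this (a simplified
> > sketch, not the actual devfreq code; the names and the 50 ms period are
> > made up):
> >
> > #include <linux/workqueue.h>
> > #include <linux/jiffies.h>
> >
> > static void poll_fn(struct work_struct *work);
> >
> > /* Deferrable: the timer does not wake an idle CPU, so the poll can be
> >  * postponed for a long time while the scheduling CPU stays idle. */
> > static DECLARE_DEFERRABLE_WORK(dmc_poll_deferrable, poll_fn);
> >
> > /* Plain delayed work: the timer fires even on an idle CPU, so the
> >  * governor always gets its periodic chance to re-evaluate the OPP. */
> > static DECLARE_DELAYED_WORK(dmc_poll_delayed, poll_fn);
> >
> > static void poll_fn(struct work_struct *work)
> > {
> >         /* ...read the counters, let the governor pick a frequency... */
> >         schedule_delayed_work(to_delayed_work(work), msecs_to_jiffies(50));
> > }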
> >
> > I couldn't do much about this back then. I gave an example of how this
> > causes issues with the DMC [2]. There is also a description of your
> > situation of staying at 633 MHz for a long time:
> > 'When it is missing the opportunity to change the frequency, it can
> > either harm the performance or the power consumption, depending on the
> > frequency the device is stuck on.'
> >
> > The patches were not accepted because they would cause CPU wake-ups
> > from idle, which increases the energy consumption. I know that there
> > were some other attempts, but I don't know their status.
> >
> > I also had this devfreq workqueue issue when I was working on thermal
> > cooling for devfreq. The device status was not updated because the
> > devfreq workqueue didn't check the device [3].
> >
> > Let me investigate if that is the case.
> >
> > Regards,
> > Lukasz
> >
> > [1] https://lkml.org/lkml/2019/2/11/1146
> > [2] https://lkml.org/lkml/2019/2/12/383
> > [3] https://lwn.net/ml/linux-kernel/20200511111912.3001-11-lukasz.luba@arm.com/
>
> and here was another attempt to fix the wq: "PM / devfreq: add possibility for delayed work"
>
> https://lkml.org/lkml/2019/12/9/486
My case was clearly showing wrong behavior. The system was idle but not
sleeping - the network was working, an SSH connection was ongoing. Therefore
at least one CPU was not idle and could have adjusted the devfreq/DMC... but
this did not happen. The system stayed at the 633 MHz OPP for about a minute.

Not waking up idle processors - ok... so why not use a power-efficient
workqueue? It is exactly for this purpose - waking up from time to time on
whatever CPU to do the necessary job.
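
Something along these lines (just a sketch, not a tested patch; it reuses the
existing devfreq fields): keep the monitor work non-deferrable so its timer
still fires on an idle CPU, and queue it on the power-efficient system
workqueue so it may run on whichever CPU is convenient:

        /* Sketch only - roughly what the change could look like. */
        INIT_DELAYED_WORK(&devfreq->work, devfreq_monitor);
        queue_delayed_work(system_power_efficient_wq, &devfreq->work,
                           msecs_to_jiffies(devfreq->profile->polling_ms));

With workqueue.power_efficient enabled, system_power_efficient_wq behaves as
an unbound workqueue, so the work is not pinned to the (possibly idle) CPU
that armed it.
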
Best regards,
Krzysztof