[RFC PATCH 2/8] Documentation: arm: define DT cpu capacity bindings

Mark Rutland mark.rutland at arm.com
Tue Dec 15 09:28:37 PST 2015


On Tue, Dec 15, 2015 at 05:17:13PM +0000, Mark Brown wrote:
> On Tue, Dec 15, 2015 at 03:32:19PM +0000, Mark Rutland wrote:
> > On Tue, Dec 15, 2015 at 03:08:13PM +0000, Mark Brown wrote:
> > > On Tue, Dec 15, 2015 at 02:01:36PM +0000, Mark Rutland wrote:
> 
> > > > I really don't want to see a table of magic numbers in the kernel.
> 
> > > Right, there's pitfalls there too although not being part of an ABI
> > > does make them more manageable.  
> 
> > I think that people are very likely to treat them exactly like an ABI,
> > w.r.t. any regressions in performance that result from their addition,
> > modification, or removal. That becomes really horrible when new CPUs
> > appear.
> 
> Obviously people are going to get upset if we introduce performance
> regressions - but that's true always, we can also introduce problems
> with numbers people have put in DT.  It seems like it'd be harder to
> manage regressions due to externally provided magic numbers since
> there's inherently less information there.

It's certainly still possible to have regressions in that case. Those
regressions would be due to code changes in the kernel, given that the
DT didn't change.

I'm not sure I follow w.r.t. "inherently less information", unless you
mean trying to debug without access to that DTB?

> > > One thing it's probably helpful to establish here is how much the
> > > specific numbers are going to matter in the grand scheme of things.  If
> > > the specific numbers *are* super important then nobody is going to want
> > > to touch them as they'll be prone to getting tweaked.  If instead the
> > > numbers just need to be ballpark accurate so the scheduler starts off in
> > > roughly the right place and the specific numbers don't matter it's a lot
> > > easier and having a table in the kernel until we think of something
> > > better (if that ever happens) gets a lot easier.
> 
> > I agree that we first need to figure out the importance of these
> > numbers. I disagree that our first step should be to add a table.
> 
> My point there is that if we're not that concerned about the specific
> number something in kernel is safer.

I don't entirely disagree there. I think an in-kernel benchmark is
likely safer.
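
To be concrete about the kind of thing I mean, here's a sketch in
userspace, purely for illustration (the real thing would be calibrated
in the kernel at boot, much like loops_per_jiffy): time a fixed busy
loop on each CPU and normalise so the fastest CPU gets 1024, in the
spirit of SCHED_CAPACITY_SCALE. The loop body, iteration count, and
normalisation constant are placeholders rather than a concrete
proposal.

/*
 * Userspace sketch only: times a fixed busy loop pinned to each online
 * CPU and derives relative capacities, normalised so the fastest CPU
 * gets 1024. Not proposed kernel code.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

#define ITERS	(1UL << 24)

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Trivial integer workload; volatile stops the compiler folding it away. */
static void spin(unsigned long iters)
{
	volatile uint64_t acc = 0;
	unsigned long i;

	for (i = 0; i < iters; i++)
		acc += i;
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	uint64_t t[ncpus], tmin = UINT64_MAX;
	long cpu;

	for (cpu = 0; cpu < ncpus; cpu++) {
		cpu_set_t set;
		uint64_t start;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		if (sched_setaffinity(0, sizeof(set), &set))
			return 1;

		start = now_ns();
		spin(ITERS);
		t[cpu] = now_ns() - start;
		if (t[cpu] < tmin)
			tmin = t[cpu];
	}

	/* Faster CPUs take less time, so they get a larger capacity. */
	for (cpu = 0; cpu < ncpus; cpu++)
		printf("cpu%ld: capacity %lu\n", cpu,
		       (unsigned long)(tmin * 1024 / t[cpu]));

	return 0;
}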

> > > My expectation is that we just need good enough, not perfect, and that
> > > seems to match what Juri is saying about the expectation that most of
> > > the fine tuning is done via other knobs.
> 
> > My expectation is that if a ballpark figure is good enough, it should be
> > possible to implement something trivial like bogomips / loop_per_jiffy
> > calculation.
> 
> That does have the issue that we need to scale with regard to the
> frequency the benchmark gets run at.  That's not an insurmountable
> obstacle but it's not completely trivial either.

If we change clock frequency, then regardless of where the information
comes from we need to perform scaling, no?
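
To be clear, the scaling itself is the same arithmetic wherever the
numbers come from; something along these lines, where the helper and
parameter names are invented for this mail rather than being an
existing kernel interface:

/*
 * Illustrative only: scale a CPU's maximum capacity by its current
 * operating frequency.
 */
static unsigned long scale_capacity(unsigned long cap_max,
				    unsigned long freq_cur_khz,
				    unsigned long freq_max_khz)
{
	return cap_max * freq_cur_khz / freq_max_khz;
}

e.g. a CPU with a maximum capacity of 1024 running at 600MHz out of a
1.2GHz maximum would be reported as 512.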

One nice thing about deriving the numbers from a benchmark is that
when the frequency is fixed but the kernel cannot query it, the
numbers will still be representative.

Mark.
