[RFC PATCH 2/8] Documentation: arm: define DT cpu capacity bindings
Vincent Guittot
vincent.guittot at linaro.org
Tue Dec 15 09:47:20 PST 2015
On 15 December 2015 at 18:15, Mark Rutland <mark.rutland at arm.com> wrote:
> On Tue, Dec 15, 2015 at 05:59:34PM +0100, Vincent Guittot wrote:
>> On 15 December 2015 at 17:41, Mark Rutland <mark.rutland at arm.com> wrote:
>> > On Tue, Dec 15, 2015 at 04:23:18PM +0000, Catalin Marinas wrote:
>> >> On Tue, Dec 15, 2015 at 03:57:37PM +0000, Mark Rutland wrote:
>> >> > On Tue, Dec 15, 2015 at 03:46:51PM +0000, Juri Lelli wrote:
>> >> > > On 15/12/15 15:32, Mark Rutland wrote:
>> >> > > > On Tue, Dec 15, 2015 at 03:08:13PM +0000, Mark Brown wrote:
>> >> > > > > My expectation is that we just need good enough, not perfect, and that
>> >> > > > > seems to match what Juri is saying about the expectation that most of
>> >> > > > > the fine tuning is done via other knobs.
>> >> > > >
>> >> > My expectation is that if a ballpark figure is good enough, it should be
>> >> > possible to implement something trivial like a bogomips / loops_per_jiffy
>> >> > calculation.
>> >> > >
>> >> > > I didn't really follow that, so I might be wrong here, but hasn't
>> >> > > there already been a discussion about how we want to stop exposing
>> >> > > bogomips info, or relying on it for anything but in-kernel delay loops?
>> >> >
>> >> > I meant that we could have a benchmark of that level of complexity,
>> >> > rather than those specific values.
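FWIW, the level of complexity being talked about here really is small. A
rough, self-contained sketch of a loops_per_jiffy-style calibration, done
from userspace and only for illustration (it would have to be run pinned
to each CPU in turn, and none of this is a proposed interface):

/*
 * Rough illustration only: a trivial, bogomips-style calibration loop
 * measured from userspace. Run pinned to each CPU in turn, it gives a
 * ballpark per-CPU figure; this is not a proposed kernel interface.
 */
#include <stdio.h>
#include <time.h>

static void __attribute__((noinline)) delay_loop(unsigned long loops)
{
	while (loops--)
		__asm__ volatile("" ::: "memory");	/* keep the loop from being optimised out */
}

int main(void)
{
	struct timespec start, end;
	unsigned long loops = 100000000UL;
	double secs;

	clock_gettime(CLOCK_MONOTONIC, &start);
	delay_loop(loops);
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("~%.0f loop iterations per second\n", loops / secs);

	return 0;
}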
>> >>
>> >> Or we could simply let user space use whatever benchmarks or hard-coded
>> >> values it wants and set the capacity via sysfs (during boot). By
>> >> default, the kernel would assume all CPUs equal.
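To make that concrete, a boot-time helper would not need to be much more
than the sketch below. The sysfs attribute path used here is purely
hypothetical (no such attribute exists today); it is only a guess at what
the interface being described could look like, with illustrative values.
The kernel's only job in that model would be to expose the attribute and
default every CPU to the same value.

/*
 * Hypothetical sketch of a boot-time userspace helper. The sysfs
 * attribute path below does not exist; it is only a guess at what
 * the interface described above could look like.
 */
#include <stdio.h>

static int set_capacity(int cpu, unsigned int capacity)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/cpu_capacity", cpu);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%u\n", capacity);
	return fclose(f);
}

int main(void)
{
	set_capacity(0, 1024);	/* e.g. big core at full scale */
	set_capacity(1, 512);	/* e.g. LITTLE core at half scale */
	return 0;
}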
>> >
>> > I assume that a userspace override would be available regardless of
>> > whatever mechanism the kernel uses to determine relative
>> > performance/efficiency.
>>
>> Don't you think that if we give userspace complete latitude to set
>> whatever it wants, it will be used to abuse the kernel (and the
>> scheduler in particular), and that this will end in a real mess when
>> trying to understand why a task is not placed where it should be?
>
> I'm not sure I follow what you mean by "abuse" here. Userspace currently
> can force the scheduler to make sub-optimal decisions in a number of
> ways, e.g.
>
> * Hot-unplugging the preferred CPUs
> * Changing a task's affinity mask
> * Setting the nice value of a task
> * Using rlimits and/or cgroups
> * Using a cpufreq governor
> * Fork-bombing
All of these parameters have a meaning (except the last one). By
abusing, I mean setting the capacity of the most powerful CPU to 1 for
no good reason other than to trick the scheduler into not putting as
many tasks on it, just because the currently running use case is more
efficient if the big core is not used.
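For reference, most of the knobs in the list above are plain, documented
syscalls; a minimal sketch of two of them (affinity and nice value), with
purely illustrative values:

/*
 * Minimal sketch: two of the existing knobs listed above, i.e. task
 * affinity and nice value. Values are illustrative only.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	cpu_set_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);			/* restrict this task to CPU0 */
	if (sched_setaffinity(0, sizeof(mask), &mask))
		perror("sched_setaffinity");

	if (setpriority(PRIO_PROCESS, 0, 10))	/* lower this task's priority */
		perror("setpriority");

	return 0;
}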
>
> Practically all of these are privileged operations. I would envisage the
> userspace interface for "capacity" management to be similar.
>
>> We can probably provide a debug mode to help SoC manufacturers
>> define their capacity values, but IMHO we should not allow complete
>> latitude in normal operation.
>
> In normal operation userspace wouldn't mess with this, as with most of
> the cases above. Userspace can already shoot itself in the foot.
>
>> In normal operation we need to provide some methods to tweak the value
>> to reflect memory-bound work, integer-calculation work, or other kinds
>> of work currently running on the CPU, but no more than that.
>
> You can already do that with the mechanisms above, to some extent. I'm
> not sure I follow.
>
> Mark.