[PATCH] sched: support dynamiQ cluster

Joel Fernandes (Google) joel.opensrc at gmail.com
Fri Apr 13 13:12:50 PDT 2018


On Fri, Apr 6, 2018 at 5:58 AM, Morten Rasmussen
<morten.rasmussen at arm.com> wrote:
> On Thu, Apr 05, 2018 at 06:22:48PM +0200, Vincent Guittot wrote:
>> Hi Morten,
>>
>> On 5 April 2018 at 17:46, Morten Rasmussen <morten.rasmussen at arm.com> wrote:
>> > On Wed, Apr 04, 2018 at 03:43:17PM +0200, Vincent Guittot wrote:
>> >> On 4 April 2018 at 12:44, Valentin Schneider <valentin.schneider at arm.com> wrote:
>> >> > Hi,
>> >> >
>> >> > On 03/04/18 13:17, Vincent Guittot wrote:
>> >> >> Hi Valentin,
>> >> >>
>> >> > [...]
>> >> >>>
>> >> >>> I believe ASYM_PACKING behaves better here because the workload is only
>> >> >>> sysbench threads. As stated above, since task utilization is disregarded, I
>> >> >>
>> >> >> It behaves better because it doesn't wait for the task's utilization
>> >> >> to reach a level before assuming the task needs high compute capacity.
>> >> >> The utilization gives an idea of the running time of the task not the
>> >> >> performance level that is needed
>> >> >>
>> >> >
>> >> > [
>> >> > That's my point actually. ASYM_PACKING disregards utilization and moves those
>> >> > threads to the big cores ASAP, which is good here because it's just sysbench
>> >> > threads.
>> >> >
>> >> > What I meant was that if the task composition changes, IOW we mix "small"
>> >> > tasks (e.g. periodic stuff) and "big" tasks (performance-sensitive stuff like
>> >> > sysbench threads), we shouldn't assume all of those require to run on a big
>> >> > CPU. The thing is, ASYM_PACKING can't make the difference between those, so
>> > [Morten]
>> >>
>> >> That's the 1st point where I tend to disagree: why big cores are only
>> >> for long running task and periodic stuff can't need to run on big
>> >> cores to get max compute capacity ?
>> >> You make the assumption that only long running tasks need high compute
>> >> capacity. This patch wants to always provide max compute capacity to
>> >> the system and not only long running task
>> >
>> > There is no way we can tell if a periodic or short-running tasks
>> > requires the compute capacity of a big core or not based on utilization
>> > alone. The utilization can only tell us if a task could potentially use
>> > more compute capacity, i.e. the utilization approaches the compute
>> > capacity of its current cpu.
>> >
>> > How we handle low utilization tasks comes down to how we define
>> > "performance" and if we care about the cost of "performance" (e.g.
>> > energy consumption).
>> >
>> > Placing a low utilization task on a little cpu should always be fine
>> > from _throughput_ point of view. As long as the cpu has spare cycles it
>>
>> [Vincent]
>> I disagree, throughput is not only a matter of spare cycle it's also a
>> matter of how fast you compute the work like with IO activity as an
>> example
>
> [Morten]
> From a cpu centric point of view it is, but I agree that from a
> application/user point of view completion time might impact throughput
> too. For example of if your throughput depends on how fast you can
> offload work to some peripheral device (GPU for example).
>
> However, as I said in the beginning we don't know what the task does.

[Joel]
Just wanted to say about Vincent point of IO loads throughput -
remembering from when I was playing with the iowait boost stuff, that
- say you have a little task that does some IO and blocks and does so
periodically. In the scenario the task will run for little time and is
a little task by way of looking at utilization. However, if we were to
run it on the BIG CPUs, the overall throughput of the I/O activity
would be higher.

For this case, it seems its impossible to specify the "default"
behavior correctly. Like, do we care about performance or energy more?
This seems more like a policy-decision from userspace and not
something the scheduler should necessarily have to decide. Like if I/O
activity is background and not affecting the user experience.

thanks,

- Joel



More information about the linux-arm-kernel mailing list