[PATCH 1/4] sched/fair: Remove SIS_AVG_CPU
Mel Gorman
mgorman at techsingularity.net
Tue Dec 8 05:59:00 EST 2020
On Tue, Dec 08, 2020 at 11:07:19AM +0100, Dietmar Eggemann wrote:
> On 07/12/2020 10:15, Mel Gorman wrote:
> > SIS_AVG_CPU was introduced as a means of avoiding a search when the
> > average search cost indicated that the search would likely fail. It
> > was a blunt instrument and disabled by 4c77b18cf8b7 ("sched/fair: Make
> > select_idle_cpu() more aggressive") and later replaced with a proportional
> > search depth by 1ad3aaf3fcd2 ("sched/core: Implement new approach to
> > scale select_idle_cpu()").
> >
> > While there are corner cases where SIS_AVG_CPU is better, it has now been
> > disabled for almost three years. As the intent of SIS_PROP is to reduce
> > the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus
> > on SIS_PROP as a throttling mechanism.
> >
> > Signed-off-by: Mel Gorman <mgorman at techsingularity.net>
> > ---
> > kernel/sched/fair.c | 3 ---
> > kernel/sched/features.h | 1 -
> > 2 files changed, 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 98075f9ea9a8..23934dbac635 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6161,9 +6161,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> > avg_idle = this_rq()->avg_idle / 512;
> > avg_cost = this_sd->avg_scan_cost + 1;
> >
> > - if (sched_feat(SIS_AVG_CPU) && avg_idle < avg_cost)
> > - return -1;
> > -
> > if (sched_feat(SIS_PROP)) {
> > u64 span_avg = sd->span_weight * avg_idle;
> > if (span_avg > 4*avg_cost)
>
> Nitpick:
>
> Since now avg_cost and avg_idle are only used w/ SIS_PROP, they could go
> completely into the SIS_PROP if condition.
>
Yeah, I can do that. In the initial prototype, that happened in a
separate patch that split out SIS_PROP into a helper function and I
never merged it back. It's a trivial change.
Thanks.
--
Mel Gorman
SUSE Labs