[PATCH 04/10] sched/fair: Return an idle cpu if one is found after a failed search for an idle core

Mel Gorman mgorman at techsingularity.net
Thu Dec 3 12:50:34 EST 2020


On Thu, Dec 03, 2020 at 05:35:29PM +0100, Vincent Guittot wrote:
> > index fc48cc99b03d..845bc0cd9158 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6066,6 +6066,7 @@ void __update_idle_core(struct rq *rq)
> >   */
> >  static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
> >  {
> > +       int idle_candidate = -1;
> >         struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> >         int core, cpu;
> >
> > @@ -6084,7 +6085,13 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
> >                         schedstat_inc(this_rq()->sis_scanned);
> >                         if (!available_idle_cpu(cpu)) {
> >                                 idle = false;
> > -                               break;
> > +                               if (idle_candidate != -1)
> > +                                       break;
> 
> 
> If I get your changes correctly, it will now continue to loop on all
> cpus of the smt mask to try to find an idle cpu whereas it was

That was an oversight; the intent is that the SMT search breaks but the
search for an idle core continues, as sketched below. The patch was
taken from a very different series that unified all the select_idle_*
functions into a single function, and I failed to fix it up properly.
The unification series did not generate good results when I tried it 9
months ago and I never finished it off. In the current context, it
would not make sense to attempt a unification again.
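For reference, the intended control flow looks roughly like the
following. This is a sketch only, not the full patch: idle_candidate,
available_idle_cpu() and cpu_smt_mask() come from the quoted hunks,
while the cpumask_test_cpu() check and the cpumask_andnot() pruning
are assumed from the surrounding mainline select_idle_core().

	for_each_cpu_wrap(core, cpus, target) {
		bool idle = true;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			if (!available_idle_cpu(cpu)) {
				idle = false;
				break;	/* stop scanning this core's siblings */
			}

			/* Remember the first idle CPU seen as a fallback */
			if (idle_candidate == -1 &&
			    cpumask_test_cpu(cpu, p->cpus_ptr))
				idle_candidate = cpu;
		}

		/* Every sibling was idle, so the whole core is idle */
		if (idle)
			return core;

		/* Do not rescan this core's siblings on later iterations */
		cpumask_andnot(cpus, cpus, cpu_smt_mask(core));
	}

	/* No idle core found; fall back to an idle CPU, if any */
	return idle_candidate;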

> With the change above you might end up looping over all CPUs of the
> LLC if there is only one idle CPU in the LLC, whereas before we were
> looping over at most one CPU per core. The bottom change makes sense,
> but the above one is in effect replacing select_idle_cpu() completely
> and bypassing SIS_PROP, and we should avoid that IMO
> 

You're right, of course; it was never intended to behave like that.
This fixup restores the early break:

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0a3d338770c4..49b1590e60a9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6084,8 +6084,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
 		for_each_cpu(cpu, cpu_smt_mask(core)) {
 			if (!available_idle_cpu(cpu)) {
 				idle = false;
-				if (idle_candidate != -1)
-					break;
+				break;
 			}
 
 			if (idle_candidate == -1 &&
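With this fixup, the scan of a core again stops at the first busy
sibling, so at most one busy CPU per core is examined, while
idle_candidate still records the first idle CPU that was seen.
select_idle_core() can then return that CPU when no fully idle core
exists, instead of degenerating into a second full scan of the LLC
that would bypass SIS_PROP.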

-- 
Mel Gorman
SUSE Labs


