sched/fair: Fix the update of blocked load when newly idle
Gitweb: http://git.infradead.org/?p=mtd-2.6.git;a=commit;h=457be908c83637ee10bda085a23dc05afa3b14a0
Commit: 457be908c83637ee10bda085a23dc05afa3b14a0
Parent: 0b26351b910fb8fe6a056f8a1bbccabe50c0e19f
Author: Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Thu Apr 26 12:19:32 2018 +0200
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu May 3 07:38:03 2018 +0200
sched/fair: Fix the update of blocked load when newly idle
With commit:
31e77c93e432 ("sched/fair: Update blocked load when newly idle")
... we release the rq->lock when updating blocked load of idle CPUs.
This opens a time window during which another CPU can add a task to this
CPU's cfs_rq.
The check in idle_balance() for a newly added task is not on the path common to both
exits, so the early exit taken after updating blocked load skips it.
Move the 'out' label above the check so that it also runs on that path.
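To make the race and the fix easier to see, here is a heavily simplified sketch of the
resulting shape of idle_balance(). The helper names (worth_balancing(),
update_blocked_load_of_idle_cpus(), load_balance_across_domains()) are placeholders for
the corresponding blocks of the real function, not actual kernel symbols:

    static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
    {
            int pulled_task = 0;

            /*
             * Early exit: a full balance is not worth it, only the blocked
             * load of idle CPUs is updated.  Doing so drops rq->lock, so
             * another CPU can enqueue a task on this_rq in the meantime.
             */
            if (!worth_balancing(this_rq)) {
                    update_blocked_load_of_idle_cpus(this_rq);
                    goto out;
            }

            /*
             * Common case: walk the sched domains and try to pull a task.
             * This path releases rq->lock as well.
             */
            pulled_task = load_balance_across_domains(this_rq);

    out:
            /*
             * rq->lock was released on both paths above, so a task may have
             * been enqueued meanwhile.  Since we are not going idle, report
             * that a task was pulled; before the fix, the early exit jumped
             * past this check.
             */
            if (this_rq->cfs.h_nr_running && !pulled_task)
                    pulled_task = 1;

            return pulled_task;
    }

With the 'out' label above the check, the blocked-load-only path can no longer return 0
while a freshly enqueued task is sitting on the runqueue.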
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 31e77c93e432 ("sched/fair: Update blocked load when newly idle")
Link: http://lkml.kernel.org/r/20180426103133.GA6953@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 54dc31e7ab9b..e3002e5ada31 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9847,6 +9847,7 @@ static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
 	if (curr_cost > this_rq->max_idle_balance_cost)
 		this_rq->max_idle_balance_cost = curr_cost;
 
+out:
 	/*
 	 * While browsing the domains, we released the rq lock, a task could
 	 * have been enqueued in the meantime. Since we're not going idle,
@@ -9855,7 +9856,6 @@ static int idle_balance(struct rq *this_rq, struct rq_flags *rf)
 	if (this_rq->cfs.h_nr_running && !pulled_task)
 		pulled_task = 1;
 
-out:
 	/* Move the next balance forward */
 	if (time_after(this_rq->next_balance, next_balance))
 		this_rq->next_balance = next_balance;