Subject: [PATCH] sched: Fix rq->nr_iowait ordering
From: Mel Gorman <mgorman at techsingularity.net>
Date: Tue Nov 17 06:43:18 EST 2020
On Tue, Nov 17, 2020 at 10:38:29AM +0100, Peter Zijlstra wrote:
> Subject: sched: Fix rq->nr_iowait ordering
> From: Peter Zijlstra <peterz at infradead.org>
> Date: Thu, 24 Sep 2020 13:50:42 +0200
>
>    schedule()                       ttwu()
>      deactivate_task();               if (p->on_rq && ...) // false
>                                         atomic_dec(&task_rq(p)->nr_iowait);
>      if (prev->in_iowait)
>        atomic_inc(&rq->nr_iowait);
>
> Allows nr_iowait to be decremented before it gets incremented,
> resulting in more dodgy IO-wait numbers than usual.
>
> Note that because we can now do ttwu_queue_wakelist() before
> p->on_cpu==0, we lose the natural ordering and have to further delay
> the decrement.
>
> Fixes: Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
> Reported-by: Tejun Heo <tj at kernel.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
s/Fixes: Fixes:/Fixes:/
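On the "further delay the decrement" part: with ttwu_queue_wakelist()
able to run before p->on_cpu reaches 0, the decrement can no longer
piggyback on that ordering, so it has to move to where the wakeup
actually activates the task, under the rq lock. Something of this
shape, if I read the diff right (abbreviated sketch, not the literal
hunk):

	/*
	 * Sketch only: the nr_iowait decrement is delayed until the
	 * woken task is activated on the target rq, with the rq lock
	 * held, so it can no longer overtake schedule()'s increment
	 * on the other side.
	 */
	static void
	ttwu_do_activate(struct rq *rq, struct task_struct *p,
			 int wake_flags, struct rq_flags *rf)
	{
		lockdep_assert_held(&rq->lock);

		if (p->in_iowait) {
			delayacct_blkio_end(p);
			atomic_dec(&task_rq(p)->nr_iowait);
		}

		activate_task(rq, p, ENQUEUE_WAKEUP | ENQUEUE_NOCLOCK);
		ttwu_do_wakeup(rq, p, wake_flags, rf);
	}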
Ok, very minor hazard that the same logic is now duplicated and someone
might try to "fix" that, but git blame should help. Otherwise it makes
sense; I've received more than one "bug" report complaining that a
number was larger than expected even though no other problem was
present, so
Acked-by: Mel Gorman <mgorman at techsingularity.net>
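
As an aside, for anyone puzzled why a missed dec/inc ordering makes the
number *larger* rather than smaller: the per-rq counter transiently
underflows to -1, and since nr_iowait() sums the per-rq counts into an
unsigned long, a reader sampling inside the window sees an absurdly
huge value rather than an off-by-one. A userspace toy shows the
reporting artifact (illustrative only, not kernel code):

	/* toy.c: a transiently negative counter read as unsigned.
	 * Build: cc -std=c11 toy.c -o toy && ./toy
	 */
	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int nr_iowait;	/* per-rq counter, starts at 0 */

	int main(void)
	{
		/* ttwu() side wins the race and decrements first. */
		atomic_fetch_sub(&nr_iowait, 1);	/* counter is now -1 */

		/* A sampler sums per-rq counters into an unsigned
		 * total, as the kernel's nr_iowait() does: */
		unsigned long sum = 0;
		sum += (unsigned long)atomic_load(&nr_iowait);
		printf("iowait sampled mid-race: %lu\n", sum);	/* huge */

		/* schedule() side finally increments, restoring 0. */
		atomic_fetch_add(&nr_iowait, 1);
		return 0;
	}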
--
Mel Gorman
SUSE Labs