[RFC PATCH 0/4] Reduce worst-case scanning of runqueues in select_idle_sibling
vincent.guittot at linaro.org
Mon Dec 7 10:04:41 EST 2020
On Mon, 7 Dec 2020 at 10:15, Mel Gorman <mgorman at techsingularity.net> wrote:
> This is a minimal series to reduce the amount of runqueue scanning in
> select_idle_sibling in the worst case.
> Patch 1 removes SIS_AVG_CPU because it's unused.
> Patch 2 improves the hit rate of p->recent_used_cpu to reduce the amount
> of scanning. It should be relatively uncontroversial.
> Patches 3-4 scan the runqueues in a single pass for select_idle_core()
> and select_idle_cpu() so runqueues are not scanned twice. It's
> a tradeoff because it benefits deep scans but introduces overhead
> for shallow scans.
> Even if patches 3-4 are rejected to allow more time for Aubrey's idle cpu mask
Patch 3 looks fine and doesn't collide with Aubrey's work. But I don't
like patch 4, which manipulates different cpumasks, including
load_balance_mask, outside of load balancing. I'd prefer to wait for v6
of Aubrey's patchset, which should fix the problem of possibly scanning
busy cpus twice in select_idle_core() and select_idle_cpu().
> approach to stand on its own, patches 1-2 should be fine. The main decision
> with patch 4 is whether select_idle_core() should do a full scan when searching
> for an idle core, whether it should be throttled in some other fashion, or
> whether it should just be left alone.