[PATCH 1/1] [PATCH v2] sched/pelt: Refine the enqueue_load_avg calculation method

Dietmar Eggemann dietmar.eggemann at arm.com
Thu Apr 14 02:02:36 PDT 2022


On 14/04/2022 03:59, Kuyo Chang wrote:
> From: kuyo chang <kuyo.chang at mediatek.com>

[...]

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299d67ab..159274482c4e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3829,10 +3829,12 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  
>  	se->avg.runnable_sum = se->avg.runnable_avg * divider;
>  
> -	se->avg.load_sum = divider;
> -	if (se_weight(se)) {
> +	se->avg.load_sum = se->avg.load_avg * divider;
> +	if (se_weight(se) < se->avg.load_sum) {
>  		se->avg.load_sum =
> -			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
> +			div_u64(se->avg.load_sum, se_weight(se));

Seems that this will fit on one line now. No braces needed then.


> +	} else {
> +		se->avg.load_sum = 1;
>  	}
>  
>  	enqueue_load_avg(cfs_rq, se);
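With the braces dropped as suggested above, the patched hunk could read roughly as follows (a sketch only, reusing the `divider`, `se_weight()` and `div_u64()` helpers already visible in the diff):

```
	se->avg.load_sum = se->avg.load_avg * divider;
	if (se_weight(se) < se->avg.load_sum)
		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
	else
		se->avg.load_sum = 1;
```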

Looks like taskgroups are not affected since they always come online
with cpu.shares/weight = 1024 (cgroup v1):

cpu_cgroup_css_online() -> online_fair_sched_group() ->
attach_entity_cfs_rq() -> attach_entity_load_avg()

And reweight_entity() does not have this issue.

Tested with `qemu-system-x86_64 ... cores=64 ... -enable-kvm` and
weight=88761 for nice=0 tasks, plus forcing se->avg.load_avg = 1 before
the div_u64() in attach_entity_load_avg().

Tested-by: Dietmar Eggemann <dietmar.eggemann at arm.com>
