[PATCH 1/1] sched/deadline: Fix fair_server runtime calculation formula

Peter Zijlstra peterz at infradead.org
Tue Jun 17 01:55:58 PDT 2025


On Sat, Jun 14, 2025 at 10:04:55AM +0800, Kuyo Chang wrote:
> From: kuyo chang <kuyo.chang at mediatek.com>
> 
> [Symptom]
> The calculation formula for fair_server runtime is based on
> frequency/CPU scale-invariance.
> This causes excessive RT latency (the runtime is expected to be
> absolute wall-clock time).
> 
> [Analysis]
> Consider the following case under a Big.LITTLE architecture:
> 
> Assume the runtime is 50,000,000 ns, and FIE/CIE as below:
> FIE: 100
> CIE: 50
> First, by FIE, the runtime is scaled to 50,000,000 * 100 >> 10 = 4,882,812.
> Then, by CIE, it is further scaled to 4,882,812 * 50 >> 10 = 238,418.

What's this FIE/CIE stuff? Is that some ARM lingo?

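Whatever the acronyms stand for, the two steps quoted above look like the
usual frequency/capacity scaling done by dl_scaled_delta_exec(). A rough
userspace sketch reproducing that arithmetic (assuming cap_scale() is
(v * s) >> SCHED_CAPACITY_SHIFT, with SCHED_CAPACITY_SHIFT == 10):

  #include <stdio.h>
  #include <stdint.h>

  #define SCHED_CAPACITY_SHIFT	10
  /* same shape as the kernel's cap_scale() helper */
  #define cap_scale(v, s)	(((v) * (s)) >> SCHED_CAPACITY_SHIFT)

  int main(void)
  {
  	int64_t delta_exec = 50000000;	/* 50ms of actual runtime */
  	int64_t scale_freq = 100;	/* freq-invariance factor from the example */
  	int64_t scale_cpu  = 50;	/* cpu-capacity factor from the example */

  	int64_t scaled = cap_scale(delta_exec, scale_freq);	/* 4,882,812 */
  	scaled = cap_scale(scaled, scale_cpu);			/* 238,418 */

  	printf("scaled runtime: %lld ns\n", (long long)scaled);
  	return 0;
  }

IOW the server is charged ~238us for 50ms of actual runtime, which is how
it ends up running far beyond its configured budget.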

> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index ad45a8fea245..8bfa846cf0dc 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1504,7 +1504,10 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
>  	if (dl_entity_is_special(dl_se))
>  		return;
>  
> -	scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec);
> +	if (dl_se == &rq->fair_server)
> +		scaled_delta_exec = delta_exec;
> +	else
> +		scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec);

Juri, the point is a bit moot atm, but is this something specific to the
fair_server in particular, or to all servers?

Because if this is something all servers require, then the above is of
course wrong.
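
If it does turn out that all servers want wall-clock accounting, the check
should presumably be on the dl_server() helper rather than on the
fair_server pointer; a completely untested sketch:

	if (dl_server(dl_se))
		scaled_delta_exec = delta_exec;
	else
		scaled_delta_exec = dl_scaled_delta_exec(rq, dl_se, delta_exec);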


