[PATCH v7 16/22] sched: Defer wakeup in ttwu() for unschedulable frozen tasks

Peter Zijlstra <peterz at infradead.org>
Thu May 27 07:10:16 PDT 2021


On Tue, May 25, 2021 at 04:14:26PM +0100, Will Deacon wrote:
> diff --git a/kernel/freezer.c b/kernel/freezer.c
> index dc520f01f99d..8f3d950c2a87 100644
> --- a/kernel/freezer.c
> +++ b/kernel/freezer.c
> @@ -11,6 +11,7 @@
>  #include <linux/syscalls.h>
>  #include <linux/freezer.h>
>  #include <linux/kthread.h>
> +#include <linux/mmu_context.h>
>  
>  /* total number of freezing conditions in effect */
>  atomic_t system_freezing_cnt = ATOMIC_INIT(0);
> @@ -146,9 +147,16 @@ bool freeze_task(struct task_struct *p)
>  void __thaw_task(struct task_struct *p)
>  {
>  	unsigned long flags;
> +	const struct cpumask *mask = task_cpu_possible_mask(p);
>  
>  	spin_lock_irqsave(&freezer_lock, flags);
> -	if (frozen(p))
> +	/*
> +	 * Wake up frozen tasks. On asymmetric systems where tasks cannot
> +	 * run on all CPUs, ttwu() may have deferred a wakeup generated
> +	 * before thaw_secondary_cpus() had completed so we generate
> +	 * additional wakeups here for tasks in the PF_FREEZER_SKIP state.
> +	 */
> +	if (frozen(p) || (frozen_or_skipped(p) && mask != cpu_possible_mask))
>  		wake_up_process(p);
>  	spin_unlock_irqrestore(&freezer_lock, flags);
>  }
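
(For reference, frozen_or_skipped() is added earlier in this series and
presumably just tests both freezer flags, along the lines of:

	static inline bool frozen_or_skipped(struct task_struct *p)
	{
		/* PF_FREEZER_SKIP covers tasks parked via freezer_do_not_count() */
		return p->flags & (PF_FROZEN | PF_FREEZER_SKIP);
	}

which is why the __thaw_task() check above also catches tasks whose
wakeup was deferred while they were merely marked as skipped, not frozen
outright.)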
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 42e2aecf087c..6cb9677d635a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3529,6 +3529,19 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>  	if (!(p->state & state))
>  		goto unlock;
>  
> +#ifdef CONFIG_FREEZER
> +	/*
> +	 * If we're going to wake up a thread which may be frozen, then
> +	 * we can only do so if we have an active CPU which is capable of
> +	 * running it. This may not be the case when resuming from suspend,
> +	 * as the secondary CPUs may not yet be back online. See __thaw_task()
> +	 * for the actual wakeup.
> +	 */
> +	if (unlikely(frozen_or_skipped(p)) &&
> +	    !cpumask_intersects(cpu_active_mask, task_cpu_possible_mask(p)))
> +		goto unlock;
> +#endif
> +
>  	trace_sched_waking(p);
>  
>  	/* We're going to change ->state: */

OK, I really hate this. This is slowing down the very hot wakeup path
for the silly freezer that *never* happens. Let me try and figure out if
there's another option.
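
One direction, purely as a sketch (the key name and hook points below are
hypothetical, not part of this patch), would be to hide the whole test
behind a static key that the freezer only enables while a freeze/thaw
cycle is actually in flight, so the common-case wakeup path keeps a
single patched-out branch:

	#include <linux/jump_label.h>
	#include <linux/cpumask.h>
	#include <linux/freezer.h>
	#include <linux/mmu_context.h>

	/* Hypothetical key, flipped by the freezer around suspend/resume. */
	static DEFINE_STATIC_KEY_FALSE(freezer_wakeup_deferral);

	static bool ttwu_defer_for_freezer(struct task_struct *p)
	{
		/* No-op branch while no freeze is in progress. */
		if (!static_branch_unlikely(&freezer_wakeup_deferral))
			return false;

		/* Same condition as the hunk above: no active CPU can run p. */
		return frozen_or_skipped(p) &&
		       !cpumask_intersects(cpu_active_mask,
					   task_cpu_possible_mask(p));
	}

The freezer core would then call static_branch_enable()/
static_branch_disable() around freeze_processes()/thaw_processes() (or
around secondary CPU hotplug), so the extra check only exists while it
can actually matter.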
